1. Kim DJ, Nam IC, Kim DR, Kim JJ, Hwang IK, Lee JS, Park SE, Kim H. Detection and position evaluation of chest percutaneous drainage catheter on chest radiographs using deep learning. PLoS One 2024; 19:e0305859. PMID: 39133733; PMCID: PMC11318879; DOI: 10.1371/journal.pone.0305859.
Abstract
PURPOSE This study aimed to develop a deep learning algorithm for automatically detecting chest percutaneous catheter drainage (PCD) and evaluating catheter position on chest radiographs. METHODS This retrospective study included 1,217 chest radiographs (properly positioned: 937; malpositioned: 280) from 960 patients who underwent chest PCD from October 2017 to February 2023. The tip location of the chest PCD catheter was annotated with bounding boxes and classified as properly positioned or malpositioned. The radiographs were randomly allocated into training and validation sets (total: 1,094 radiographs; properly positioned: 853; malpositioned: 241) and a test set (total: 123 radiographs; properly positioned: 84; malpositioned: 39). The selected AI model was applied to the test set to detect the catheter tip and to classify the catheter position as properly positioned or malpositioned. Its performance in detecting the catheter and assessing its position was evaluated per radiograph and per instance. The association between the position and function of the catheter during chest PCD was also evaluated. RESULTS Per radiograph, the selected model's accuracy was 0.88, with sensitivity and specificity of 0.86 and 0.92, respectively. Per instance, the model's mean Average Precision at an IoU threshold of 0.5 (mAP50) was 0.86, with precision and recall of 0.90 and 0.79, respectively. Regarding the association between catheter position and function during chest PCD, sensitivity and specificity were 0.93 and 0.95, respectively.
CONCLUSION The artificial intelligence model for automatic detection and evaluation of catheter position during chest PCD on chest radiographs demonstrated acceptable diagnostic performance and could assist radiologists and clinicians in the early detection of catheter malposition and malfunction.
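The per-radiograph accuracy, sensitivity, and specificity reported above follow directly from a 2x2 confusion matrix, with "positive" taken to mean a malpositioned catheter. A minimal sketch; the counts below are hypothetical illustrations, not the study's raw data:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity and specificity from a 2x2 confusion matrix.

    Here 'positive' means a malpositioned catheter, so sensitivity is the
    fraction of malpositioned cases the model flags.
    """
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Hypothetical counts for a 123-image test set (39 malpositioned, 84 proper):
m = classification_metrics(tp=34, fp=7, tn=77, fn=5)
print({k: round(v, 2) for k, v in m.items()})
```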
Affiliation(s)
- Duk Ju Kim
- Department of Radiology, Jeju National University School of Medicine, Jeju National University Hospital, Jeju, Republic of Korea
- In Chul Nam
- Department of Radiology, Jeju National University School of Medicine, Jeju National University Hospital, Jeju, Republic of Korea
- Doo Ri Kim
- Department of Radiology, Jeju National University School of Medicine, Jeju National University Hospital, Jeju, Republic of Korea
- Jeong Jae Kim
- Department of Radiology, Jeju National University School of Medicine, Jeju National University Hospital, Jeju, Republic of Korea
- Im-kyung Hwang
- Department of Radiology, Jeju National University School of Medicine, Jeju National University Hospital, Jeju, Republic of Korea
- Jeong Sub Lee
- Department of Radiology, Jeju National University School of Medicine, Jeju National University Hospital, Jeju, Republic of Korea
- Sung Eun Park
- Department of Radiology, Gyeongsang National University School of Medicine and Gyeongsang National University Changwon Hospital, Changwon, Republic of Korea
- Hyeonwoo Kim
- Upstage AI, Yongin-si, Gyeonggi-do, Republic of Korea
2. Rueckel J, Huemmer C, Shahidi C, Buizza G, Hoppe BF, Liebig T, Ricke J, Rudolph J, Sabel BO. Artificial Intelligence to Assess Tracheal Tubes and Central Venous Catheters in Chest Radiographs Using an Algorithmic Approach With Adjustable Positioning Definitions. Invest Radiol 2024; 59:306-313. PMID: 37682731; DOI: 10.1097/rli.0000000000001018.
Abstract
PURPOSE To develop and validate an artificial intelligence algorithm for the positioning assessment of tracheal tubes (TTs) and central venous catheters (CVCs) in supine chest radiographs (SCXRs), using an algorithmic approach that allows for adjustable definitions of intended device positioning. MATERIALS AND METHODS Positioning quality of CVCs and TTs is evaluated by spatially correlating the respective tip positions with anatomical structures. For CVC analysis, a configurable region of interest approximating the expected region of well-positioned CVC tips is derived from segmentations of anatomical landmarks. CVC/TT information is estimated by a new multitask neural network architecture that jointly performs type/existence classification, course segmentation, and tip detection. Validation data consisted of 589 SCXRs radiologically annotated for inserted TTs/CVCs, including an expert's categorical positioning assessment (reading 1). In-image positions of algorithm-detected TT/CVC tips could be corrected using a validation software tool (reading 2), which allowed localization accuracy to be quantified. Algorithmic detection of images with misplaced devices (reading 1 as reference standard) was quantified by receiver operating characteristic (ROC) analysis. RESULTS Supine chest radiographs were correctly classified according to inserted TTs/CVCs in 100%/98% of cases, with high accuracy in spatially localizing the device tips: corrections of less than 3 mm in >86% (TTs) and 77% (CVCs) of cases. Radiographs with malpositioned devices were detected with areas under the curve of >0.98 (TTs), >0.96 (CVCs with accidental vessel turnover), and >0.93 (when suboptimal CVC insertion length was also considered). The ROC limitations regarding CVC assessment were mainly caused by limitations of the applied CVC positioning definitions (region of interest derived from anatomical landmarks), not by algorithmic spatial detection inaccuracies. CONCLUSIONS TT and CVC tips were accurately localized in SCXRs by the presented algorithms, but triaging applications for CVC positioning assessment still suffer from the vague definition of optimal CVC positioning. Our algorithm, however, allows these criteria to be adjusted, theoretically enabling them to meet user-specific or patient-subgroup requirements. Besides CVC tip analysis, future work should also include specific course analysis for detecting accidental vessel turnover.
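The areas under the curve quoted above come from ROC analysis of the algorithm's per-image misplacement decisions. As a reminder of what that number measures, AUC equals the probability that a randomly chosen malpositioned case scores higher than a randomly chosen well-positioned one (the Mann-Whitney rank statistic); a minimal sketch with made-up scores:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC as the probability that a positive (malpositioned) case scores
    higher than a negative one; ties count as 0.5 (Mann-Whitney U / n_p*n_n)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy malposition scores for three misplaced and three well-placed devices:
print(roc_auc([0.9, 0.8, 0.7], [0.3, 0.8, 0.1]))
```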
Affiliation(s)
- Johannes Rueckel
- From the Department of Radiology, University Hospital, LMU Munich, Munich, Germany (J.Rueckel, C.S., B.F.H., J.Ricke, J.Rudolph, B.O.S.); Institute of Neuroradiology, University Hospital, LMU Munich, Munich, Germany (J.Rueckel, T.L.); and XP Technology and Innovation, Siemens Healthcare GmbH, Forchheim, Germany (C.H., G.B.)
3. Wang Y, Lam HK, Xu Y, Yin F, Qian K. Multi-task learning framework to predict the status of central venous catheter based on radiographs. Artif Intell Med 2023; 146:102721. PMID: 38042594; DOI: 10.1016/j.artmed.2023.102721.
Abstract
Hospital patients can have catheters and lines inserted during the course of their admission to deliver medicines, the central venous catheter (CVC) in particular. However, malposition of the CVC can lead to many complications, even death. Clinicians routinely check the status of the catheter on X-ray images to avoid these issues. To reduce the workload of clinicians and improve the efficiency of CVC status detection, a multi-task learning framework for catheter status classification based on a convolutional neural network (CNN) is proposed. The framework contains three significant components: a modified HRNet, multi-task supervision comprising segmentation supervision and heatmap regression supervision, and a classification branch. The modified HRNet maintains high-resolution features from start to end, ensuring the generation of high-quality auxiliary information for classification. The multi-task supervision helps alleviate interference from other line-like structures, such as other tubes and anatomical structures visible in the X-ray image. Furthermore, during inference this module also serves as an interpretation interface, showing where the framework focuses its attention. Finally, the classification branch predicts the status class of the catheter. A public CVC dataset is used to evaluate the proposed method, which achieves 0.823 AUC (area under the ROC curve) and 82.6% accuracy on the test dataset. Compared with two state-of-the-art methods (the ATCM and EDMC methods), the proposed method performs best.
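The multi-task supervision described above pairs auxiliary dense objectives with the main classification objective. A minimal sketch of such a combined loss, using a soft Dice term for the segmentation supervision and binary cross-entropy for the classification branch; the loss terms and weights here are illustrative assumptions, not the paper's exact HRNet losses:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """1 - Dice overlap between a predicted probability map and a binary mask."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def bce_loss(p, y, eps=1e-7):
    """Binary cross-entropy for the status-classification branch."""
    p = np.clip(p, eps, 1.0 - eps)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

def multitask_loss(seg_pred, seg_mask, cls_pred, cls_label, w_seg=1.0, w_cls=1.0):
    # Weighted sum of the auxiliary segmentation supervision and the main
    # classification objective; the weights are hypothetical hyperparameters.
    return w_seg * soft_dice_loss(seg_pred, seg_mask) + w_cls * bce_loss(cls_pred, cls_label)

# Toy example: a perfect segmentation and a confident correct classification
# give a near-zero combined loss.
mask = np.array([[0.0, 1.0], [1.0, 0.0]])
print(multitask_loss(mask, mask, np.array([0.999]), np.array([1.0])))
```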
Affiliation(s)
- Yuhan Wang
- Department of Engineering, King's College London, Strand, London, WC2R 2LS, United Kingdom
- Hak Keung Lam
- Department of Engineering, King's College London, Strand, London, WC2R 2LS, United Kingdom
- Yujia Xu
- Department of Engineering, King's College London, Strand, London, WC2R 2LS, United Kingdom
- Faliang Yin
- Department of Engineering, King's College London, Strand, London, WC2R 2LS, United Kingdom
- Kun Qian
- Center for the Developing Brain, School of Biomedical Engineering and Imaging Sciences, King's College London, St Thomas' Campus, St Thomas' Hospital, Westminster Bridge Road, London, SE1 7EH, United Kingdom
4. Brown MS, Wong KP, Shrestha L, Wahi-Anwar M, Daly M, Foster G, Abtin F, Ruchalski KL, Goldin JG, Enzmann D. Automated Endotracheal Tube Placement Check Using Semantically Embedded Deep Neural Networks. Acad Radiol 2023; 30:412-420. PMID: 35644754; DOI: 10.1016/j.acra.2022.04.022.
Abstract
RATIONALE AND OBJECTIVES To develop an artificial intelligence (AI) system that assists in checking endotracheal tube (ETT) placement on chest X-rays (CXRs), and to evaluate whether it can move into clinical validation as a quality improvement tool. MATERIALS AND METHODS A retrospective data set of 2000 de-identified images from intensive care unit patients was split into 1488 for training and 512 for testing. The AI was developed to automatically identify the ETT, trachea, and carina using semantically embedded neural networks, which combine a declarative knowledge base with deep neural networks. To check the ETT tip placement, a "safe zone" was computed as the region inside the trachea and 3-7 cm above the carina. Two AI outputs were evaluated: (1) an ETT overlay and (2) ETT misplacement alert messages. Clinically relevant performance metrics were compared against prespecified thresholds of >85% overlay accuracy, positive predictive value (PPV) > 30%, and negative predictive value (NPV) > 95% for alerts to move into clinical validation. RESULTS An ETT was present in 285 of 512 test cases. The AI detected 95% (271/285) of ETTs, 233 (86%) of these with accurate tip localization. The system correctly generated no ETT overlay in 221/227 CXRs where the tube was absent, for an overall overlay accuracy of 89% (454/512). The alert messages indicating that the ETT was either misplaced or not detected had a PPV of 83% (265/320) and an NPV of 98% (188/192). CONCLUSION The chest X-ray AI met the prespecified performance thresholds to move into clinical validation.
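The "safe zone" rule above (inside the trachea, 3-7 cm above the carina) is straightforward to express once a trachea mask, the carina position, and the pixel-to-centimetre scale are available. A minimal sketch; the coordinate convention and the px_per_cm value are assumptions for illustration, not details from the paper:

```python
import numpy as np

def ett_in_safe_zone(tip_xy, carina_xy, trachea_mask, px_per_cm,
                     low_cm=3.0, high_cm=7.0):
    """Check an ETT tip against the 'safe zone' described above: inside the
    trachea and 3-7 cm above the carina. Coordinates are (col, row) pixels
    with the row index increasing downward."""
    x, y = tip_xy
    if not trachea_mask[int(y), int(x)]:        # tip must lie inside the trachea
        return False
    height_cm = (carina_xy[1] - y) / px_per_cm  # vertical distance above the carina
    return low_cm <= height_cm <= high_cm

# Toy trachea mask and geometry (px_per_cm is hypothetical):
trachea = np.zeros((300, 200), dtype=bool)
trachea[50:250, 90:110] = True                  # a vertical tracheal column
print(ett_in_safe_zone((100, 150), (100, 250), trachea, px_per_cm=20))  # tip 5 cm above carina
```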
Affiliation(s)
- Matthew S Brown
- Department of Radiological Sciences, Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine at UCLA, 924 Westwood Blvd., Suite 615, Los Angeles, CA 90024
- Koon-Pong Wong
- Department of Radiological Sciences, Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine at UCLA, 924 Westwood Blvd., Suite 615, Los Angeles, CA 90024
- Liza Shrestha
- Department of Radiological Sciences, Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine at UCLA, 924 Westwood Blvd., Suite 615, Los Angeles, CA 90024
- Muhammad Wahi-Anwar
- Department of Radiological Sciences, Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine at UCLA, 924 Westwood Blvd., Suite 615, Los Angeles, CA 90024
- Morgan Daly
- Department of Radiological Sciences, Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine at UCLA, 924 Westwood Blvd., Suite 615, Los Angeles, CA 90024
- George Foster
- Department of Radiological Sciences, Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine at UCLA, 924 Westwood Blvd., Suite 615, Los Angeles, CA 90024
- Fereidoun Abtin
- Department of Radiological Sciences, Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine at UCLA, 924 Westwood Blvd., Suite 615, Los Angeles, CA 90024
- Kathleen L Ruchalski
- Department of Radiological Sciences, Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine at UCLA, 924 Westwood Blvd., Suite 615, Los Angeles, CA 90024
- Jonathan G Goldin
- Department of Radiological Sciences, Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine at UCLA, 924 Westwood Blvd., Suite 615, Los Angeles, CA 90024
- Dieter Enzmann
- Department of Radiological Sciences, Center for Computer Vision and Imaging Biomarkers, David Geffen School of Medicine at UCLA, 924 Westwood Blvd., Suite 615, Los Angeles, CA 90024
5. Position Classification of the Endotracheal Tube with Automatic Segmentation of the Trachea and the Tube on Plain Chest Radiography Using Deep Convolutional Neural Network. J Pers Med 2022; 12:1363. PMID: 36143148; PMCID: PMC9503144; DOI: 10.3390/jpm12091363.
Abstract
Background: This study aimed to develop an algorithm for multilabel classification according to the distance from carina to endotracheal tube (ETT) tip (absence, shallow > 70 mm, 30 mm ≤ proper ≤ 70 mm, and deep position < 30 mm) with the application of automatic segmentation of the trachea and the ETT on chest radiographs using deep convolutional neural network (CNN). Methods: This study was a retrospective study using plain chest radiographs. We segmented the trachea and the ETT on images and labeled the classification of the ETT position. We proposed models for the classification of the ETT position using EfficientNet B0 with the application of automatic segmentation using Mask R-CNN and ResNet50. Primary outcomes were favorable performance for automatic segmentation and four-label classification through five-fold validation with segmented images and a test with non-segmented images. Results: Of 1985 images, 596 images were manually segmented and consisted of 298 absence, 97 shallow, 100 proper, and 101 deep images according to the ETT position. In five-fold validations with segmented images, Dice coefficients [mean (SD)] between segmented and predicted masks were 0.841 (0.063) for the trachea and 0.893 (0.078) for the ETT, and the accuracy for four-label classification was 0.945 (0.017). In the test for classification with 1389 non-segmented images, overall values were 0.922 for accuracy, 0.843 for precision, 0.843 for sensitivity, 0.922 for specificity, and 0.843 for F1-score. Conclusions: Automatic segmentation of the ETT and trachea images and classification of the ETT position using deep CNN with plain chest radiographs could achieve good performance and improve the physician’s performance in deciding the appropriateness of ETT depth.
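The four-label scheme above is a thresholding of the ETT-tip-to-carina distance. A sketch of the mapping; the boundary handling follows the abstract's definitions (30 mm ≤ proper ≤ 70 mm), while the use of None to encode an absent tube is an assumption for illustration:

```python
def ett_position_label(dist_mm):
    """Map ETT-tip-to-carina distance (mm) to the four labels used above:
    absence, shallow (> 70 mm), proper (30-70 mm), and deep (< 30 mm)."""
    if dist_mm is None:          # no tube detected
        return "absence"
    if dist_mm > 70:
        return "shallow"
    if dist_mm >= 30:
        return "proper"
    return "deep"

print([ett_position_label(d) for d in (None, 85, 50, 12)])
```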
6. Detecting Endotracheal Tube and Carina on Portable Supine Chest Radiographs Using One-Stage Detector with a Coarse-to-Fine Attention. Diagnostics (Basel) 2022; 12:1913. PMID: 36010263; PMCID: PMC9406505; DOI: 10.3390/diagnostics12081913.
Abstract
In intensive care units (ICUs), the position of the endotracheal tube (ETT) should be checked after endotracheal intubation to avoid complications. Malposition can be detected from the distance between the ETT tip and the carina (the ETT-carina distance). However, automated detection has limited performance due to two major problems: occlusion by external equipment, and variation in patient posture and in the machines used to take chest radiographs. While previous studies addressed these problems, they always required manual intervention. The purpose of this paper is therefore to locate the ETT tip and the carina more accurately, detecting malposition without manual intervention. The proposed architecture is composed of FCOS (Fully Convolutional One-Stage object detection), an attention mechanism named Coarse-to-Fine Attention (CTFA), and a segmentation branch. Moreover, a post-processing algorithm is adopted to select the final locations of the ETT tip and the carina. Three metrics were used to evaluate the performance of the proposed method. With the dataset provided by National Cheng Kung University Hospital, the accuracy of malposition detection achieves 88.82%, and the ETT-carina distance errors are less than 5.333 ± 6.240 mm.
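The ETT-carina distance underlying the malposition check above is a pixel-space measurement between two detected points, converted to physical units. A minimal sketch; the pixel-spacing value is hypothetical and would come from the radiograph's DICOM header in practice:

```python
import math

def ett_carina_distance_mm(tip_px, carina_px, pixel_spacing_mm):
    """Euclidean distance between the detected ETT tip and carina,
    converted from pixels to millimetres via the detector pixel size."""
    dx = (tip_px[0] - carina_px[0]) * pixel_spacing_mm
    dy = (tip_px[1] - carina_px[1]) * pixel_spacing_mm
    return math.hypot(dx, dy)

# Toy detections: a 120-pixel vertical offset at 0.14 mm per pixel.
print(ett_carina_distance_mm((512, 300), (512, 420), 0.14))
```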
7. Schultheis WG, Lakhani P. Using Deep Learning Segmentation for Endotracheal Tube Position Assessment. J Thorac Imaging 2022; 37:125-131. PMID: 34292275; DOI: 10.1097/rti.0000000000000608.
Abstract
PURPOSE To determine the efficacy of using deep learning segmentation for endotracheal tube (ETT) position assessment on frontal chest X-rays (CXRs). MATERIALS AND METHODS This retrospective study involved 936 de-identified frontal CXRs divided into a training set (676), a validation set (50), and two test sets (210 total): an "internal test" set of 100 CXRs from the same institution and an "external test" set of 110 CXRs from a different institution. Each image was labeled by 2 radiologists with the ETT-carina distance. On the training images, 1 radiologist manually segmented the ETT tip and the inferior wall of the carina. A U-Net architecture was constructed to label each pixel of the CXR as belonging to the ETT, the carina, or neither. This labeling allowed the ETT-carina distance to be compared with the average of the 2 radiologists' measurements. The intraclass correlation coefficients (ICCs) and the means and SDs of the absolute differences between the U-Net and the radiologists were calculated. RESULTS The mean absolute differences between the U-Net and the average of the radiologists' measurements were 0.60±0.61 and 0.48±0.47 cm on the internal and external datasets, respectively. The ICCs were 0.87 (0.82, 0.91) and 0.92 (0.88, 0.94) on the internal and external datasets, respectively. CONCLUSION The U-Net model had excellent reliability and performance similar to radiologists in assessing the ETT-carina distance.
Affiliation(s)
- Paras Lakhani
- Sidney Kimmel Medical College, Thomas Jefferson University
- Department of Radiology, Thomas Jefferson University Hospital, Sidney Kimmel Jefferson Medical College, Philadelphia, PA
8. Current and emerging artificial intelligence applications in chest imaging: a pediatric perspective. Pediatr Radiol 2022; 52:2120-2130. PMID: 34471961; PMCID: PMC8409695; DOI: 10.1007/s00247-021-05146-0.
Abstract
Artificial intelligence (AI) applications for chest radiography and chest CT are among the most developed applications in radiology. More than 40 certified AI products are available for chest radiography or chest CT. These AI products cover a wide range of abnormalities, including pneumonia, pneumothorax and lung cancer. Most applications are aimed at detecting disease, complemented by products that characterize or quantify tissue. At present, none of the thoracic AI products is specifically designed for the pediatric population. However, some products developed to detect tuberculosis in adults are also applicable to children. Software is under development to detect early changes of cystic fibrosis on chest CT, which could be an interesting application for pediatric radiology. In this review, we give an overview of current AI products in thoracic radiology and cover recent literature about AI in chest radiography, with a focus on pediatric radiology. We also discuss possible pediatric applications.
9. Zhou YJ, Xie XL, Zhou XH, Liu SQ, Bian GB, Hou ZG. A Real-Time Multifunctional Framework for Guidewire Morphological and Positional Analysis in Interventional X-Ray Fluoroscopy. IEEE Trans Cogn Dev Syst 2021. DOI: 10.1109/tcds.2020.3023952.
10. Henderson RDE, Yi X, Adams SJ, Babyn P. Automatic Detection and Classification of Multiple Catheters in Neonatal Radiographs with Deep Learning. J Digit Imaging 2021; 34:888-897. PMID: 34173089; DOI: 10.1007/s10278-021-00473-y.
Abstract
We develop and evaluate a deep learning algorithm to classify multiple catheters on neonatal chest and abdominal radiographs. A convolutional neural network (CNN) was trained using a dataset of 777 neonatal chest and abdominal radiographs, with a split of 81%-9%-10% for training-validation-testing, respectively. We employed ResNet-50 (a CNN), pre-trained on ImageNet. Ground truth labelling was limited to tagging each image to indicate the presence or absence of endotracheal tubes (ETTs), nasogastric tubes (NGTs), and umbilical arterial and venous catheters (UACs, UVCs). The dataset included 561 images containing two or more catheters, 167 images with only one, and 49 with none. Performance was measured with average precision (AP), calculated from the area under the precision-recall curve. On our test data, the algorithm achieved an overall AP (95% confidence interval) of 0.977 (0.679-0.999) for NGTs, 0.989 (0.751-1.000) for ETTs, 0.979 (0.873-0.997) for UACs, and 0.937 (0.785-0.984) for UVCs. Performance was similar for the set of 58 test images consisting of two or more catheters, with an AP of 0.975 (0.255-1.000) for NGTs, 0.997 (0.009-1.000) for ETTs, 0.981 (0.797-0.998) for UACs, and 0.937 (0.689-0.990) for UVCs. Our network thus achieves strong performance in the simultaneous detection of these four catheter types. Radiologists may use such an algorithm as a time-saving mechanism to automate reporting of catheters on radiographs.
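The average precision (AP) figures above summarize the area under the precision-recall curve for each catheter type. For a ranked list of per-image scores, a common AP estimate sums precision-at-k at each true positive and divides by the number of positives; a minimal sketch with toy scores:

```python
def average_precision(scores, labels):
    """AP for a ranked list: sum of precision-at-k at each positive,
    divided by the number of positives (area under the P-R curve)."""
    ranked = sorted(zip(scores, labels), key=lambda t: -t[0])
    hits, ap = 0, 0.0
    for k, (_, y) in enumerate(ranked, start=1):
        if y:
            hits += 1
            ap += hits / k
    return ap / max(hits, 1)

# Toy scores for four images, three of which truly contain the catheter:
print(average_precision([0.9, 0.8, 0.6, 0.4], [1, 0, 1, 1]))
```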
Affiliation(s)
- Robert D E Henderson
- Department of Medical Imaging, University of Saskatchewan, 103 Hospital Drive, Room 1566, Saskatoon, SK, S7N 0W8, Canada
- Xin Yi
- Department of Medical Imaging, University of Saskatchewan, 103 Hospital Drive, Room 1566, Saskatoon, SK, S7N 0W8, Canada
- Scott J Adams
- Department of Medical Imaging, University of Saskatchewan, 103 Hospital Drive, Room 1566, Saskatoon, SK, S7N 0W8, Canada
- Paul Babyn
- Department of Medical Imaging, University of Saskatchewan, 103 Hospital Drive, Room 1566, Saskatoon, SK, S7N 0W8, Canada
11. Synthesize and Segment: Towards Improved Catheter Segmentation via Adversarial Augmentation. Appl Sci (Basel) 2021; 11:1638. DOI: 10.3390/app11041638.
Abstract
Automatic catheter and guidewire segmentation plays an important role in robot-assisted interventions guided by fluoroscopy. Existing learning-based methods for segmentation or tracking are often limited by the scarcity of annotated samples and the difficulty of data collection. For deep learning methods, the demand for large amounts of labeled data further impedes successful application. We propose a synthesize-and-segment approach, with plug-in possibilities for the segmentation network, to address this. We show that an adversarially learned image-to-image translation network can synthesize catheters in X-ray fluoroscopy, enabling data augmentation that alleviates the low-data regime. To make the synthesized images realistic, we train the translation network with a perceptual loss coupled with similarity constraints. Existing segmentation networks then learn accurate localization of catheters in a semi-supervised setting using the generated images. Empirical results on collected medical datasets show the value of our approach, with significant improvements over existing translation baselines.
12. Lakhani P, Flanders A, Gorniak R. Endotracheal Tube Position Assessment on Chest Radiographs Using Deep Learning. Radiol Artif Intell 2021; 3:e200026. PMID: 33937852; PMCID: PMC8082365; DOI: 10.1148/ryai.2020200026.
Abstract
PURPOSE To determine the efficacy of deep learning in assessing endotracheal tube (ETT) position on radiographs. MATERIALS AND METHODS In this retrospective study, 22 960 de-identified frontal chest radiographs from 11 153 patients (average age, 60.2 years ± 19.9 [standard deviation]; 55.6% men) between 2010 and 2018 containing an ETT were placed into 12 categories: bronchial insertion, distance from the carina at 1.0-cm intervals (0.0-0.9 cm, 1.0-1.9 cm, etc.), and greater than 10 cm. Images were split into training (80%, 18 368 images), validation (10%, 2296 images), and internal test (10%, 2296 images) sets, the latter derived from the same institution as the training data. One hundred external test radiographs were also obtained from a different hospital. The Inception V3 deep neural network was used to predict the ETT-carina distance. ETT-carina distances and intraclass correlation coefficients (ICCs) for the radiologists and the artificial intelligence (AI) system were calculated on a subset of 100 random internal and 100 external test images. Sensitivity and specificity were calculated for low and high ETT position thresholds. RESULTS On the internal and external test images, respectively, the ICCs between the AI and the radiologists were 0.84 (95% CI: 0.78, 0.92) and 0.89 (95% CI: 0.77, 0.94); the ICCs between the radiologists were 0.93 (95% CI: 0.90, 0.95) and 0.84 (95% CI: 0.71, 0.90). The AI model was 93.9% sensitive (95% CI: 90.0, 96.7) and 97.7% specific (95% CI: 96.9, 98.3) for detecting an ETT-carina distance of less than 1 cm. CONCLUSION Deep learning predicted the ETT-carina distance within 1 cm in most cases and showed excellent interrater agreement compared with radiologists. The model was sensitive and specific in detecting low ETT positions.
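The 12-category scheme above (bronchial insertion, ten 1.0-cm distance bins, and greater than 10 cm) can be expressed as a small mapping function. A sketch; treating distances of exactly 10 cm as the ">10 cm" class is an assumption, since the abstract only says "greater than 10 cm":

```python
def ett_distance_category(dist_cm, bronchial=False):
    """Map an ETT-carina distance to the 12 classes described above:
    bronchial insertion, 1.0-cm bins from 0.0-0.9 up to 9.0-9.9 cm,
    and greater than 10 cm."""
    if bronchial:
        return "bronchial"
    if dist_cm >= 10.0:
        return ">10 cm"
    lo = int(dist_cm)            # floor to the lower 1-cm bin edge
    return f"{lo}.0-{lo}.9 cm"

print([ett_distance_category(d) for d in (0.4, 3.7, 12.0)])
```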
13. Yu D, Zhang K, Huang L, Zhao B, Zhang X, Guo X, Li M, Gu Z, Fu G, Hu M, Ping Y, Sheng Y, Liu Z, Hu X, Zhao R. Detection of peripherally inserted central catheter (PICC) in chest X-ray images: A multi-task deep learning model. Comput Methods Programs Biomed 2020; 197:105674. PMID: 32738678; DOI: 10.1016/j.cmpb.2020.105674.
Abstract
BACKGROUND AND OBJECTIVE The peripherally inserted central catheter (PICC) is a drug delivery route that has been widely adopted in clinical practice. However, long-term retention and improper patient movements may cause severe complications, such as drift and prolapse of the catheter. Clinically, postoperative care of the PICC is mainly performed by nurses, who cannot always recognize the position of the PICC on chest X-ray images as soon as complications happen, which may lead to improper treatment. It is therefore necessary to identify the position of the PICC catheter as soon as these complications occur. Here we propose a novel multi-task deep learning framework to detect the PICC automatically in X-ray images, which could help nurses with this task. METHODS We collected 348 chest X-ray images from 326 patients with a visible PICC. We then propose a multi-task deep learning framework that performs line segmentation and tip detection of PICC catheters simultaneously. The model is composed of a feature extraction backbone and three routes: an up-sampling route for segmentation, a region proposal network (RPN) route, and an RoI pooling route for detection. We further compared the effectiveness of our model with previously proposed models. RESULTS In the catheter segmentation task, 300 X-ray images were used to train the model and 48 were used for testing. In the tip detection task, 154 X-ray images were used for retraining and 20 for testing. Our model achieved generally better results than several popular previously proposed deep learning models. CONCLUSIONS We propose a multi-task deep learning model that segments the catheter and detects the tip of the PICC simultaneously in chest X-ray images. This model could help nurses recognize the position of the PICC and therefore handle potential complications properly.
Affiliation(s)
- Dingding Yu
- School of Mathematical Sciences, Zhejiang University, Hangzhou, Zhejiang Province, China, 310027
- Kaijie Zhang
- Department of Vascular Surgery, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang Province, China, 310009; Key Laboratory of Cardiovascular Intervention and Regenerative Medicine of Zhejiang Province, Sir Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang Province, China, 310016
- Lingyan Huang
- Department of Radiation Oncology, Zhejiang Quhua Hospital, Quzhou, Zhejiang Province, China, 324000
- Bonan Zhao
- School of Mathematical Sciences, Zhejiang University, Hangzhou, Zhejiang Province, China, 310027
- Xiaoshan Zhang
- School of Mathematical Sciences, Zhejiang University, Hangzhou, Zhejiang Province, China, 310027
- Xin Guo
- Department of Vascular Surgery, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang Province, China, 310009; Bone Marrow Transplantation Center, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang Province, China, 310000
- Miaomiao Li
- Department of Vascular Surgery, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang Province, China, 310009; Department of Reproductive Endocrinology, Women's Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang Province, China, 310019
- Zheng Gu
- Department of Vascular Surgery, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang Province, China, 310009
- Guosheng Fu
- Key Laboratory of Cardiovascular Intervention and Regenerative Medicine of Zhejiang Province, Sir Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang Province, China, 310016
- Minchun Hu
- Department of Radiation Oncology, Zhejiang Quhua Hospital, Quzhou, Zhejiang Province, China, 324000
- Yan Ping
- Department of Radiation Oncology, Zhejiang Quhua Hospital, Quzhou, Zhejiang Province, China, 324000
- Ye Sheng
- Department of Nursing, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang Province, China, 310009
- Zhenjie Liu
- Department of Vascular Surgery, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang Province, China, 310009
- Xianliang Hu
- School of Mathematical Sciences, Zhejiang University, Hangzhou, Zhejiang Province, China, 310027
- Ruiyi Zhao
- Department of Nursing, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang Province, China, 310009
14
Zhou YJ, Xie XL, Zhou XH, Liu SQ, Bian GB, Hou ZG. Pyramid attention recurrent networks for real-time guidewire segmentation and tracking in intraoperative X-ray fluoroscopy. Comput Med Imaging Graph 2020; 83:101734. [PMID: 32599518 DOI: 10.1016/j.compmedimag.2020.101734] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2020] [Revised: 05/09/2020] [Accepted: 05/16/2020] [Indexed: 11/29/2022]
Abstract
In endovascular and cardiovascular surgery, real-time, accurate segmentation and tracking of interventional instruments can help reduce radiation exposure, contrast agent use, and processing time. The task is challenging, however, because guidewires are elongated, deformable structures with low contrast in noisy X-ray fluoroscopy. To address these issues, a novel efficient network architecture, termed pyramid attention recurrent network (PAR-Net), is proposed for real-time guidewire segmentation and tracking. The proposed PAR-Net contains three major modules: a pyramid attention module, a recurrent residual module, and a pre-trained MobileNetV2 encoder. In addition, a hybrid loss function combining a reinforced focal loss and a Dice loss is proposed to better address class imbalance and misclassified examples. Quantitative and qualitative evaluations on clinical intraoperative images demonstrate that the proposed approach significantly outperforms simpler baselines as well as the best previously published result for this task, achieving state-of-the-art performance.
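The hybrid loss pairs a focal term, which down-weights easy pixels, with a Dice term, which targets region overlap directly. A plain-Python sketch follows; the equal weighting `w` and this standard focal form are assumptions for illustration, since the paper's "reinforced" focal loss may differ from the textbook formulation.

```python
import math

def focal_loss(probs, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss averaged over pixels: down-weights easy examples."""
    total = 0.0
    for p, t in zip(probs, targets):
        pt = p if t == 1 else 1.0 - p          # prob assigned to the true class
        total += -alpha * (1.0 - pt) ** gamma * math.log(max(pt, 1e-7))
    return total / len(probs)

def dice_loss(probs, targets, eps=1e-7):
    """Soft Dice loss: 1 - 2*overlap / (sum of masses)."""
    inter = sum(p * t for p, t in zip(probs, targets))
    return 1.0 - (2.0 * inter + eps) / (sum(probs) + sum(targets) + eps)

def hybrid_loss(probs, targets, w=0.5):
    """Weighted combination; w=0.5 is an assumed, not published, weight."""
    return w * focal_loss(probs, targets) + (1 - w) * dice_loss(probs, targets)

# Confident correct predictions yield a lower hybrid loss than hedged ones.
good = hybrid_loss([0.9, 0.9, 0.1, 0.1], [1, 1, 0, 0])
bad  = hybrid_loss([0.5, 0.5, 0.5, 0.5], [1, 1, 0, 0])
print(good < bad)  # -> True
```

The combination is useful for thin structures like guidewires: the Dice term keeps the tiny foreground from being ignored, while the focal term concentrates gradient on hard, misclassified pixels.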
Affiliation(s)
- Yan-Jie Zhou
- State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Xiao-Liang Xie
- State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Xiao-Hu Zhou
- State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Shi-Qi Liu
- State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Gui-Bin Bian
- State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Zeng-Guang Hou
- State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
15
Yi X, Adams S, Babyn P, Elnajmi A. Automatic Catheter and Tube Detection in Pediatric X-ray Images Using a Scale-Recurrent Network and Synthetic Data. J Digit Imaging 2020; 33:181-190. [PMID: 30972586 PMCID: PMC7064683 DOI: 10.1007/s10278-019-00201-7] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022] Open
Abstract
Catheters are commonly inserted life-supporting devices. Because serious complications can arise from malpositioned catheters, X-ray images are used to assess catheter position immediately after placement. Previous computer vision approaches for detecting catheters on X-ray images were either rule-based or capable of processing only a limited number or type of catheters projecting over the chest. With the resurgence of deep learning, supervised training approaches are beginning to show promising results. However, dense annotation maps are required, and the work of a human annotator is difficult to scale. In this work, we propose an automatic approach for the detection of catheters and tubes on pediatric X-ray images. We propose a simple way of synthesizing catheters on X-ray images to generate a training dataset, exploiting the fact that catheters are essentially tubular structures with various cross-sectional profiles. Further, we develop a UNet-style segmentation network with a recurrent module that can process inputs at multiple scales and iteratively refine the detection result. Trained on adult chest X-rays, the proposed network exhibits promising detection results on pediatric chest/abdomen X-rays in terms of both precision and recall, with Fβ = 0.8. The approach described in this work may contribute to the development of clinical systems that detect and assess catheter placement on X-ray images, providing a way to triage and prioritize X-rays with potentially malpositioned catheters for a radiologist's urgent review and helping to automate radiology reporting.
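The synthesis idea, exploiting that catheters are tubular structures with a cross-sectional intensity profile, can be sketched in pure Python. The half-cosine profile, the contrast value, and the helper names below are assumptions for illustration, not the paper's exact recipe, which draws varied curved catheters with several profile shapes.

```python
# Sketch of synthetic-catheter generation: overlay a tube with a smooth
# cross-sectional profile onto a background image, producing both the
# augmented image and a free pixel-level training label (the mask).
import math

def tube_profile(dist, radius=2.0, contrast=0.3):
    """Intensity added at `dist` pixels from the tube centreline."""
    if dist > radius:
        return 0.0
    # Half-cosine cross-section: brightest at the centreline, fading outward.
    return contrast * math.cos(0.5 * math.pi * dist / radius)

def add_vertical_tube(image, col, radius=2.0):
    """Superimpose a vertical tube at column `col`; return (image, mask)."""
    out, mask = [], []
    for row in image:
        new_row, mask_row = [], []
        for c, px in enumerate(row):
            boost = tube_profile(abs(c - col), radius)
            new_row.append(min(1.0, px + boost))   # clamp to valid intensity
            mask_row.append(1 if boost > 0 else 0)  # label comes for free
        out.append(new_row)
        mask.append(mask_row)
    return out, mask

bg = [[0.2] * 9 for _ in range(4)]           # flat 'radiograph' background
synth, mask = add_vertical_tube(bg, col=4)
print(mask[0])  # 1s mark the synthetic catheter pixels around column 4
```

Because the label mask is generated alongside the image, no human annotation is needed, which is exactly what makes the approach scale.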
Affiliation(s)
- X Yi
- College of Medicine, University of Saskatchewan, Saskatoon, SK, Canada
- Scott Adams
- College of Medicine, University of Saskatchewan, Saskatoon, SK, Canada
- Paul Babyn
- College of Medicine, University of Saskatchewan, Saskatoon, SK, Canada
- Abdul Elnajmi
- College of Medicine, University of Saskatchewan, Saskatoon, SK, Canada
16
Yi X, Adams SJ, Henderson RDE, Babyn P. Computer-aided Assessment of Catheters and Tubes on Radiographs: How Good Is Artificial Intelligence for Assessment? Radiol Artif Intell 2020; 2:e190082. [PMID: 33937813 DOI: 10.1148/ryai.2020190082] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2019] [Revised: 10/11/2019] [Accepted: 10/31/2019] [Indexed: 12/23/2022]
Abstract
Catheters are the second most common abnormal finding on radiographs. Catheter position must be assessed on all radiographs because serious complications can arise if catheters are malpositioned. However, owing to the large number of radiographs obtained each day, there can be substantial delays between the time a radiograph is obtained and when it is interpreted by a radiologist. Computer-aided approaches hold the potential to help prioritize radiographs with potentially malpositioned catheters for interpretation and to automatically insert text indicating catheter placement into radiology reports, thereby improving radiologists' efficiency. After 50 years of research in computer-aided diagnosis, there has been little study in this area. With the development of deep learning approaches, the problem of catheter assessment is far more tractable. This review provides an overview of current algorithms and identifies key challenges in building a reliable computer-aided diagnosis system for the assessment of catheters on radiographs, and may serve to further the development of machine learning approaches for this important use case. Supplemental material is available for this article. © RSNA, 2020.
Affiliation(s)
- Xin Yi
- Department of Medical Imaging (X.Y., S.J.A., P.B.) and College of Medicine (R.D.E.H.), University of Saskatchewan, 103 Hospital Drive, Saskatoon, SK, Canada S7N 0W8
- Scott J Adams
- Department of Medical Imaging (X.Y., S.J.A., P.B.) and College of Medicine (R.D.E.H.), University of Saskatchewan, 103 Hospital Drive, Saskatoon, SK, Canada S7N 0W8
- Robert D E Henderson
- Department of Medical Imaging (X.Y., S.J.A., P.B.) and College of Medicine (R.D.E.H.), University of Saskatchewan, 103 Hospital Drive, Saskatoon, SK, Canada S7N 0W8
- Paul Babyn
- Department of Medical Imaging (X.Y., S.J.A., P.B.) and College of Medicine (R.D.E.H.), University of Saskatchewan, 103 Hospital Drive, Saskatoon, SK, Canada S7N 0W8
17
van Beek EJR, Murchison JT. Artificial Intelligence and Computer-Assisted Evaluation of Chest Pathology. Artif Intell Med Imaging 2019. [DOI: 10.1007/978-3-319-94878-2_12] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/19/2022] Open
18
Lakhani P. Deep Convolutional Neural Networks for Endotracheal Tube Position and X-ray Image Classification: Challenges and Opportunities. J Digit Imaging 2018; 30:460-468. [PMID: 28600640 PMCID: PMC5537094 DOI: 10.1007/s10278-017-9980-7] [Citation(s) in RCA: 59] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022] Open
Abstract
The goal of this study is to evaluate the efficacy of deep convolutional neural networks (DCNNs) in differentiating subtle, intermediate, and more obvious image differences in radiography. Three datasets were created: presence/absence of an endotracheal (ET) tube (n = 300), low/normal position of the ET tube (n = 300), and chest/abdominal radiographs (n = 120). The datasets were split into training, validation, and test sets. Both untrained and pre-trained deep neural networks were employed, including AlexNet and GoogLeNet classifiers, using the Caffe framework. Data augmentation was performed for the presence/absence and low/normal ET tube datasets. Receiver operating characteristic (ROC) curves, areas under the curve (AUC), and 95% confidence intervals were calculated, and statistical differences between AUCs were determined using a non-parametric approach. The pre-trained AlexNet and GoogLeNet classifiers had perfect accuracy (AUC 1.00) in differentiating chest vs. abdominal radiographs using only 45 training cases. For the more difficult datasets, presence/absence and low/normal position of the ET tube, more training cases, pre-trained networks, and data augmentation helped increase accuracy. The best-performing network for classifying presence vs. absence of an ET tube remained very accurate, with an AUC of 0.99. For the most difficult task, low vs. normal position of the ET tube, DCNNs did not perform as well but still achieved a reasonable AUC of 0.81.
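AUC, the headline metric above, has a direct probabilistic reading: the chance that a randomly chosen positive case receives a higher score than a randomly chosen negative one, with ties counting half. A minimal sketch of that computation:

```python
def auc(scores, labels):
    """Area under the ROC curve via its rank interpretation:
    P(random positive outscores random negative), ties count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect score separation gives AUC 1.0, as reported for the easy
# chest-vs-abdomen task; overlapping scores pull the AUC toward 0.5.
print(auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # -> 1.0
```

This pairwise form is O(n_pos * n_neg), which is fine for illustration; production metrics libraries use a sort-based equivalent.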
Affiliation(s)
- Paras Lakhani
- Thomas Jefferson University Hospital, Sidney Kimmel Jefferson Medical College, Philadelphia, PA, 19107, USA
19
Chen S, Zhang M, Yao L, Xu W. Endotracheal tubes positioning detection in adult portable chest radiography for intensive care unit. Int J Comput Assist Radiol Surg 2016; 11:2049-2057. [PMID: 27299346 DOI: 10.1007/s11548-016-1430-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2015] [Accepted: 05/27/2016] [Indexed: 10/21/2022]
Abstract
PURPOSE To present an automated method for detecting endotracheal (ET) tubes and marking their tips in portable chest radiography (CXR) for intensive care units (ICUs). METHODS In this method, the lung region is first estimated and the spine is then detected between the right and left lungs. Because medical tubes are inserted into the body through the throat, the region of interest (ROI) is taken across the spine. A seed point is determined in the cervical region of the ROI, and the line path is then traced from the seed point. To detect ET tubes, the ICU CXR image is preprocessed with contrast-limited adaptive histogram equalization (CLAHE), and a feature-based threshold method is then applied along the line path to determine the tip location. A comparison with a Hough-transform-based method is also presented. The distance (error) between the detected locations and those annotated by a radiologist is used to evaluate detection precision for the tip location. RESULTS The proposed method was evaluated using 44 images with ET tubes and 43 images without ET tubes. The discriminant performance for detecting the presence of ET tubes in this study was 95%, and the average detection error for the tip location was approximately 2.5 mm. CONCLUSIONS The proposed method could be useful for detecting malpositioned ET tubes in ICU CXRs.
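The final step above, thresholding a feature along the candidate line path to localize the tip, can be sketched as follows. This is a deliberately simplified pure-Python illustration: `detect_tip`, the threshold value, and the "last sample above threshold" rule are assumptions for demonstration, not the paper's exact features.

```python
def detect_tip(profile, thresh=0.6):
    """Walk along the enhanced intensities sampled on the line path and
    return the index of the last sample above `thresh`, i.e. where the
    radiopaque tube signal ends; return None if no tube-like sample exists."""
    tip = None
    for i, v in enumerate(profile):
        if v > thresh:
            tip = i
    return tip

# Enhanced intensities along the path: tube visible up to index 4, then
# the signal drops to background levels below the tube's end.
path = [0.9, 0.85, 0.8, 0.75, 0.7, 0.3, 0.2, 0.1]
print(detect_tip(path))  # -> 4
```

In practice the profile would come from the CLAHE-enhanced image sampled along the detected path, and the feature and threshold would be tuned on training images rather than fixed by hand.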
Affiliation(s)
- Sheng Chen
- School of Optical Electrical and Computer Engineering & Engineering Research Center of Optical Instrument and System, Ministry of Education, University of Shanghai for Science and Technology, Shanghai, China
- Min Zhang
- School of Optical Electrical and Computer Engineering & Engineering Research Center of Optical Instrument and System, Ministry of Education, University of Shanghai for Science and Technology, Shanghai, China
- Liping Yao
- Xinhua Hospital, School of Biomedical Engineering, Shanghai Jiaotong University, Shanghai, China
- Wentao Xu
- School of Optical Electrical and Computer Engineering & Engineering Research Center of Optical Instrument and System, Ministry of Education, University of Shanghai for Science and Technology, Shanghai, China
20
Tsai TT, Lee SH, Niu CC, Lai PL, Chen LH, Chen WJ. Unplanned revision spinal surgery within a week: a retrospective analysis of surgical causes. BMC Musculoskelet Disord 2016; 17:28. [PMID: 26772974 PMCID: PMC4714439 DOI: 10.1186/s12891-016-0891-4] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/20/2015] [Accepted: 01/13/2016] [Indexed: 12/17/2022] Open
Abstract
Background The need for revision surgery after spinal surgery can cause a variety of problems, including reduced quality of life for the patient, additional medical expenses, and patient-physician conflicts. The purpose of this study was to evaluate the causes of unplanned revision spinal surgery within a week of the initial surgery, in order to identify the surgical issues most commonly associated with unplanned revision. Methods We retrospectively reviewed the medical records of all patients who received spinal surgery at a regional medical center from July 2004 to April 2011 to identify those who required revision surgery within one week of their initial surgery. Patients were excluded if they received a vertebroplasty, kyphoplasty, or nerve block, because these are one-day procedures that do not require hospital admission. Patients with a primary diagnosis of wound infection were also excluded, since reoperations for infection control are expected. Results The overall incidence of unplanned revision spinal surgery during the review period was 1.12% (116/10,350 patients). The most common surgical causes of reoperation were screw malposition (41 patients), inadequate decompression (37 patients), and symptomatic epidural hematoma (27 patients). Screw malposition was the most common complication, with an incidence rate of 0.82%. Screw instrumentation was significantly associated with revision surgery (p = 0.023), suggesting that this procedure carried a greater risk of requiring revision. The mean time to reoperation for epidural hematomas was significantly shorter than for other causes of revision spinal surgery (p < 0.001), suggesting that epidural hematoma was more emergent than other complications. In addition, 25.93% of patients who underwent hematoma removal experienced residual sequelae, a percentage significantly higher than for other surgical causes of revision spinal surgery (p = 0.013). Conclusions The results suggest that screw malposition, inadequate decompression, and epidural hematoma are the key surgical complications to guard against in order to avoid reoperation. Accordingly, proper pedicle screw placement, adequate decompression, and epidural hematoma prevention may help reduce the incidence of revision surgery.
Affiliation(s)
- Tsung-Ting Tsai
- Department of Orthopaedic Surgery, Chang Gung Memorial Hospital, No. 5, Fusing St., Gueishan, Taoyuan 333, Linkou, Taiwan; Chang Gung University, College of Medicine, Taoyuan, Taiwan; Musculoskeletal Research Center, Chang Gung Memorial Hospital, Linkou, Taiwan
- Sheng-Hsun Lee
- Department of Orthopaedic Surgery, Chang Gung Memorial Hospital, No. 5, Fusing St., Gueishan, Taoyuan 333, Linkou, Taiwan; Chang Gung University, College of Medicine, Taoyuan, Taiwan; Musculoskeletal Research Center, Chang Gung Memorial Hospital, Linkou, Taiwan
- Chi-Chien Niu
- Department of Orthopaedic Surgery, Chang Gung Memorial Hospital, No. 5, Fusing St., Gueishan, Taoyuan 333, Linkou, Taiwan; Chang Gung University, College of Medicine, Taoyuan, Taiwan; Musculoskeletal Research Center, Chang Gung Memorial Hospital, Linkou, Taiwan
- Po-Liang Lai
- Department of Orthopaedic Surgery, Chang Gung Memorial Hospital, No. 5, Fusing St., Gueishan, Taoyuan 333, Linkou, Taiwan; Chang Gung University, College of Medicine, Taoyuan, Taiwan; Musculoskeletal Research Center, Chang Gung Memorial Hospital, Linkou, Taiwan
- Lih-Huei Chen
- Department of Orthopaedic Surgery, Chang Gung Memorial Hospital, No. 5, Fusing St., Gueishan, Taoyuan 333, Linkou, Taiwan; Chang Gung University, College of Medicine, Taoyuan, Taiwan; Musculoskeletal Research Center, Chang Gung Memorial Hospital, Linkou, Taiwan
- Wen-Jer Chen
- Department of Orthopaedic Surgery, Chang Gung Memorial Hospital, No. 5, Fusing St., Gueishan, Taoyuan 333, Linkou, Taiwan; Chang Gung University, College of Medicine, Taoyuan, Taiwan; Musculoskeletal Research Center, Chang Gung Memorial Hospital, Linkou, Taiwan