1. Yao J, Chu LC, Patlas M. Applications of Artificial Intelligence in Acute Abdominal Imaging. Can Assoc Radiol J 2024:8465371241250197. [PMID: 38715249] [DOI: 10.1177/08465371241250197]
Abstract
Artificial intelligence (AI) is a rapidly growing field with significant implications for radiology. Acute abdominal pain is a common clinical presentation that can range from benign conditions to life-threatening emergencies. The critical nature of these situations renders emergent abdominal imaging an ideal candidate for AI applications. CT, radiographs, and ultrasound are the most common modalities for imaging evaluation of these patients. For each modality, numerous studies have assessed the performance of AI models for detecting common pathologies, such as appendicitis, bowel obstruction, and cholecystitis. The capabilities of these models range from simple classification to detailed severity assessment. This narrative review explores the evolution, trends, and challenges in AI applications for evaluating acute abdominal pathologies. We review implementations of AI for non-traumatic and traumatic abdominal pathologies, with discussion of potential clinical impact, challenges, and future directions for the technology.
Affiliation(s)
- Jason Yao: Department of Radiology, McMaster University, Hamilton, ON, Canada
- Linda C Chu: Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Michael Patlas: Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
2. Kim SW, Cheon JE, Choi YH, Hwang JY, Shin SM, Cho YJ, Lee S, Lee SB. Feasibility of a deep learning artificial intelligence model for the diagnosis of pediatric ileocolic intussusception with grayscale ultrasonography. Ultrasonography 2024;43:57-67. [PMID: 38109893] [PMCID: PMC10766885] [DOI: 10.14366/usg.23153]
Abstract
PURPOSE This study explored the feasibility of utilizing a deep learning artificial intelligence (AI) model to detect ileocolic intussusception on grayscale ultrasound images. METHODS This retrospective observational study incorporated ultrasound images of children who underwent emergency ultrasonography for suspected ileocolic intussusception. After excluding video clips, Doppler images, and annotated images, 40,765 images from two tertiary hospitals were included (positive-to-negative ratio: hospital A, 2,775:35,373; hospital B, 140:2,477). Images from hospital A were split into a training set, a tuning set, and an internal test set (ITS) at a ratio of 7:1.5:1.5. Images from hospital B comprised an external test set (ETS). For each image showing intussusception, two radiologists provided a bounding box as the ground-truth label. If intussusception was suspected in the input image, the model generated a bounding box with a confidence score (0-1) at the estimated lesion location. Average precision (AP) was used to evaluate overall model performance. The performance of practical thresholds for the model-generated confidence score, as determined from the ITS, was verified using the ETS. RESULTS The AP values for the ITS and ETS were 0.952 and 0.936, respectively. Two confidence thresholds, CTopt and CTprecision, were set at 0.557 and 0.790, respectively. For the ETS, the per-image precision and recall were 95.7% and 80.0% with CTopt, and 98.4% and 44.3% with CTprecision. For per-patient diagnosis, the sensitivity and specificity were 100.0% and 97.1% with CTopt, and 100.0% and 99.0% with CTprecision. The average number of false positives per patient was 0.04 with CTopt and 0.01 with CTprecision. CONCLUSION The feasibility of using an AI model to diagnose ileocolic intussusception on ultrasonography was demonstrated. However, further study involving bias-free data is warranted for robust clinical validation.
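The threshold trade-off this study reports (a higher confidence cutoff raises precision at the cost of recall) can be sketched with a toy per-image counter. The detection tuples below are invented for illustration; only the two threshold values echo the study's CTopt/CTprecision:

```python
def precision_recall_at_threshold(detections, threshold):
    """Per-image counts at a confidence cutoff.

    `detections` holds one candidate box per image as (confidence,
    is_correct); is_correct=True means the box matches a real lesion.
    Correct boxes below the cutoff become missed lesions (false
    negatives); incorrect boxes below it are simply discarded.
    """
    tp = fp = fn = 0
    for confidence, is_correct in detections:
        if confidence >= threshold:
            if is_correct:
                tp += 1
            else:
                fp += 1
        elif is_correct:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Illustrative detections; thresholds mirror the study's CTopt/CTprecision.
dets = [(0.95, True), (0.81, True), (0.60, True), (0.70, False), (0.40, True)]
p_opt, r_opt = precision_recall_at_threshold(dets, 0.557)    # (0.75, 0.75)
p_prec, r_prec = precision_recall_at_threshold(dets, 0.790)  # (1.0, 0.5)
```

Raising the cutoff here drops the one false positive but also discards a correct low-confidence box, reproducing the precision-up / recall-down pattern in the abstract.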
Affiliation(s)
- Se Woo Kim: Department of Radiology, Seoul National University Hospital, Seoul, Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Jung-Eun Cheon: Department of Radiology, Seoul National University College of Medicine, Seoul, Korea; Department of Radiology, Seoul National University Children's Hospital, Seoul, Korea
- Young Hun Choi: Department of Radiology, Seoul National University Children's Hospital, Seoul, Korea
- Jae-Yeon Hwang: Department of Radiology, Pusan National University Yangsan Hospital, Yangsan, Korea
- Su-Mi Shin: Department of Radiology, Seoul National University Seoul Metropolitan Government Boramae Medical Center, Seoul, Korea
- Yeon Jin Cho: Department of Radiology, Seoul National University Children's Hospital, Seoul, Korea
- Seunghyun Lee: Department of Radiology, Seoul National University Children's Hospital, Seoul, Korea
- Seul Bi Lee: Department of Radiology, Seoul National University Children's Hospital, Seoul, Korea
3. Pei Y, Wang G, Cao H, Jiang S, Wang D, Wang H, Wang H, Yu H. A deep-learning pipeline to diagnose pediatric intussusception and assess severity during ultrasound scanning: a multicenter retrospective-prospective study. NPJ Digit Med 2023;6:182. [PMID: 37775624] [PMCID: PMC10541898] [DOI: 10.1038/s41746-023-00930-8]
Abstract
Ileocolic intussusception is one of the most common acute abdominal emergencies in children and is first diagnosed urgently using ultrasound. Manual diagnosis requires extensive experience and skill, and identifying surgical indications when assessing disease severity is even more challenging. We aimed to develop a real-time lesion-visualization deep-learning pipeline to solve this problem. This multicenter retrospective-prospective study used 14,085 images from 8,736 consecutive patients (median age, eight months) with ileocolic intussusception who underwent ultrasound at six hospitals to train, validate, and test the deep-learning pipeline. Subsequently, the algorithm was validated on an internal image test set and an external video dataset. Furthermore, the performances of junior, intermediate, and senior sonographers, and of junior sonographers with AI assistance, were prospectively compared in 242 volunteers using the DeLong test. The tool recognized 1,086 images with three ileocolic intussusception signs with an average area under the receiver operating characteristic curve (average-AUC) of 0.972. It diagnosed 184 patients with no intussusception, nonsurgical intussusception, or surgical intussusception in 184 ultrasound videos with an average-AUC of 0.956. In the prospective pilot study of 242 volunteers, junior sonographers' performances improved significantly with AI assistance (average-AUC: 0.966 vs. 0.857, P < 0.001; median scanning time: 9.46 min vs. 3.66 min, P < 0.001) and became comparable to those of senior sonographers (average-AUC: 0.966 vs. 0.973, P = 0.600). Thus, we report that this deep-learning pipeline, which localizes lesions in real time and is interpretable during ultrasound scanning, could assist sonographers in improving the accuracy and efficiency of diagnosing intussusception and identifying surgical indications.
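The average-AUC figures above are means of per-class AUCs, and an AUC itself can be computed as a rank statistic (the Mann-Whitney U formulation). A minimal sketch, with three invented per-class score sets standing in for the study's three intussusception signs:

```python
def auc_score(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case outranks a randomly chosen negative,
    counting ties as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# One-vs-rest AUCs per class, averaged into an "average-AUC" of the kind
# the study reports; all scores are illustrative.
class_aucs = [
    auc_score([0.90, 0.80, 0.70], [0.40, 0.30, 0.20]),   # sign 1
    auc_score([0.85, 0.60, 0.75], [0.50, 0.65, 0.10]),   # sign 2
    auc_score([0.95, 0.90, 0.50], [0.45, 0.40, 0.30]),   # sign 3
]
average_auc = sum(class_aucs) / len(class_aucs)
```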
Affiliation(s)
- Yuanyuan Pei: Provincial Key Laboratory of Research in Structure Birth Defect Disease and Department of Pediatric Surgery, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangdong Provincial Clinical Research Center for Child Health, Guangzhou, China
- Guijuan Wang: School of Computer Science, South China Normal University, Guangzhou, China
- Haiwei Cao: Ultrasonic Department, Kaifeng Children's Hospital, Kaifeng, China
- Shuanglan Jiang: Ultrasonic Department, Dongguan Children's Hospital, Dongguan, China
- Dan Wang: Ultrasonic Department, Children's Hospital Affiliated to Zhengzhou University, Zhengzhou, China
- Haiyu Wang: Department of Ultrasonography, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
- Hongying Wang: Department of Ultrasonography, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China
- Hongkui Yu: Department of Ultrasonography, Guangzhou Women and Children's Medical Center, Guangzhou Medical University, Guangzhou, China; Department of Ultrasonography, Shenzhen Baoan Women's and Children's Hospital, Jinan University, Shenzhen, China
4. Fanni SC, Greco G, Rossi S, Aghakhanyan G, Masala S, Scaglione M, Tonerini M, Neri E. Role of artificial intelligence in oncologic emergencies: a narrative review. Exploration of Targeted Anti-tumor Therapy 2023;4:344-354. [PMID: 37205309] [PMCID: PMC10185441] [DOI: 10.37349/etat.2023.00138]
Abstract
Oncologic emergencies are a wide spectrum of conditions caused directly by malignancies or their treatment. They may be classified, according to the underlying pathophysiology, into metabolic, hematologic, and structural conditions. In the latter, radiologists have a pivotal role, providing an accurate diagnosis that is essential for optimal patient care. Structural conditions may involve the central nervous system, thorax, or abdomen, and emergency radiologists have to know the characteristic imaging findings of each. The number of oncologic emergencies is growing due to the increased incidence of malignancies in the general population and the improved survival of these patients thanks to advances in cancer treatment. Artificial intelligence (AI) could be a solution to assist emergency radiologists with this rapidly increasing workload. To our knowledge, AI applications in the oncologic emergency setting are mostly underexplored, probably because of the relatively low number of oncologic emergencies and the difficulty of training algorithms. However, cancer emergencies are defined by their cause, not by a specific pattern of radiological signs and symptoms. Therefore, AI algorithms developed for detecting these emergencies in the non-oncological setting can be expected to transfer to the clinical setting of oncologic emergencies. In this review, a craniocaudal approach was followed, and central nervous system, thoracic, and abdominal oncologic emergencies are addressed with respect to the AI applications reported in the literature. Among central nervous system emergencies, AI applications have been reported for brain herniation and spinal cord compression. In the thoracic district, the addressed emergencies were pulmonary embolism, cardiac tamponade, and pneumothorax; pneumothorax was the most frequently described AI application, aimed at improving sensitivity and reducing time-to-diagnosis. Finally, regarding abdominal emergencies, AI applications for abdominal hemorrhage, intestinal obstruction, intestinal perforation, and intestinal intussusception have been described.
Affiliation(s)
- Salvatore Claudio Fanni: Department of Translational Research, Academic Radiology, University of Pisa, 56126 Pisa, Italy
- Giuseppe Greco: Department of Translational Research, Academic Radiology, University of Pisa, 56126 Pisa, Italy
- Sara Rossi: Department of Translational Research, Academic Radiology, University of Pisa, 56126 Pisa, Italy
- Gayane Aghakhanyan: Department of Translational Research, Academic Radiology, University of Pisa, 56126 Pisa, Italy
- Salvatore Masala: Department of Medicine, Surgery and Pharmacy, University of Sassari, 07100 Sassari, Italy
- Mariano Scaglione: Department of Medicine, Surgery and Pharmacy, University of Sassari, 07100 Sassari, Italy
- Michele Tonerini: Department of Surgical, Medical, Molecular and Critical Area Pathology, University of Pisa, 56126 Pisa, Italy
- Emanuele Neri: Department of Translational Research, Academic Radiology, University of Pisa, 56126 Pisa, Italy
5. Reis HC, Turk V, Khoshelham K, Kaya S. MediNet: transfer learning approach with MediNet medical visual database. Multimedia Tools and Applications 2023;82:1-44. [PMID: 37362724] [PMCID: PMC10025796] [DOI: 10.1007/s11042-023-14831-1]
Abstract
The rapid development of machine learning has increased interest in the use of deep learning methods in medical research. Deep learning in the medical field is used for disease detection and classification problems in the clinical decision-making process. Large amounts of labeled data are often required to train deep neural networks; however, in the medical field, the lack of a sufficient number of images in datasets and the difficulties encountered during data collection are among the main problems. In this study, we propose MediNet, a new 10-class visual dataset consisting of Röntgen (X-ray), computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, and histopathological images: calcaneal normal, calcaneal tumor, colon benign, colon adenocarcinoma, brain normal, brain tumor, breast benign, breast malignant, chest normal, and chest pneumonia. AlexNet, VGG19-BN, Inception V3, DenseNet 121, ResNet 101, EfficientNet B0, Nested-LSTM + CNN, and the proposed RdiNet deep learning algorithms are used for pre-training and classification via transfer learning. Transfer learning aims to apply previously learned knowledge to a new task. The algorithms were trained with the MediNet dataset, and the resulting models (feature vectors) were recorded. These pre-trained models were then used, via the transfer learning technique, for classification studies on chest X-ray, diabetic retinopathy, and COVID-19 datasets. On the Chest X-Ray Images dataset, the InceptionV3 model achieved an accuracy of 94.84% in the traditional classification study, which increased to 98.71% after transfer learning was applied. On the COVID-19 dataset, the classification success of the DenseNet121 model was 88% before pre-training and 92% after transfer with MediNet. On the diabetic retinopathy dataset, the classification success of the Nested-LSTM + CNN model was 79.35% before pre-training and 81.52% after transfer with MediNet. Comparison of the experimental results showed that the proposed method was more successful.
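The transfer-learning idea above (reuse a representation learned on one task and train only a new classifier head on the target task) can be illustrated with a frozen toy "backbone" and a logistic-regression head. Everything here is a hypothetical sketch, not MediNet's actual pipeline:

```python
import math
import random

random.seed(0)

# Toy "pretrained backbone": a fixed linear map standing in for feature
# extraction learned on a source task. In transfer learning it stays frozen.
BACKBONE = [[0.9, -0.2], [-0.3, 1.1]]  # 2-D input -> 2-D feature

def extract(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in BACKBONE]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Target-task data: label is 1 when the first input dominates (illustrative).
data = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(200)]
labeled = [(x, 1.0 if x[0] > x[1] else 0.0) for x in data]

# Train only a linear head on the frozen backbone features.
w, b, lr = [0.0, 0.0], 0.0, 0.5

def mean_log_loss():
    total = 0.0
    for x, y in labeled:
        p = sigmoid(sum(wi * fi for wi, fi in zip(w, extract(x))) + b)
        total += -(y * math.log(p + 1e-9) + (1 - y) * math.log(1 - p + 1e-9))
    return total / len(labeled)

loss_before = mean_log_loss()
for _ in range(200):
    gw, gb = [0.0, 0.0], 0.0
    for x, y in labeled:
        f = extract(x)
        p = sigmoid(sum(wi * fi for wi, fi in zip(w, f)) + b)
        gw = [gwi + (p - y) * fi for gwi, fi in zip(gw, f)]
        gb += p - y
    w = [wi - lr * gwi / len(labeled) for wi, gwi in zip(w, gw)]
    b -= lr * gb / len(labeled)
loss_after = mean_log_loss()
```

Only `w` and `b` change during training; the backbone weights never do, which is what makes the target-task training cheap even when the backbone itself is large.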
Affiliation(s)
- Hatice Catal Reis: Department of Geomatics Engineering, Gumushane University, 2900 Gumushane, Turkey
- Veysel Turk: Department of Computer Engineering, University of Harran, Sanliurfa, Turkey
- Kourosh Khoshelham: Department of Infrastructure Engineering, The University of Melbourne, Parkville 3052, Australia
- Serhat Kaya: Department of Mining Engineering, Dicle University, Diyarbakir, Turkey
6. Hwang J, Yoon HM, Kim PH, Jung AY, Lee JS, Cho YA. Current diagnosis and image-guided reduction for intussusception in children. Clin Exp Pediatr 2023;66:12-21. [PMID: 35798026] [PMCID: PMC9815940] [DOI: 10.3345/cep.2021.01816]
Abstract
Intussusception involves invagination of the proximal bowel into the distal bowel, with ileocolic intussusception being the most common type. A diagnostic delay can lead to intestinal ischemia, bowel infarction, or even death; therefore, early diagnosis and management are important. The primary role of abdominal radiography is to detect pneumoperitoneum or high-grade bowel obstruction in cases of suspected intussusception, and ultrasonography is the modality of choice for diagnosis. Nonoperative enema reduction, the treatment of choice for childhood intussusception in cases without signs of perforation or peritonitis, can be safely performed with a success rate of 82%. Enema reduction can be performed in various ways according to the image-guidance method (fluoroscopy or ultrasonography) and the reduction medium (liquid or air). Successful enema reduction is less likely to be achieved in children with a longer symptom duration, younger age, lethargy, fever, bloody diarrhea, unfavorable radiologic findings (small bowel obstruction, trapped fluid, ascites, absence of flow within the intussusception, intussusception in the left-sided colon), or pathological lead points. This review highlights current concepts in intussusception diagnosis, nonsurgical enema reduction, success rates, predictors of failed enema reduction, complications, and recurrence to guide general pediatricians in the management of childhood intussusception.
Affiliation(s)
- Jisun Hwang: Department of Radiology, Hallym University Dongtan Sacred Heart Hospital, Hwaseong, Korea
- Hee Mang Yoon: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Pyeong Hwa Kim: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Ah Young Jung: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Jin Seong Lee: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Young Ah Cho: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
7. Cellina M, Cè M, Irmici G, Ascenti V, Caloro E, Bianchi L, Pellegrino G, D'Amico N, Papa S, Carrafiello G. Artificial Intelligence in Emergency Radiology: Where Are We Going? Diagnostics (Basel) 2022;12:3223. [PMID: 36553230] [PMCID: PMC9777804] [DOI: 10.3390/diagnostics12123223]
Abstract
Emergency radiology is a unique branch of imaging, as rapidity in the diagnosis and management of different pathologies is essential to saving patients' lives. Artificial intelligence (AI) has many potential applications in emergency radiology. First, image acquisition can be facilitated by reducing acquisition times through automatic positioning and by minimizing artifacts with AI-based reconstruction systems that optimize image quality, even in critical patients. Second, AI enables an efficient workflow (algorithms integrated into the RIS-PACS workflow) by analyzing patients' characteristics and images to flag high-priority examinations and patients with emergent critical findings. Different machine and deep learning algorithms have been trained for the automated detection of various emergency disorders (e.g., intracranial hemorrhage, bone fractures, pneumonia) to help radiologists detect relevant findings. AI-based smart reporting, which summarizes patients' clinical data and grades imaging abnormalities, can provide an objective indicator of disease severity, resulting in quick and optimized treatment planning. In this review, we provide an overview of the different AI tools available in emergency radiology, to keep radiologists up to date on the current technological evolution in this field.
Affiliation(s)
- Michaela Cellina (corresponding author): Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, Piazza Principessa Clotilde 3, 20121 Milan, Italy
- Maurizio Cè: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Giovanni Irmici: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Velio Ascenti: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Elena Caloro: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Lorenzo Bianchi: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Giuseppe Pellegrino: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Natascha D'Amico: Unit of Diagnostic Imaging and Stereotactic Radiosurgery, Centro Diagnostico Italiano, Via Saint Bon 20, 20147 Milan, Italy
- Sergio Papa: Unit of Diagnostic Imaging and Stereotactic Radiosurgery, Centro Diagnostico Italiano, Via Saint Bon 20, 20147 Milan, Italy
- Gianpaolo Carrafiello: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy; Radiology Department, Fondazione IRCCS Cà Granda, Policlinico di Milano Ospedale Maggiore, Via Sforza 35, 20122 Milan, Italy
8. An Extra Set of Intelligent Eyes: Application of Artificial Intelligence in Imaging of Abdominopelvic Pathologies in Emergency Radiology. Diagnostics (Basel) 2022;12:1351. [PMID: 35741161] [PMCID: PMC9221728] [DOI: 10.3390/diagnostics12061351]
Abstract
Imaging in the emergent setting carries high stakes. With increased demand for dedicated on-site service, emergency radiologists face increasingly large image volumes that require rapid turnaround times. However, novel artificial intelligence (AI) algorithms may assist trauma and emergency radiologists with efficient and accurate medical image analysis, providing an opportunity to augment human decision making, including outcome prediction and treatment planning. While traditional radiology practice involves visual assessment of medical images for detection and characterization of pathologies, AI algorithms can automatically identify subtle disease states and provide quantitative characterization of disease severity based on morphologic image details, such as geometry and fluid flow. Taken together, the benefits of implementing AI in radiology have the potential to improve workflow efficiency, deliver faster turnaround for complex cases, and reduce heavy workloads. Although analysis of AI applications within abdominopelvic imaging has primarily focused on oncologic detection, localization, and treatment response, several promising algorithms have been developed for use in the emergency setting. This article aims to establish a general understanding of the AI algorithms used in emergent image-based tasks and to discuss the challenges associated with the implementation of AI into the clinical workflow.
9. Kang BK, Han Y, Oh J, Lim J, Ryu J, Yoon MS, Lee J, Ryu S. Automatic Segmentation for Favourable Delineation of Ten Wrist Bones on Wrist Radiographs Using Convolutional Neural Network. J Pers Med 2022;12:776. [PMID: 35629198] [PMCID: PMC9147335] [DOI: 10.3390/jpm12050776]
Abstract
Purpose: This study aimed to develop and validate an automatic segmentation algorithm for boundary delineation of ten wrist bones, consisting of eight carpal and two distal forearm bones, using a convolutional neural network (CNN). Methods: We performed a retrospective study using adult wrist radiographs. We labeled ground-truth masks of the wrist bones and propose Fine Mask R-CNN, which detects a wrist region of interest (ROI) using a Single-Shot Multibox Detector (SSD) and performs segmentation via Mask R-CNN with an extended mask head. The primary outcome was an improvement in delineation prediction relative to the ground-truth masks, compared between the two networks through five-fold validation. Results: In total, 702 images were labeled for segmentation of the ten wrist bones. The overall performance (mean (SD) Dice coefficient) of the auto-segmentation of the ten wrist bones improved from 0.93 (0.01) with Mask R-CNN to 0.95 (0.01) with Fine Mask R-CNN (p < 0.001). The values for each wrist bone were higher with Fine Mask R-CNN than with Mask R-CNN (all p < 0.001). The value for the distal radius was the highest, and that for the trapezoid the lowest, in both networks. Conclusion: Our proposed Fine Mask R-CNN model achieved good performance in the automatic segmentation of ten overlapping wrist bones on adult wrist radiographs.
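The Dice coefficient used above to score segmentation overlap is simple to compute on binary masks. A minimal sketch with a toy 3×3 predicted/ground-truth mask pair (illustrative values only):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two same-shaped binary masks,
    given as nested lists of 0/1. Returns 1.0 when both masks are empty."""
    intersection = total = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for a, b in zip(row_a, row_b):
            intersection += a and b  # 1 only where both masks are on
            total += a + b
    return 2.0 * intersection / total if total else 1.0

pred  = [[0, 1, 1],
         [0, 1, 1],
         [0, 0, 0]]
truth = [[0, 1, 1],
         [0, 1, 0],
         [0, 0, 0]]
score = dice_coefficient(pred, truth)  # 2*3 / (4+3) ≈ 0.857
```

Unlike plain pixel accuracy, Dice ignores the shared background, which is why it is the standard overlap metric for small structures like carpal bones.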
Affiliation(s)
- Bo-kyeong Kang: Department of Radiology, College of Medicine, Hanyang University, Seoul 04763, Korea; Machine Learning Research Center for Medical Data, Hanyang University, Seoul 04764, Korea
- Yelin Han: Department of Computer Science, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul 04763, Korea
- Jaehoon Oh: Machine Learning Research Center for Medical Data, Hanyang University, Seoul 04764, Korea; Department of Emergency Medicine, College of Medicine, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul 04763, Korea
- Jongwoo Lim: Machine Learning Research Center for Medical Data, Hanyang University, Seoul 04764, Korea; Department of Computer Science, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul 04763, Korea
- Jongbin Ryu: Department of Software and Computer Engineering, Ajou University, Suwon 16499, Korea; Department of Artificial Intelligence, Ajou University, Suwon 16499, Korea
- Myeong Seong Yoon: Machine Learning Research Center for Medical Data, Hanyang University, Seoul 04764, Korea; Department of Emergency Medicine, College of Medicine, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul 04763, Korea
- Juncheol Lee: Machine Learning Research Center for Medical Data, Hanyang University, Seoul 04764, Korea; Department of Emergency Medicine, College of Medicine, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul 04763, Korea
- Soorack Ryu: Biostatistical Consulting and Research Lab, Medical Research Collaborating Center, Hanyang University, Seoul 04763, Korea
10. Yu AC, Mohajer B, Eng J. External Validation of Deep Learning Algorithms for Radiologic Diagnosis: A Systematic Review. Radiol Artif Intell 2022;4:e210064. [PMID: 35652114] [DOI: 10.1148/ryai.210064]
Abstract
Purpose To assess the generalizability of published deep learning (DL) algorithms for radiologic diagnosis. Materials and Methods In this systematic review, the PubMed database was searched for peer-reviewed studies of DL algorithms for image-based radiologic diagnosis that included external validation, published from January 1, 2015, through April 1, 2021. Studies using nonimaging features or incorporating non-DL methods for feature extraction or classification were excluded. Two reviewers independently evaluated studies for inclusion, and any discrepancies were resolved by consensus. Internal and external performance measures and pertinent study characteristics were extracted, and relationships among these data were examined using nonparametric statistics. Results Eighty-three studies reporting 86 algorithms were included. The vast majority (70 of 86, 81%) reported at least some decrease in external performance compared with internal performance, with nearly half (42 of 86, 49%) reporting at least a modest decrease (≥0.05 on the unit scale) and nearly a quarter (21 of 86, 24%) reporting a substantial decrease (≥0.10 on the unit scale). No study characteristics were found to be associated with the difference between internal and external performance. Conclusion Among published external validation studies of DL algorithms for image-based radiologic diagnosis, the vast majority demonstrated diminished algorithm performance on the external dataset, with some reporting a substantial performance decrease. Keywords: Meta-Analysis, Computer Applications-Detection/Diagnosis, Neural Networks, Computer Applications-General (Informatics), Epidemiology, Technology Assessment, Diagnosis, Informatics. © RSNA, 2022.
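The review's unit-scale cutoffs for "modest" (≥0.05) and "substantial" (≥0.10) drops amount to simple differencing of paired internal/external values. The pairs below are invented for illustration, not data from the review:

```python
# Hypothetical (internal, external) performance pairs on the unit scale
# (e.g., AUC); only the 0.05/0.10 cutoffs come from the review.
pairs = [(0.95, 0.93), (0.91, 0.84), (0.88, 0.76), (0.97, 0.97), (0.90, 0.82)]

drops = [internal - external for internal, external in pairs]
any_decrease = sum(d > 0 for d in drops)
modest = sum(d >= 0.05 for d in drops)       # "modest": >= 0.05 on unit scale
substantial = sum(d >= 0.10 for d in drops)  # "substantial": >= 0.10

# 4 of 5 algorithms decreased externally; 3 modestly, 1 substantially.
```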
Affiliation(s)
- Alice C Yu: Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 1800 Orleans St, Baltimore, MD 21287
- Bahram Mohajer: Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 1800 Orleans St, Baltimore, MD 21287
- John Eng: Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 1800 Orleans St, Baltimore, MD 21287
11. Bae J, Yu S, Oh J, Kim TH, Chung JH, Byun H, Yoon MS, Ahn C, Lee DK. External Validation of Deep Learning Algorithm for Detecting and Visualizing Femoral Neck Fracture Including Displaced and Non-displaced Fracture on Plain X-ray. J Digit Imaging 2021;34:1099-1109. [PMID: 34379216] [DOI: 10.1007/s10278-021-00499-2]
Abstract
This study aimed to develop a method for detecting femoral neck fracture (FNF), including displaced and non-displaced fractures, on plain X-ray using a convolutional neural network (CNN), and to validate its use across hospitals through internal and external validation sets. This retrospective study used hip and pelvic anteroposterior films to train a residual neural network (ResNet-18) with a convolutional block attention module (CBAM) to detect femoral neck fracture. The study was performed at two tertiary hospitals between February and May 2020 and used data from January 2005 to December 2018. Our primary outcome was favorable performance in discriminating femoral neck fracture from negative studies in our dataset. We report outcomes as area under the receiver operating characteristic curve (AUC), accuracy, Youden index, sensitivity, and specificity. A total of 4,189 images, comprising 1,109 positive images (332 non-displaced and 777 displaced) and 3,080 negative images, were collected from two hospitals. After training with one hospital's dataset, the test values were 0.999 AUC, 0.986 accuracy, 0.960 Youden index, 0.966 sensitivity, and 0.993 specificity. The corresponding values for external validation with the other hospital's dataset were 0.977, 0.971, 0.920, 0.939, and 0.982, and for the merged hospital datasets 0.987, 0.983, 0.960, 0.973, and 0.987. A CNN algorithm for FNF detection of both displaced and non-displaced fractures on plain X-rays could be used in other hospitals to screen for FNF after training with images from the hospital of interest.
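The Youden index reported above combines sensitivity and specificity (J = sensitivity + specificity − 1) and is commonly used to pick a classifier's operating threshold. A minimal sketch with invented model scores (note 0.966 + 0.993 − 1 ≈ 0.960, matching the study's internal figures):

```python
def youden_index(sensitivity, specificity):
    """Youden's J = sensitivity + specificity - 1 (range -1 to 1)."""
    return sensitivity + specificity - 1

def best_threshold(scores_pos, scores_neg, thresholds):
    """Scan candidate cutoffs and keep the first one maximizing Youden's J."""
    best = (None, -1.0)
    for t in thresholds:
        sens = sum(s >= t for s in scores_pos) / len(scores_pos)
        spec = sum(s < t for s in scores_neg) / len(scores_neg)
        j = youden_index(sens, spec)
        if j > best[1]:
            best = (t, j)
    return best

# Illustrative scores for fracture-positive and fracture-negative radiographs.
pos = [0.92, 0.88, 0.75, 0.66, 0.58]
neg = [0.45, 0.30, 0.22, 0.61, 0.12]
threshold, j = best_threshold(pos, neg, [0.2, 0.4, 0.5, 0.65, 0.8])
```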
Collapse
Affiliation(s)
- Junwon Bae
- Department of Emergency Medicine, College of Medicine, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul, 04763, Republic of Korea
| | - Sangjoon Yu
- Department of Computer Science, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul, 04763, Republic of Korea
| | - Jaehoon Oh
- Department of Emergency Medicine, College of Medicine, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul, 04763, Republic of Korea; Machine Learning Research Center for Medical Data, Hanyang University, Seoul, Republic of Korea
| | - Tae Hyun Kim
- Department of Computer Science, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul, 04763, Republic of Korea; Machine Learning Research Center for Medical Data, Hanyang University, Seoul, Republic of Korea
| | - Jae Ho Chung
- Machine Learning Research Center for Medical Data, Hanyang University, Seoul, Republic of Korea; Department of Otolaryngology - Head and Neck Surgery, College of Medicine, Hanyang University, Seoul, Republic of Korea; Department of HY-KIST Bio-Convergence, College of Medicine, Hanyang University, Seoul, Republic of Korea
| | - Hayoung Byun
- Machine Learning Research Center for Medical Data, Hanyang University, Seoul, Republic of Korea; Department of Otolaryngology - Head and Neck Surgery, College of Medicine, Hanyang University, Seoul, Republic of Korea
| | - Myeong Seong Yoon
- Department of Emergency Medicine, College of Medicine, Hanyang University, 222 Wangsimni-ro, Seongdong-gu, Seoul, 04763, Republic of Korea
| | - Chiwon Ahn
- Department of Emergency Medicine, College of Medicine, Chung-Ang University, Seoul, Republic of Korea
| | - Dong Keon Lee
- Department of Emergency Medicine, Seoul National University Bundang Hospital, Gyeonggi-do, Republic of Korea
| |
Collapse
|
12
|
Weakly-supervised progressive denoising with unpaired CT images. Med Image Anal 2021; 71:102065. [PMID: 33915472 DOI: 10.1016/j.media.2021.102065] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 03/16/2021] [Accepted: 03/30/2021] [Indexed: 12/12/2022]
Abstract
Although low-dose CT imaging has attracted great interest due to its reduced radiation risk to patients, it suffers from severe and complex noise. Recent fully-supervised methods have shown impressive performance on the CT denoising task. However, they require a huge amount of paired normal-dose and low-dose CT images, which is generally unavailable in real clinical practice. To address this problem, we propose a weakly-supervised denoising framework that generates paired original and noisier CT images from unpaired CT images using a physics-based noise model. Our denoising framework also includes a progressive denoising module that bypasses the challenges of mapping from low-dose to normal-dose CT images directly by progressively compensating the small noise gap. To quantitatively evaluate diagnostic image quality, we present the noise power spectrum and signal detection accuracy, which correlate well with visual inspection. The experimental results demonstrate that our method achieves remarkable performance, even superior to fully-supervised CT denoising with respect to signal detectability. Moreover, our framework increases flexibility in data collection, allowing the use of unpaired data at any dose level.
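The key trick in the abstract is generating a synthetically noisier copy of each unpaired low-dose image via a physics-based noise model, yielding (noisier, original) training pairs without any normal-dose data. A minimal NumPy sketch of that pair-generation step, assuming a generic Poisson quantum-noise model under Beer-Lambert attenuation (the paper's actual noise model and network are more detailed; `incident_photons` is an illustrative parameter, not from the paper):

```python
import numpy as np

def add_quantum_noise(image, incident_photons=1e4, rng=None):
    """Inject extra CT quantum noise into an already-noisy image.

    Treats `image` as line-integral attenuation, converts it to expected
    photon counts via Beer-Lambert, draws Poisson-distributed counts, and
    converts back. A generic physics-based model for illustration only.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    counts = incident_photons * np.exp(-image)       # expected photon counts
    noisy_counts = rng.poisson(counts).clip(min=1)   # quantum (Poisson) noise
    return -np.log(noisy_counts / incident_photons)  # back to attenuation

# Weak supervision: pair each unpaired low-dose image with a synthetically
# noisier copy; a denoiser trained on (noisier -> low-dose) pairs can then
# be applied progressively to narrow the low-dose -> normal-dose gap.
low_dose = np.abs(np.random.default_rng(1).normal(0.2, 0.05, (64, 64)))
noisier = add_quantum_noise(low_dose, incident_photons=2e3)

assert noisier.shape == low_dose.shape
assert np.std(noisier - low_dose) > 0  # extra noise was injected
```

Lower `incident_photons` simulates a lower dose and hence a larger injected noise gap, which is what makes the progressive (small-step) denoising strategy attractive.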
Collapse
|