1
Gillaspie EA. Imaging of the Diaphragm: A Primer. Thorac Surg Clin 2024; 34:119-125. [PMID: 38705659] [DOI: 10.1016/j.thorsurg.2024.02.002]
Abstract
The diaphragm is a critical musculotendinous structure that contributes to respiratory function. Disorders of the diaphragm are rare and diagnostically challenging. Herein, the author reviews the radiologic options for the assessment of the diaphragm.
Affiliation(s)
- Erin A Gillaspie
- Division of Thoracic Surgery, Creighton University Medical Center, 7500 Mercy Boulevard, Omaha, NE 68124, USA.
2
Tang CHM, Seah JCY, Ahmad HK, Milne MR, Wardman JB, Buchlak QD, Esmaili N, Lambert JF, Jones CM. Analysis of Line and Tube Detection Performance of a Chest X-ray Deep Learning Model to Evaluate Hidden Stratification. Diagnostics (Basel) 2023; 13:2317. [PMID: 37510062] [PMCID: PMC10378683] [DOI: 10.3390/diagnostics13142317]
Abstract
This retrospective case-control study evaluated the diagnostic performance of a commercially available chest radiography deep convolutional neural network (DCNN) in identifying the presence and position of central venous catheters, enteric tubes, and endotracheal tubes, with a subgroup analysis of different line and tube types. A held-out test dataset of 2568 studies was sourced from community radiology clinics and hospitals in Australia and the USA and ground-truth labelled for the presence, position, and type of line or tube by consensus of a thoracic specialist radiologist and an intensive care clinician. DCNN performance in identifying and assessing the positioning of central venous catheters, enteric tubes, and endotracheal tubes was evaluated over the entire dataset and within each subgroup using the area under the receiver operating characteristic curve (AUC). The DCNN algorithm displayed high performance in detecting the presence of lines and tubes, with AUCs > 0.99, and good position classification performance over the subpopulation of ground-truth-positive cases, with AUCs of 0.86-0.91. The subgroup analysis showed that model performance was robust across the various subtypes of lines and tubes, although position classification performance for peripherally inserted central catheters was relatively lower. These findings indicate that the DCNN algorithm performed well in the detection and position classification of lines and tubes, supporting its use as an assistant for clinicians. Further work is required to evaluate performance in rarer scenarios and in less common subgroups.
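Detection performance here is summarized by the AUC, which equals the probability that a randomly chosen positive study receives a higher model score than a randomly chosen negative one. A minimal pure-Python sketch of that rank-based (Mann-Whitney) computation, for illustration only and not the study's evaluation code:

```python
def auc(scores, labels):
    """Area under the ROC curve via pairwise score comparison.

    scores: model confidence per study; labels: 1 = line/tube present.
    A positive scoring above a negative counts 1, a tie counts 0.5.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means every positive outranks every negative; 0.5 is chance-level ranking.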
Affiliation(s)
- Cyril H M Tang
- Annalise.ai, Sydney, NSW 2000, Australia
- Intensive Care Unit, Gosford Hospital, Sydney, NSW 2250, Australia
- Jarrel C Y Seah
- Annalise.ai, Sydney, NSW 2000, Australia
- Department of Radiology, Alfred Health, Melbourne, VIC 3004, Australia
- Quinlan D Buchlak
- Annalise.ai, Sydney, NSW 2000, Australia
- School of Medicine, The University of Notre Dame Australia, Sydney, NSW 2007, Australia
- Department of Neurosurgery, Monash Health, Melbourne, VIC 3168, Australia
- Nazanin Esmaili
- School of Medicine, The University of Notre Dame Australia, Sydney, NSW 2007, Australia
- Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW 2007, Australia
- Catherine M Jones
- Annalise.ai, Sydney, NSW 2000, Australia
- I-MED Radiology Network, Brisbane, QLD 4006, Australia
- School of Public and Preventive Health, Monash University, Clayton, VIC 3800, Australia
- Department of Clinical Imaging Science, University of Sydney, Sydney, NSW 2006, Australia
3
Fanni SC, Marcucci A, Volpi F, Valentino S, Neri E, Romei C. Artificial Intelligence-Based Software with CE Mark for Chest X-ray Interpretation: Opportunities and Challenges. Diagnostics (Basel) 2023; 13:2020. [PMID: 37370915] [DOI: 10.3390/diagnostics13122020]
Abstract
Chest X-ray (CXR) is the most important chest imaging technique, despite its well-known limitations in scope and sensitivity. These intrinsic limitations have prompted the development of several artificial intelligence (AI)-based software packages dedicated to CXR interpretation. The online database "AI for radiology" was queried to identify CE-marked AI-based software available for CXR interpretation, and the returned products were grouped by targeted disease. AI-powered computer-aided detection software is already widely adopted in screening and triage for pulmonary tuberculosis, especially in low-resource countries with a high burden of the disease. AI-based software has also proven valuable for lung nodule detection, automated flagging of positive cases, and post-processing, such as digital bone suppression software that produces bone-suppressed images. Finally, the majority of available CE-marked software packages for CXR are designed to recognize several findings, with potentially different sensitivity and specificity for each recognized finding.
Affiliation(s)
- Salvatore Claudio Fanni
- Department of Translational Research, Academic Radiology, University of Pisa, 56126 Pisa, Italy
- Alessandro Marcucci
- Department of Translational Research, Academic Radiology, University of Pisa, 56126 Pisa, Italy
- Federica Volpi
- Department of Translational Research, Academic Radiology, University of Pisa, 56126 Pisa, Italy
- Emanuele Neri
- Department of Translational Research, Academic Radiology, University of Pisa, 56126 Pisa, Italy
- Chiara Romei
- Department of Diagnostic Imaging, 2nd Radiology Unit, Pisa University-Hospital, Via Paradisa 2, 56124 Pisa, Italy
4
Ahmad HK, Milne MR, Buchlak QD, Ektas N, Sanderson G, Chamtie H, Karunasena S, Chiang J, Holt X, Tang CHM, Seah JCY, Bottrell G, Esmaili N, Brotchie P, Jones C. Machine Learning Augmented Interpretation of Chest X-rays: A Systematic Review. Diagnostics (Basel) 2023; 13:743. [PMID: 36832231] [PMCID: PMC9955112] [DOI: 10.3390/diagnostics13040743]
Abstract
Limitations of the chest X-ray (CXR) have resulted in attempts to create machine learning systems to assist clinicians and improve interpretation accuracy. An understanding of the capabilities and limitations of modern machine learning systems is necessary for clinicians as these tools begin to permeate practice. This systematic review aimed to provide an overview of machine learning applications designed to facilitate CXR interpretation. A systematic search strategy was executed to identify research into machine learning algorithms capable of detecting >2 radiographic findings on CXRs published between January 2020 and September 2022. Model details and study characteristics, including risk of bias and quality, were summarized. Initially, 2248 articles were retrieved, with 46 included in the final review. Published models demonstrated strong standalone performance and were typically as accurate as, or more accurate than, radiologists or non-radiologist clinicians. Multiple studies demonstrated an improvement in the clinical finding classification performance of clinicians when models acted as a diagnostic assistance device. Device performance was compared with that of clinicians in 30% of studies, while effects on clinical perception and diagnosis were evaluated in 19%. Only one study was run prospectively. On average, 128,662 images were used to train and validate models. Most classified fewer than eight clinical findings, while the three most comprehensive models classified 54, 72, and 124 findings. This review suggests that machine learning devices designed to facilitate CXR interpretation perform strongly, improve the detection performance of clinicians, and improve the efficiency of radiology workflow. Several limitations were identified, and clinician involvement and expertise will be key to driving the safe implementation of quality CXR machine learning systems.
Affiliation(s)
- Hassan K. Ahmad
- Annalise.ai, Sydney, NSW 2000, Australia
- Department of Emergency Medicine, Royal North Shore Hospital, Sydney, NSW 2065, Australia
- Quinlan D. Buchlak
- Annalise.ai, Sydney, NSW 2000, Australia
- School of Medicine, University of Notre Dame Australia, Sydney, NSW 2007, Australia
- Department of Neurosurgery, Monash Health, Melbourne, VIC 3168, Australia
- Jason Chiang
- Annalise.ai, Sydney, NSW 2000, Australia
- Department of General Practice, University of Melbourne, Melbourne, VIC 3010, Australia
- Westmead Applied Research Centre, University of Sydney, Sydney, NSW 2006, Australia
- Jarrel C. Y. Seah
- Annalise.ai, Sydney, NSW 2000, Australia
- Department of Radiology, Alfred Health, Melbourne, VIC 3004, Australia
- Nazanin Esmaili
- School of Medicine, University of Notre Dame Australia, Sydney, NSW 2007, Australia
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
- Peter Brotchie
- Annalise.ai, Sydney, NSW 2000, Australia
- Department of Radiology, St Vincent’s Health Australia, Melbourne, VIC 3065, Australia
- Catherine Jones
- Annalise.ai, Sydney, NSW 2000, Australia
- I-MED Radiology Network, Brisbane, QLD 4006, Australia
- School of Public and Preventive Health, Monash University, Clayton, VIC 3800, Australia
- Department of Clinical Imaging Science, University of Sydney, Sydney, NSW 2006, Australia
5
de Margerie-Mellon C, Chassagnon G. Artificial intelligence: A critical review of applications for lung nodule and lung cancer. Diagn Interv Imaging 2023; 104:11-17. [PMID: 36513593] [DOI: 10.1016/j.diii.2022.11.007]
Abstract
Artificial intelligence (AI) is a broad concept that usually refers to computer programs that can learn from data and perform certain specific tasks. In recent years, the growth of deep learning, a successful technique for computer vision tasks that does not require explicit programming, coupled with the availability of large imaging databases, has fostered the development of multiple applications in medical imaging, especially for lung nodules and lung cancer, mostly through convolutional neural networks (CNN). Some of the first applications of AI in this field were dedicated to the automated detection of lung nodules on X-ray and computed tomography (CT) examinations, with performances now reaching or exceeding those of radiologists. For lung nodule segmentation, CNN-based algorithms applied to CT images show an excellent spatial overlap index with manual segmentation, even for irregular and ground-glass nodules. A third application of AI is the classification of lung nodules as malignant or benign, which could limit the number of follow-up CT examinations for less suspicious lesions. Several algorithms have demonstrated excellent capabilities for predicting the malignancy risk when a nodule is discovered. These different applications of AI for lung nodules are particularly appealing in the context of lung cancer screening. In the field of lung cancer, AI tools applied to lung imaging have been investigated for distinct aims. First, they could play a role in the non-invasive characterization of tumors, especially for histological subtype and somatic mutation predictions, with a potential therapeutic impact. Additionally, they could help predict patient prognosis, in combination with clinical data.
Despite these encouraging perspectives, clinical implementation of AI tools is only beginning, because of the limited generalizability of published studies, the opacity of the tools' inner workings, and the scarcity of data on their impact on radiologists' decisions and patient outcomes. Radiologists must be active participants in the process of evaluating AI tools, as such tools could support their daily work and offer them more time for tasks with high added value.
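The "spatial overlap index" used to score CNN segmentations against manual ones is typically the Dice similarity coefficient, twice the intersection over the sum of the two mask sizes. A minimal sketch on flattened binary masks, an illustration rather than any cited algorithm's code:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks,
    given as equal-length flattened sequences of 0/1 voxels.
    Returns 1.0 for perfect overlap, 0.0 for no overlap."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * inter / total
```

For example, a predicted mask covering one of two manually labelled voxels plus no false positives scores 2/3.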
Affiliation(s)
- Constance de Margerie-Mellon
- Université Paris Cité, Laboratory of Imaging Biomarkers, Center for Research on Inflammation, UMR 1149, INSERM, 75018 Paris, France; Department of Radiology, Hôpital Saint-Louis APHP, 75010 Paris, France
- Guillaume Chassagnon
- Université Paris Cité, Faculté de Médecine, 75006 Paris, France; Department of Radiology, Hôpital Cochin APHP, 75014 Paris, France
6
Mathew R, Palatinus S, Padala S, Alshehri A, Awadh W, Bhandi S, Thomas J, Patil S. Neural networks for classification of cervical vertebrae maturation: a systematic review. Angle Orthod 2022; 92:796-804. [PMID: 36069934] [PMCID: PMC9598845] [DOI: 10.2319/031022-210.1]
Abstract
OBJECTIVE To assess the accuracy of identification and/or classification of the stage of cervical vertebrae maturity on lateral cephalograms by neural networks as compared with the ground truth determined by human observers. MATERIALS AND METHODS Search results from four electronic databases (PubMed [MEDLINE], Embase, Scopus, and Web of Science) were screened by two independent reviewers, and potentially relevant articles were chosen for full-text evaluation. Articles that fulfilled the inclusion criteria were selected for data extraction and methodologic assessment with the QUADAS-2 tool. RESULTS The search identified 425 articles across the databases, from which 8 were selected for inclusion. Most publications concerned the development of models with different input features. Performance of the systems was evaluated against the classifications performed by human observers. The accuracy of the models on the test data ranged from 50% to more than 90%. All studies raised concerns regarding the risk of bias in the index test and the reference standards. Studies that compared neural networks with other machine learning algorithms reported better results with neural networks. CONCLUSIONS Neural networks can detect and classify cervical vertebrae maturation stages on lateral cephalograms. However, further studies are needed to develop robust models, using appropriate reference standards, that generalize to external data.