1. Geppert J, Asgharzadeh A, Brown A, Stinton C, Helm EJ, Jayakody S, Todkill D, Gallacher D, Ghiasvand H, Patel M, Auguste P, Tsertsvadze A, Chen YF, Grove A, Shinkins B, Clarke A, Taylor-Phillips S. Software using artificial intelligence for nodule and cancer detection in CT lung cancer screening: systematic review of test accuracy studies. Thorax 2024;79:1040-1049. PMID: 39322406; PMCID: PMC11503082; DOI: 10.1136/thorax-2024-221662.
Abstract
OBJECTIVES To examine the accuracy and impact of artificial intelligence (AI) software assistance in lung cancer screening using CT. METHODS A systematic review of CE-marked, AI-based software for automated detection and analysis of nodules in CT lung cancer screening was conducted. Multiple databases including Medline, Embase and Cochrane CENTRAL were searched from 2012 to March 2023. Primary research reporting test accuracy or impact on reading time or clinical management was included. QUADAS-2 and QUADAS-C were used to assess risk of bias. We undertook narrative synthesis. RESULTS Eleven studies evaluating six different AI-based software and reporting on 19 770 patients were eligible. All were at high risk of bias with multiple applicability concerns. Compared with unaided reading, AI-assisted reading was faster and generally improved sensitivity (+5% to +20% for detecting/categorising actionable nodules; +3% to +15% for detecting/categorising malignant nodules), with lower specificity (-7% to -3% for correctly detecting/categorising people without actionable nodules; -8% to -6% for correctly detecting/categorising people without malignant nodules). AI assistance tended to increase the proportion of nodules allocated to higher risk categories. Assuming 0.5% cancer prevalence, these results would translate into additional 150-750 cancers detected per million people attending screening but lead to an additional 59 700 to 79 600 people attending screening without cancer receiving unnecessary CT surveillance. CONCLUSIONS AI assistance in lung cancer screening may improve sensitivity but increases the number of false-positive results and unnecessary surveillance. Future research needs to increase the specificity of AI-assisted reading and minimise risk of bias and applicability concerns through improved study design. PROSPERO REGISTRATION NUMBER CRD42021298449.
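The per-million figures quoted above follow directly from the reported sensitivity and specificity changes under the stated 0.5% prevalence assumption. A minimal Python sketch of that arithmetic (not part of the original review) is shown below.

```python
# Translating sensitivity/specificity changes into per-million screening outcomes,
# assuming 0.5% cancer prevalence and the ranges quoted in the abstract above.

ATTENDEES = 1_000_000
PREVALENCE = 0.005

with_cancer = ATTENDEES * PREVALENCE          # 5,000 people
without_cancer = ATTENDEES - with_cancer      # 995,000 people

# Reported change in sensitivity for malignant nodules: +3% to +15%
extra_cancers_detected = [d * with_cancer for d in (0.03, 0.15)]        # 150 to 750

# Reported change in specificity: -6% to -8%
extra_false_positives = [d * without_cancer for d in (0.06, 0.08)]      # 59,700 to 79,600

print(f"Additional cancers detected per million: "
      f"{extra_cancers_detected[0]:.0f}-{extra_cancers_detected[1]:.0f}")
print(f"Additional people without cancer sent to CT surveillance: "
      f"{extra_false_positives[0]:.0f}-{extra_false_positives[1]:.0f}")
```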
Affiliation(s)
- Julia Geppert: Warwick Screening & Warwick Evidence, Warwick Medical School, University of Warwick, Coventry, UK
- Asra Asgharzadeh: Population Health Science, University of Bristol, Bristol, UK; Warwick Evidence, Warwick Medical School, University of Warwick, Coventry, UK
- Anna Brown: Warwick Evidence, Warwick Medical School, University of Warwick, Coventry, UK
- Chris Stinton: Warwick Screening & Warwick Evidence, Warwick Medical School, University of Warwick, Coventry, UK
- Emma J Helm: Department of Radiology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
- Surangi Jayakody: Warwick Evidence, Warwick Medical School, University of Warwick, Coventry, UK
- Daniel Todkill: Warwick Evidence, Warwick Medical School, University of Warwick, Coventry, UK
- Daniel Gallacher: Warwick Evidence, Warwick Medical School, University of Warwick, Coventry, UK
- Hesam Ghiasvand: Warwick Evidence, Warwick Medical School, University of Warwick, Coventry, UK; Research Centre for Healthcare and Communities, Coventry University, Coventry, UK
- Mubarak Patel: Warwick Evidence, Warwick Medical School, University of Warwick, Coventry, UK
- Peter Auguste: Warwick Evidence, Warwick Medical School, University of Warwick, Coventry, UK
- Yen-Fu Chen: Warwick Evidence, Warwick Medical School, University of Warwick, Coventry, UK
- Amy Grove: Warwick Evidence, Warwick Medical School, University of Warwick, Coventry, UK
- Bethany Shinkins: Warwick Screening & Warwick Evidence, Warwick Medical School, University of Warwick, Coventry, UK
- Aileen Clarke: Warwick Evidence, Warwick Medical School, University of Warwick, Coventry, UK
2. Hardie RC, Trout AT, Dillman JR, Narayanan BN, Tanimoto AA. Performance of Lung-Nodule Computer-Aided Detection Systems on Standard-Dose and Low-Dose Pediatric CT Scans: An Intraindividual Comparison. AJR Am J Roentgenol 2024. PMID: 39382534; DOI: 10.2214/ajr.24.31972.
Abstract
Background: When applying lung-nodule computer-aided detection (CAD) systems for pediatric CT, performance may be degraded on low-dose scans due to increased image noise. Objective: To conduct an intraindividual comparison of the performance for lung nodule detection of two CAD systems trained using adult data between low-dose and standard-dose pediatric chest CT scans. Methods: This retrospective study included 73 patients (32 female, 41 male; mean age, 14.7 years; age range, 4-20 years) who underwent both clinical standard-dose and investigational low-dose chest CT examinations within the same encounter from November 30, 2018 to August 31, 2020 as part of an earlier prospective study. Fellowship-trained pediatric radiologists annotated lung nodules to serve as the reference standard. Both CT scans were processed using two publicly available lung-nodule CAD systems previously trained using adult data: FlyerScan and Medical Open Network for Artificial Intelligence (MONAI). The systems' sensitivities for nodules measuring 3-30 mm (n=247) were calculated when operating at a fixed frequency of two false-positives per scan. Results: FlyerScan exhibited detection sensitivities of 76.9% (190/247; 95% CI: 73.3-80.8%) on standard-dose scans and 66.8% (165/247; 95% CI: 62.6-71.5) on low-dose scans. MONAI exhibited detection sensitivities of 67.6% (167/247, 95% CI: 61.5-72.1) on standard-dose scans and 62.3% (154/247, 95% CI: 56.1-66.5%) on low-dose scans. The number of detected nodules for standard-dose versus low-dose scans for 3-mm nodules was 33 versus 24 (FlyerScan) and 16 versus 13 (MONAI), 4-mm nodules was 46 versus 42 (FlyerScan) and 39 versus 30 (MONAI), 5-mm nodules was 38 versus 33 (FlyerScan) and 32 versus 31 (MONAI), and 6-mm nodules was 27 versus 20 (FlyerScan) and 24 versus 24 (MONAI). For nodules measuring ≥7 mm, detection did not show a consistent pattern between standard-dose and low-dose scans for either system. Conclusions: Two lung-nodule CAD systems demonstrated decreased sensitivity on low-dose versus standard-dose pediatric CT scans performed in the same patients. The reduced detection at low dose was overall more pronounced for nodules measuring less than 5 mm. Clinical Impact: Caution is needed when using low-dose CT protocols in combination with CAD systems to help detect small lung nodules in pediatric patients.
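Sensitivity "at a fixed frequency of two false positives per scan" is a standard FROC-style operating point. The sketch below illustrates how such an operating point can be computed from pooled CAD candidates; the function name, inputs, and nodule-matching simplification are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sensitivity_at_fixed_fp_rate(candidate_scores, candidate_is_tp,
                                 n_scans, n_ref_nodules, fp_per_scan=2.0):
    """Sensitivity at the threshold giving ~`fp_per_scan` false positives per scan.

    candidate_scores : confidence score of every CAD candidate, pooled over scans
    candidate_is_tp  : True if the candidate matches a reference-standard nodule
    n_scans          : number of CT scans in the test set
    n_ref_nodules    : number of reference nodules (3-30 mm in the study above)

    Simplification: each true-positive candidate is assumed to hit a distinct
    reference nodule.
    """
    scores = np.asarray(candidate_scores, dtype=float)
    is_tp = np.asarray(candidate_is_tp, dtype=bool)

    order = np.argsort(-scores)            # sweep the threshold from high to low
    fp_cum = np.cumsum(~is_tp[order])      # cumulative false positives
    tp_cum = np.cumsum(is_tp[order])       # cumulative detected nodules

    allowed_fp = fp_per_scan * n_scans
    idx = np.searchsorted(fp_cum, allowed_fp, side="right") - 1
    return 0.0 if idx < 0 else tp_cum[idx] / n_ref_nodules
```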
Affiliation(s)
- Russell C Hardie: Department of Electrical and Computer Engineering, University of Dayton, Dayton, Ohio 45469, USA
- Andrew T Trout: Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio 45229-3026, USA; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, Ohio 45267, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, Ohio 45267, USA
- Jonathan R Dillman: Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio 45229-3026, USA; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, Ohio 45267, USA
- Barath N Narayanan: University of Dayton Research Institute, Sensor and Software Systems, 1700 South Patterson Blvd., Dayton, Ohio 45469, USA
- Aki A Tanimoto: Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio 45229-3026, USA; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, Ohio 45267, USA
3. Wu J, Li R, Gan J, Zheng Q, Wang G, Tao W, Yang M, Li W, Ji G, Li W. Application of artificial intelligence in lung cancer screening: A real-world study in a Chinese physical examination population. Thorac Cancer 2024;15:2061-2072. PMID: 39206529; PMCID: PMC11444925; DOI: 10.1111/1759-7714.15428.
Abstract
BACKGROUND With the rapid increase in chest computed tomography (CT) examinations, the workload faced by radiologists has grown dramatically, and the use of artificial intelligence (AI) image-assisted diagnosis systems in clinical practice is a major trend in medical development. To explore the value and diagnostic accuracy of a current AI system in clinical application, we compared the detection and differentiation of benign and malignant pulmonary nodules between the AI system and physicians, so as to provide a theoretical basis for clinical use. METHODS Our study encompassed a cohort of 23 336 patients who underwent chest low-dose spiral CT screening for lung cancer at the Health Management Center of West China Hospital. We conducted a comparative analysis between AI-assisted reading and manual interpretation, focusing on the detection and differentiation of benign and malignant pulmonary nodules. RESULTS AI-assisted reading exhibited a significantly higher screening positive rate and probability of diagnosing malignant pulmonary nodules than manual interpretation (p < 0.001). Moreover, AI-assisted reading demonstrated a markedly superior detection rate of malignant pulmonary nodules compared with manual reading (97.2% vs. 86.4%, p < 0.001). Additionally, the lung cancer detection rate was substantially higher in the AI reading group than in the manual reading group (98.9% vs. 90.3%, p < 0.001). CONCLUSIONS Our findings underscore the superior screening positive rate and lung cancer detection rate achieved through AI-assisted reading compared with manual interpretation. Thus, AI has considerable potential as an adjunctive tool in lung cancer screening in clinical practice.
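The p < 0.001 comparisons of detection rates between AI-assisted and manual reading are two-proportion comparisons. A minimal sketch using a chi-square test is shown below; the counts are hypothetical placeholders chosen only to mirror the reported percentages, since the abstract does not give the underlying denominators.

```python
import numpy as np
from scipy.stats import chi2_contingency

def compare_detection_rates(detected_a, total_a, detected_b, total_b):
    """Chi-square test comparing two detection rates (e.g. AI-assisted vs manual)."""
    table = np.array([
        [detected_a, total_a - detected_a],
        [detected_b, total_b - detected_b],
    ])
    chi2, p, _, _ = chi2_contingency(table)
    return detected_a / total_a, detected_b / total_b, p

# Hypothetical counts mirroring the reported 97.2% vs 86.4% detection rates;
# the paper's actual denominators differ.
rate_ai, rate_manual, p = compare_detection_rates(972, 1000, 864, 1000)
print(f"AI {rate_ai:.1%} vs manual {rate_manual:.1%}, p = {p:.2e}")
```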
Affiliation(s)
- Jiaxuan Wu: Department of Pulmonary and Critical Care Medicine, West China Hospital, Sichuan University, Chengdu, Sichuan, China; State Key Laboratory of Respiratory Health and Multimorbidity, West China Hospital, Chengdu, Sichuan, China; Institute of Respiratory Health and Multimorbidity, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Ruicen Li: Health Management Center, General Practice Medical Center, West China Hospital, Sichuan University, Chengdu, China
- Jiadi Gan: Department of Pulmonary and Critical Care Medicine, West China Hospital, Sichuan University, Chengdu, Sichuan, China; State Key Laboratory of Respiratory Health and Multimorbidity, West China Hospital, Chengdu, Sichuan, China; Institute of Respiratory Health and Multimorbidity, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Qian Zheng: West China Clinical Medical College, Sichuan University, Chengdu, China
- Guoqing Wang: State Key Laboratory of Biotherapy and Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Wenjuan Tao: Institute of Hospital Management, West China Hospital, Sichuan University, Chengdu, China
- Ming Yang: National Clinical Research Center for Geriatrics (WCH), West China Hospital, Sichuan University, Chengdu, China; Center of Gerontology and Geriatrics, West China Hospital, Sichuan University, Chengdu, China
- Wenyu Li: Health Management Center, General Practice Medical Center, West China Hospital, Sichuan University, Chengdu, China
- Guiyi Ji: Health Management Center, General Practice Medical Center, West China Hospital, Sichuan University, Chengdu, China
- Weimin Li: Department of Pulmonary and Critical Care Medicine, West China Hospital, Sichuan University, Chengdu, Sichuan, China; State Key Laboratory of Respiratory Health and Multimorbidity, West China Hospital, Chengdu, Sichuan, China; Institute of Respiratory Health and Multimorbidity, West China Hospital, Sichuan University, Chengdu, Sichuan, China; Institute of Respiratory Health, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, Sichuan, China; Precision Medicine Center, Precision Medicine Key Laboratory of Sichuan Province, West China Hospital, Sichuan University, Chengdu, Sichuan, China; The Research Units of West China, Chinese Academy of Medical Sciences, West China Hospital, Chengdu, Sichuan, China
4. Chang JY, Makary MS. Evolving and Novel Applications of Artificial Intelligence in Thoracic Imaging. Diagnostics (Basel) 2024;14:1456. PMID: 39001346; PMCID: PMC11240935; DOI: 10.3390/diagnostics14131456.
Abstract
The advent of artificial intelligence (AI) is revolutionizing medicine, particularly radiology. With the development of newer models, AI applications are demonstrating improved performance and versatile utility in the clinical setting. Thoracic imaging is an area of profound interest, given the prevalence of chest imaging and the significant health implications of thoracic diseases. This review aims to highlight the promising applications of AI within thoracic imaging. It examines the role of AI, including its contributions to improving diagnostic evaluation and interpretation, enhancing workflow, and aiding in invasive procedures. Next, it further highlights the current challenges and limitations faced by AI, such as the necessity of 'big data', ethical and legal considerations, and bias in representation. Lastly, it explores the potential directions for the application of AI in thoracic radiology.
Affiliation(s)
- Jin Y Chang: Department of Radiology, The Ohio State University College of Medicine, Columbus, OH 43210, USA
- Mina S Makary: Department of Radiology, The Ohio State University College of Medicine, Columbus, OH 43210, USA; Division of Vascular and Interventional Radiology, Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
5. Hardie RC, Trout AT, Dillman JR, Narayanan BN, Tanimoto AA. Performance Analysis in Children of Traditional and Deep Learning CT Lung Nodule Computer-Aided Detection Systems Trained on Adults. AJR Am J Roentgenol 2024;222:e2330345. PMID: 37991333; DOI: 10.2214/ajr.23.30345.
Abstract
BACKGROUND. Although primary lung cancer is rare in children, chest CT is commonly performed to assess for lung metastases in children with cancer. Lung nodule computer-aided detection (CAD) systems have been designed and studied primarily using adult training data, and the efficacy of such systems when applied to pediatric patients is poorly understood. OBJECTIVE. The purpose of this study was to evaluate in children the diagnostic performance of traditional and deep learning CAD systems trained with adult data for the detection of lung nodules on chest CT scans and to compare the ability of such systems to generalize to children versus to other adults. METHODS. This retrospective study included pediatric and adult chest CT test sets. The pediatric test set comprised 59 CT scans in 59 patients (30 boys, 29 girls; mean age, 13.1 years; age range, 4-17 years), which were obtained from November 30, 2018, to August 31, 2020; lung nodules were annotated by fellowship-trained pediatric radiologists as the reference standard. The adult test set was the publicly available adult Lung Nodule Analysis (LUNA) 2016 subset 0, which contained 89 deidentified scans with previously annotated nodules. The test sets were processed through the traditional FlyerScan (github.com/rhardie1/FlyerScanCT) and deep learning Medical Open Network for Artificial Intelligence (MONAI; github.com/Project-MONAI/model-zoo/releases) lung nodule CAD systems, which had been trained on separate sets of CT scans in adults. Sensitivity and false-positive (FP) frequency were calculated for nodules measuring 3-30 mm; nonoverlapping 95% CIs indicated significant differences. RESULTS. Operating at two FPs per scan, on pediatric testing data FlyerScan and MONAI showed significantly lower detection sensitivities of 68.4% (197/288; 95% CI, 65.1-73.0%) and 53.1% (153/288; 95% CI, 46.7-58.4%), respectively, than on adult LUNA 2016 subset 0 testing data (83.9% [94/112; 95% CI, 79.1-88.0%] and 95.5% [107/112; 95% CI, 90.0-98.4%], respectively). Mean nodule size was smaller (p < .001) in the pediatric testing data (5.4 ± 3.1 [SD] mm) than in the adult LUNA 2016 subset 0 testing data (11.0 ± 6.2 mm). CONCLUSION. Adult-trained traditional and deep learning-based lung nodule CAD systems had significantly lower sensitivity for detection on pediatric data than on adult data at a matching FP frequency. The performance difference may relate to the smaller size of pediatric lung nodules. CLINICAL IMPACT. The results indicate a need for pediatric-specific lung nodule CAD systems trained on data specific to pediatric patients.
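The study's significance criterion is non-overlap of the 95% CIs around the detection sensitivities. A minimal sketch of that comparison using Wilson intervals is below; the published CIs may have been computed differently (for example by bootstrapping), so the exact interval bounds here are illustrative.

```python
from statsmodels.stats.proportion import proportion_confint

def sensitivity_ci(n_detected, n_reference, alpha=0.05):
    """Point estimate and Wilson 95% CI for nodule-level sensitivity."""
    low, high = proportion_confint(n_detected, n_reference, alpha=alpha, method="wilson")
    return n_detected / n_reference, (low, high)

# FlyerScan at 2 FPs/scan: pediatric vs adult (LUNA 2016 subset 0) test sets.
sens_ped, ci_ped = sensitivity_ci(197, 288)
sens_adult, ci_adult = sensitivity_ci(94, 112)

# Non-overlapping CIs are read as a significant difference in the paper.
overlap = not (ci_ped[1] < ci_adult[0] or ci_adult[1] < ci_ped[0])
print(f"pediatric {sens_ped:.1%} {ci_ped}, adult {sens_adult:.1%} {ci_adult}, overlap={overlap}")
```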
Affiliation(s)
- Russell C Hardie: Department of Electrical and Computer Engineering, University of Dayton, 300 College Park, Dayton, OH 45469
- Andrew T Trout: Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH
- Jonathan R Dillman: Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH
- Barath N Narayanan: Sensor and Software Systems, University of Dayton Research Institute, Dayton, OH
- Aki A Tanimoto: Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH
6. Du Y, Greuter MJW, Prokop MW, de Bock GH. Pricing and cost-saving potential for deep-learning computer-aided lung nodule detection software in CT lung cancer screening. Insights Imaging 2023;14:208. PMID: 38010436; PMCID: PMC10682324; DOI: 10.1186/s13244-023-01561-z.
Abstract
OBJECTIVE An increasing number of commercial deep learning computer-aided detection (DL-CAD) systems are available but their cost-saving potential is largely unknown. This study aimed to gain insight into appropriate pricing for DL-CAD in different reading modes to be cost-saving and to determine the potentially most cost-effective reading mode for lung cancer screening. METHODS In three representative settings, DL-CAD was evaluated as a concurrent, pre-screening, and second reader. Scoping review was performed to estimate radiologist reading time with and without DL-CAD. Hourly cost of radiologist time was collected for the USA (€196), UK (€127), and Poland (€45), and monetary equivalence of saved time was calculated. The minimum number of screening CTs to reach break-even was calculated for one-time investment of €51,616 for DL-CAD. RESULTS Mean reading time was 162 (95% CI: 111-212) seconds per case without DL-CAD, which decreased by 77 (95% CI: 47-107) and 104 (95% CI: 71-136) seconds for DL-CAD as concurrent and pre-screening reader, respectively, and increased by 33-41 s for DL-CAD as second reader. This translates into €1.0-4.3 per-case cost for concurrent reading and €0.8-5.7 for pre-screening reading in the USA, UK, and Poland. To achieve break-even with a one-time investment, the minimum number of CT scans was 12,300-53,600 for concurrent reader, and 9400-65,000 for pre-screening reader in the three countries. CONCLUSIONS Given current pricing, DL-CAD must be priced substantially below €6 in a pay-per-case setting or used in a high-workload environment to reach break-even in lung cancer screening. DL-CAD as pre-screening reader shows the largest potential to be cost-saving. CRITICAL RELEVANCE STATEMENT Deep-learning computer-aided lung nodule detection (DL-CAD) software must be priced substantially below 6 euro in a pay-per-case setting or must be used in high-workload environments with one-time investment in order to achieve break-even. DL-CAD as a pre-screening reader has the greatest cost savings potential. KEY POINTS • DL-CAD must be substantially below €6 in a pay-per-case setting to reach break-even. • DL-CAD must be used in a high-workload screening environment to achieve break-even. • DL-CAD as a pre-screening reader shows the largest potential to be cost-saving.
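The break-even arithmetic follows from the time saved per case and the hourly cost of radiologist time. A minimal sketch for the concurrent-reading mode, using the one-time investment of €51,616 and the mean 77 s saved per case quoted above, is shown below; the published pre-screening figures use additional country-specific inputs and are not reproduced here.

```python
HOURLY_COST = {"USA": 196.0, "UK": 127.0, "Poland": 45.0}   # EUR per radiologist-hour
INVESTMENT = 51_616.0                                       # one-time DL-CAD cost, EUR
TIME_SAVED_CONCURRENT = 77.0                                # mean seconds saved per case

for country, cost_per_hour in HOURLY_COST.items():
    saving_per_case = TIME_SAVED_CONCURRENT / 3600.0 * cost_per_hour
    break_even_scans = INVESTMENT / saving_per_case
    print(f"{country}: saves EUR {saving_per_case:.2f} per case, "
          f"break-even at ~{break_even_scans:,.0f} screening CTs")

# USA comes out near 12,300 scans and Poland near 53,600, matching the range above.
```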
Affiliation(s)
- Yihui Du: Department of Epidemiology and Health Statistics, School of Public Health, Hangzhou Normal University, Hangzhou, China; Department of Epidemiology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Marcel J W Greuter: Department of Radiology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Mathias W Prokop: Department of Radiology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
- Geertruida H de Bock: Department of Epidemiology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
7. Dong Y, Li X, Yang Y, Wang M, Gao B. A Synthesizing Semantic Characteristics Lung Nodules Classification Method Based on 3D Convolutional Neural Network. Bioengineering (Basel) 2023;10:1245. PMID: 38002369; PMCID: PMC10669569; DOI: 10.3390/bioengineering10111245.
Abstract
Early detection is crucial for the survival and recovery of lung cancer patients, and computer-aided diagnosis (CAD) systems can provide decision support for early diagnosis. While deep learning methods are increasingly applied to CAD tasks, these models often lack interpretability. In this paper, we propose a convolutional neural network model that synthesizes semantic characteristics (SCCNN) to predict whether a given pulmonary nodule is malignant. The model combines multi-view inputs, multi-task learning and an attention module in order to more fully simulate the actual diagnostic process of radiologists. Three-dimensional (3D) multi-view samples of lung nodules are extracted by a spatial sampling method, and semantic characteristics commonly used in radiology reports serve as an auxiliary task that helps explain how the model reaches its predictions. Introducing an attention module at the feature-fusion stage improves the classification of lung nodules as benign or malignant. Experimental results on the LIDC-IDRI (Lung Image Database Consortium and Image Database Resource Initiative) dataset show 95.45% accuracy and an area under the ROC (receiver operating characteristic) curve of 97.26%. The proposed method therefore not only classifies nodules as benign or malignant at a level comparable to standard 3D CNN approaches but can also intuitively explain how the model makes its predictions, which can assist clinical diagnosis.
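The ingredients named in the abstract (3D multi-view inputs, an auxiliary semantic-characteristics task, and attention at the feature-fusion stage) can be illustrated with a schematic PyTorch model. This is a simplified sketch, not the authors' SCCNN architecture; the layer sizes, number of views, and number of semantic labels are placeholders.

```python
import torch
import torch.nn as nn

class MultiViewNoduleNet(nn.Module):
    """Schematic multi-view, multi-task nodule classifier with attention fusion."""

    def __init__(self, n_semantic=8):
        super().__init__()
        self.backbone = nn.Sequential(              # shared 3D feature extractor
            nn.Conv3d(1, 16, 3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.attn = nn.Linear(32, 1)                    # per-view attention score
        self.malignancy_head = nn.Linear(32, 2)         # main task: benign vs malignant
        self.semantic_head = nn.Linear(32, n_semantic)  # auxiliary radiologist terms

    def forward(self, views):                       # views: (B, n_views, 1, D, H, W)
        b, v = views.shape[:2]
        feats = self.backbone(views.flatten(0, 1)).view(b, v, -1)   # (B, V, 32)
        weights = torch.softmax(self.attn(feats), dim=1)            # attention over views
        fused = (weights * feats).sum(dim=1)                        # (B, 32)
        return self.malignancy_head(fused), self.semantic_head(fused)

# Example: nine 32x32x32 views of a nodule for a batch of two candidates.
model = MultiViewNoduleNet()
malignancy_logits, semantic_logits = model(torch.randn(2, 9, 1, 32, 32, 32))
```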
Affiliation(s)
- Xiaoqin Li: Faculty of Environment and Life, Beijing University of Technology, Beijing 100124, China (Y.D., Y.Y., M.W., B.G.)
8. Lin M, Zhou Q, Lei T, Shang N, Zheng Q, He X, Wang N, Xie H. Deep learning system improved detection efficacy of fetal intracranial malformations in a randomized controlled trial. NPJ Digit Med 2023;6:191. PMID: 37833395; PMCID: PMC10575919; DOI: 10.1038/s41746-023-00932-6.
Abstract
Congenital malformations of the central nervous system are among the most common major congenital malformations. Deep learning systems have come to the fore in prenatal diagnosis of congenital malformation, but the impact of deep learning-assisted detection of congenital intracranial malformations from fetal neurosonographic images has not been evaluated. Here we report a three-way crossover, randomized controlled trial (Trial Registration: ChiCTR2100048233) that assesses the efficacy of a deep learning system, the Prenatal Ultrasound Diagnosis Artificial Intelligence Conduct System (PAICS), in assisting fetal intracranial malformation detection. A total of 709 fetal neurosonographic images/videos are read interactively by 36 sonologists of different expertise levels in three reading modes: unassisted mode (without PAICS assistance), concurrent mode (using PAICS at the beginning of the assessment) and second mode (using PAICS after a fully unaided interpretation). Aided by PAICS, the average accuracy of the unassisted mode (73%) is increased by the concurrent mode (80%; P < 0.001) and the second mode (82%; P < 0.001). Correspondingly, the AUC is increased from 0.85 to 0.89 and to 0.90, respectively (P < 0.001 for all). The median reading time per image/video is slightly increased in concurrent mode but substantially prolonged in the second mode, from 6 s to 7 s and to 11 s (P < 0.001 for all). In conclusion, PAICS in both concurrent and second modes has the potential to improve sonologists' performance in detecting fetal intracranial malformations from neurosonographic data. PAICS is more efficient when used concurrently for all readers.
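The accuracy and AUC gains of the assisted modes over unassisted reading are paired comparisons on the same 709 images/videos. Below is a minimal sketch of a paired bootstrap comparison of AUCs between two reading modes; the score and label arrays are placeholders, and the trial's actual analysis of its three-way crossover design is more involved than this.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def paired_bootstrap_auc_diff(y_true, score_unassisted, score_assisted,
                              n_boot=2000, seed=0):
    """Bootstrap mean and 95% CI for AUC(assisted) - AUC(unassisted) on the same cases."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    s0 = np.asarray(score_unassisted)
    s1 = np.asarray(score_assisted)
    n = len(y_true)
    diffs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)              # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:      # AUC needs both classes present
            continue
        diffs.append(roc_auc_score(y_true[idx], s1[idx]) -
                     roc_auc_score(y_true[idx], s0[idx]))
    diffs = np.array(diffs)
    return diffs.mean(), np.percentile(diffs, [2.5, 97.5])
```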
Affiliation(s)
- Meifang Lin: Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Qian Zhou: Department of Medical Statistics, Clinical Trials Unit, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China; Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong, China
- Ting Lei: Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Ning Shang: Department of Ultrasound, Guangdong Women and Children Hospital, Guangzhou, Guangdong, China
- Qiao Zheng: Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
- Xiaoqin He: Department of Ultrasound, Women and Children's Hospital affiliated to Xiamen University, Xiamen, Fujian, China
- Nan Wang: Guangzhou Aiyunji Information Technology Co., Ltd, Guangzhou, Guangdong, China
- Hongning Xie: Department of Ultrasonic Medicine, Fetal Medical Center, First Affiliated Hospital of Sun Yat-sen University, Guangzhou, Guangdong, China
9. Ewals LJS, van der Wulp K, van den Borne BEEM, Pluyter JR, Jacobs I, Mavroeidis D, van der Sommen F, Nederend J. The Effects of Artificial Intelligence Assistance on the Radiologists' Assessment of Lung Nodules on CT Scans: A Systematic Review. J Clin Med 2023;12:3536. PMID: 37240643; DOI: 10.3390/jcm12103536.
Abstract
To reduce the number of missed or misdiagnosed lung nodules on CT scans by radiologists, many Artificial Intelligence (AI) algorithms have been developed. Some algorithms are currently being implemented in clinical practice, but the question is whether radiologists and patients really benefit from the use of these novel tools. This study aimed to review how AI assistance for lung nodule assessment on CT scans affects the performances of radiologists. We searched for studies that evaluated radiologists' performances in the detection or malignancy prediction of lung nodules with and without AI assistance. Concerning detection, radiologists achieved with AI assistance a higher sensitivity and AUC, while the specificity was slightly lower. Concerning malignancy prediction, radiologists achieved with AI assistance generally a higher sensitivity, specificity and AUC. The radiologists' workflows of using the AI assistance were often only described in limited detail in the papers. As recent studies showed improved performances of radiologists with AI assistance, AI assistance for lung nodule assessment holds great promise. To achieve added value of AI tools for lung nodule assessment in clinical practice, more research is required on the clinical validation of AI tools, impact on follow-up recommendations and ways of using AI tools.
Affiliation(s)
- Lotte J S Ewals: Department of Radiology, Catharina Cancer Institute, Catharina Hospital Eindhoven, 5623 EJ Eindhoven, The Netherlands; Department of Electrical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Kasper van der Wulp: Department of Radiology, Catharina Cancer Institute, Catharina Hospital Eindhoven, 5623 EJ Eindhoven, The Netherlands
- Ben E E M van den Borne: Department of Pulmonology, Catharina Cancer Institute, Catharina Hospital Eindhoven, 5623 EJ Eindhoven, The Netherlands
- Jon R Pluyter: Department of Experience Design, Royal Philips, 5656 AE Eindhoven, The Netherlands
- Igor Jacobs: Department of Hospital Services and Informatics, Philips Research, 5656 AE Eindhoven, The Netherlands
- Dimitrios Mavroeidis: Department of Data Science, Philips Research, 5656 AE Eindhoven, The Netherlands
- Fons van der Sommen: Department of Electrical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Joost Nederend: Department of Radiology, Catharina Cancer Institute, Catharina Hospital Eindhoven, 5623 EJ Eindhoven, The Netherlands
10. Iqbal S, Qureshi AN, Li J, Mahmood T. On the Analyses of Medical Images Using Traditional Machine Learning Techniques and Convolutional Neural Networks. Archives of Computational Methods in Engineering 2023;30:3173-3233. PMID: 37260910; PMCID: PMC10071480; DOI: 10.1007/s11831-023-09899-9.
Abstract
Convolutional neural networks (CNNs) have shown impressive accomplishments in many areas, especially object detection, segmentation, reconstruction (2D and 3D), information retrieval, medical image registration, multilingual translation, natural language processing, anomaly detection in video, and speech recognition. A CNN is a special type of neural network with a compelling and effective ability to learn features at several steps during augmentation of the data. Recently, interesting and inspiring ideas in deep learning (DL), such as new activation functions, hyperparameter optimization, regularization, momentum and loss functions, have improved the performance, operation and execution of CNNs, and different internal architectural innovations and representational styles of CNNs have further raised performance. This survey focuses on the internal taxonomy of deep learning and on different convolutional neural network models, especially model depth and width, in addition to CNN components, applications and the current challenges of deep learning.
Affiliation(s)
- Saeed Iqbal: Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000, Pakistan; Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Adnan N. Qureshi: Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000, Pakistan
- Jianqiang Li: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; Beijing Engineering Research Center for IoT Software and Systems, Beijing University of Technology, Beijing 100124, China
- Tariq Mahmood: Artificial Intelligence and Data Analytics (AIDA) Lab, College of Computer & Information Sciences (CCIS), Prince Sultan University, Riyadh 11586, Kingdom of Saudi Arabia
11. Implementation of artificial intelligence in thoracic imaging - a what, how, and why guide from the European Society of Thoracic Imaging (ESTI). Eur Radiol 2023. PMID: 36729173; PMCID: PMC9892666; DOI: 10.1007/s00330-023-09409-2.
Abstract
This statement from the European Society of Thoracic Imaging (ESTI) explains and summarises the essentials for understanding and implementing artificial intelligence (AI) in clinical practice in thoracic radiology departments. The document discusses the current AI scientific evidence in thoracic imaging, its potential clinical utility, implementation and costs, training requirements and validation, its effect on the training of new radiologists, post-implementation issues, and medico-legal and ethical issues. All of these issues have to be addressed and overcome for AI to become clinically implemented in thoracic radiology. KEY POINTS: • Assessing the datasets used for training and validation of the AI system is essential. • A departmental strategy and business plan which includes continuing quality assurance of the AI system and a sustainable financial plan is important for successful implementation. • Awareness of the negative effect on the training of new radiologists is vital.
12.
Affiliation(s)
- Theresa C McLoud: Department of Radiology, Harvard Medical School, Massachusetts General Hospital, 55 Fruit St, MZ-FND 216, Boston, MA 02114-2696
- Brent P Little: Department of Radiology, Mayo Clinic College of Medicine and Science, Mayo Clinic Florida, Jacksonville, Fla
13. Kato S, Amemiya S, Takao H, Yamashita H, Sakamoto N, Miki S, Watanabe Y, Suzuki F, Fujimoto K, Mizuki M, Abe O. Computer-aided detection improves brain metastasis identification on non-enhanced CT in less experienced radiologists. Acta Radiol 2022;64:1958-1965. PMID: 36426577; DOI: 10.1177/02841851221139124.
Abstract
Background Brain metastases (BMs) are the most common intracranial tumors causing neurological complications associated with significant morbidity and mortality. Purpose To evaluate the effect of computer-aided detection (CAD) on the performance of observers in detecting BMs on non-enhanced computed tomography (NECT). Material and Methods Three less experienced and three experienced radiologists interpreted 30 NECT scans with 89 BMs in 25 cases to detect BMs with and without the assistance of CAD. The observers' sensitivity, number of false positives (FPs), positive predictive value (PPV), and reading time with and without CAD were compared using paired t-tests. The sensitivities of CAD and the observers were compared using a one-sample t-test. Results With CAD, the less experienced radiologists' sensitivity significantly increased from 27.7% ± 4.6% to 32.6% ± 4.8% (P = 0.007), while the experienced radiologists' sensitivity did not show a significant difference (from 33.3% ± 3.5% to 31.9% ± 3.7%; P = 0.54). There was no significant difference between conditions with and without CAD for FPs (less experienced radiologists: 23.0 ± 10.4 and 25.0 ± 9.3; P = 0.32; experienced radiologists: 18.3 ± 7.4 and 17.3 ± 6.7; P = 0.76) or PPVs (less experienced radiologists: 57.9% ± 8.3% and 50.9% ± 7.0%; P = 0.14; experienced radiologists: 61.8% ± 12.7% and 64.0% ± 12.1%; P = 0.69). There were no significant differences in reading time with and without CAD (85.0 ± 45.6 s and 73.7 ± 36.7 s; P = 0.09). The sensitivity of CAD was 47.2% (with a PPV of 8.9%), which was significantly higher than that of any radiologist (P < 0.001). Conclusion CAD improved BM detection sensitivity on NECT without increasing FPs or reading time among less experienced radiologists, but this was not the case among experienced radiologists.
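The with-CAD versus without-CAD comparisons above are paired tests across the same readers. A minimal scipy sketch is below; the per-reader sensitivities are hypothetical values chosen only to resemble the reported group means, since the paper reports means ± SD rather than individual readings.

```python
from scipy.stats import ttest_rel

# Hypothetical per-reader sensitivities (%) for the three less experienced
# radiologists, without and with CAD (the paper reports 27.7±4.6 vs 32.6±4.8).
sens_without_cad = [23.5, 27.0, 32.6]
sens_with_cad = [28.1, 32.0, 37.7]

t_stat, p_value = ttest_rel(sens_with_cad, sens_without_cad)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```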
Affiliation(s)
- Shimpei Kato: Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Shiori Amemiya: Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Hidemasa Takao: Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Hiroshi Yamashita: Department of Radiology, Teikyo University Hospital, Kawasaki, Kanagawa, Japan
- Naoya Sakamoto: Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Soichiro Miki: Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Yusuke Watanabe: Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Fumio Suzuki: Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Kotaro Fujimoto: Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Masumi Mizuki: Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Osamu Abe: Department of Radiology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
14. Wang L. Deep Learning Techniques to Diagnose Lung Cancer. Cancers (Basel) 2022;14:5569. PMID: 36428662; PMCID: PMC9688236; DOI: 10.3390/cancers14225569.
Abstract
Medical imaging tools are essential in early-stage lung cancer diagnostics and the monitoring of lung cancer during treatment. Various medical imaging modalities, such as chest X-ray, magnetic resonance imaging, positron emission tomography, computed tomography, and molecular imaging techniques, have been extensively studied for lung cancer detection. These techniques have some limitations, including not classifying cancer images automatically, which is unsuitable for patients with other pathologies. It is urgently necessary to develop a sensitive and accurate approach to the early diagnosis of lung cancer. Deep learning is one of the fastest-growing topics in medical imaging, with rapidly emerging applications spanning medical image-based and textural data modalities. With the help of deep learning-based medical imaging tools, clinicians can detect and classify lung nodules more accurately and quickly. This paper presents the recent development of deep learning-based imaging techniques for early lung cancer detection.
Affiliation(s)
- Lulu Wang: Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen 518118, China
15. Liu JA, Yang IY, Tsai EB. Artificial Intelligence (AI) for Lung Nodules, From the AJR Special Series on AI Applications. AJR Am J Roentgenol 2022;219:703-712. PMID: 35544377; DOI: 10.2214/ajr.22.27487.
Abstract
Interest in artificial intelligence (AI) applications for lung nodules continues to grow among radiologists, particularly with the expanding eligibility criteria and clinical utilization of lung cancer screening CT. AI has been heavily investigated for detecting and characterizing lung nodules and for guiding prognostic assessment. AI tools have also been used for image postprocessing (e.g., rib suppression on radiography or vessel suppression on CT) and for noninterpretive aspects of reporting and workflow, including management of nodule follow-up. Despite growing interest in and rapid development of AI tools and FDA approval of AI tools for pulmonary nodule evaluation, integration into clinical practice has been limited. Challenges to clinical adoption have included concerns about generalizability, regulatory issues, technical hurdles in implementation, and human skepticism. Further validation of AI tools for clinical use and demonstration of benefit in terms of patient-oriented outcomes also are needed. This article provides an overview of potential applications of AI tools in the imaging evaluation of lung nodules and discusses the challenges faced by practices interested in clinical implementation of such tools.
Affiliation(s)
- Jonathan A Liu: Department of Radiology, Stanford University School of Medicine, 453 Quarry Rd, MC 5659, Palo Alto, CA 94304; present affiliation: Department of Radiology, University of California, San Francisco, San Francisco, CA
- Issac Y Yang: Department of Radiology, Stanford University School of Medicine, 453 Quarry Rd, MC 5659, Palo Alto, CA 94304
- Emily B Tsai: Department of Radiology, Stanford University School of Medicine, 453 Quarry Rd, MC 5659, Palo Alto, CA 94304
16. Hempel HL, Engbersen MP, Wakkie J, van Kelckhoven BJ, de Monyé W. Higher agreement between readers with deep learning CAD software for reporting pulmonary nodules on CT. Eur J Radiol Open 2022;9:100435. PMID: 35942077; PMCID: PMC9356194; DOI: 10.1016/j.ejro.2022.100435.
Abstract
Purpose The aim was to evaluate the impact of CAD software on the pulmonary nodule management recommendations of radiologists in a cohort of patients with incidentally detected nodules on CT. Methods For this retrospective study, two radiologists independently assessed 50 chest CT cases for pulmonary nodules to determine the appropriate management recommendation, twice, unaided and aided by CAD, with a 6-month washout period. Management recommendations were given as a 4-point grade based on the BTS guidelines. Both reading sessions were recorded to determine the reading time per case. The reduction in reading time was tested with a one-tailed paired t-test, and a linear weighted kappa was calculated to assess interobserver agreement. Results The mean age of the included patients was 65.0 ± 10.9 years. Twenty patients were male (40%). For readers 1 and 2, significant reductions in reading time of 33.4% and 42.6% were observed (p < 0.001 for both); the mean reading time per case was 226.4 ± 113.2 and 320.8 ± 164.2 s unaided and 150.8 ± 74.2 and 184.2 ± 125.3 s aided by CAD software for readers 1 and 2, respectively. The linear weighted kappa between readers was 0.61 unaided; with the aid of CAD, readers showed better agreement, with a kappa of 0.84. Conclusion A dedicated CAD system for aiding pulmonary nodule reporting may help improve the uniformity of management recommendations in clinical practice.
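Interobserver agreement on the 4-point management grades is summarised with a linear weighted kappa, and the reading-time reduction with a one-tailed paired t-test. A minimal sketch of both calculations is below; the grade vectors and reading times are placeholders, not the study data.

```python
from scipy.stats import ttest_rel
from sklearn.metrics import cohen_kappa_score

# Placeholder 4-point BTS-style management grades for 10 of the 50 cases,
# as given by two readers in the same reading condition.
grades_reader1 = [1, 2, 2, 3, 1, 4, 2, 3, 1, 2]
grades_reader2 = [1, 2, 3, 3, 1, 4, 2, 2, 1, 2]
kappa = cohen_kappa_score(grades_reader1, grades_reader2, weights="linear")

# Placeholder per-case reading times (s): one-tailed test that aided reading is faster.
times_unaided = [310, 280, 250, 400, 220, 330, 290, 260, 350, 300]
times_aided = [190, 160, 150, 260, 140, 200, 170, 150, 210, 180]
t_stat, p_one_tailed = ttest_rel(times_aided, times_unaided, alternative="less")

print(f"linear weighted kappa = {kappa:.2f}, one-tailed p = {p_one_tailed:.4f}")
```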
Affiliation(s)
- H L Hempel: Department of Radiology, Spaarne Gasthuis Hospital, Hoofddorp, the Netherlands; Aidence B.V., Amsterdam, the Netherlands
- M P Engbersen: Department of Radiology, Spaarne Gasthuis Hospital, Hoofddorp, the Netherlands; Aidence B.V., Amsterdam, the Netherlands
- J Wakkie: Department of Radiology, Spaarne Gasthuis Hospital, Hoofddorp, the Netherlands; Aidence B.V., Amsterdam, the Netherlands
- B J van Kelckhoven: Department of Radiology, Spaarne Gasthuis Hospital, Hoofddorp, the Netherlands; Aidence B.V., Amsterdam, the Netherlands
- W de Monyé: Department of Radiology, Spaarne Gasthuis Hospital, Hoofddorp, the Netherlands; Aidence B.V., Amsterdam, the Netherlands
17. Min Y, Hu L, Wei L, Nie S. Computer-aided detection of pulmonary nodules based on convolutional neural networks: a review. Phys Med Biol 2022;67. DOI: 10.1088/1361-6560/ac568e.
Abstract
Computer-aided detection (CADe) technology has been proven to increase the detection rate of pulmonary nodules that has important clinical significance for the early diagnosis of lung cancer. In this study, we systematically review the latest techniques in pulmonary nodule CADe based on deep learning models with convolutional neural networks in computed tomography images. First, the brief descriptions and popular architecture of convolutional neural networks are introduced. Second, several common public databases and evaluation metrics are briefly described. Third, state-of-the-art approaches with excellent performances are selected. Subsequently, we combine the clinical diagnostic process and the traditional four steps of pulmonary nodule CADe into two stages, namely, data preprocessing and image analysis. Further, the major optimizations of deep learning models and algorithms are highlighted according to the progressive evaluation effect of each method, and some clinical evidence is added. Finally, various methods are summarized and compared. The innovative or valuable contributions of each method are expected to guide future research directions. The analyzed results show that deep learning-based methods significantly transformed the detection of pulmonary nodules, and the design of these methods can be inspired by clinical imaging diagnostic procedures. Moreover, focusing on the image analysis stage will result in improved returns. In particular, optimal results can be achieved by optimizing the steps of candidate nodule generation and false positive reduction. End-to-end methods, with greater operating speeds and lower computational consumptions, are superior to other methods in CADe of pulmonary nodules.
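The review organises CNN-based nodule CADe around candidate generation followed by false-positive reduction. A minimal structural sketch of that two-stage flow is below; the model objects, methods, and thresholds are placeholders standing in for whichever detector and classifier a given system uses, not any specific published pipeline.

```python
from typing import List, Tuple

Candidate = Tuple[float, float, float, float]   # (x, y, z, score) in a CT volume

def two_stage_cade(volume, candidate_model, fp_reduction_model,
                   stage1_threshold=0.1, stage2_threshold=0.5) -> List[Candidate]:
    """Generic two-stage CADe flow: sensitive candidate generation,
    then a second classifier that prunes false positives."""
    # Stage 1: run a high-sensitivity detector over the (preprocessed) volume.
    candidates = [c for c in candidate_model.detect(volume) if c[3] >= stage1_threshold]

    # Stage 2: rescore each candidate patch and keep only likely nodules.
    accepted = []
    for x, y, z, _ in candidates:
        patch = volume.extract_patch(x, y, z)          # placeholder patch extraction
        score = fp_reduction_model.predict(patch)      # placeholder classifier
        if score >= stage2_threshold:
            accepted.append((x, y, z, score))
    return accepted
```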
18. Zhang Y, Jiang B, Zhang L, Greuter MJW, de Bock GH, Zhang H, Xie X. Lung Nodule Detectability of Artificial Intelligence-assisted CT Image Reading in Lung Cancer Screening. Curr Med Imaging 2021;18:327-334. PMID: 34365951; DOI: 10.2174/1573405617666210806125953.
Abstract
BACKGROUND Artificial intelligence (AI)-based automatic lung nodule detection systems improve the detection rate of nodules. It is important to evaluate the clinical value of an AI system by comparing AI-assisted nodule detection with actual radiology reports. OBJECTIVE To compare the detection rate of lung nodules between the actual radiology reports and AI-assisted reading in lung cancer CT screening. METHODS Participants in chest CT screening from November to December 2019 were retrospectively included. In the real-world radiologist observation, 14 residents and 15 radiologists participated to finalize radiology reports. In AI-assisted reading, one resident and one radiologist reevaluated all subjects with the assistance of an AI system to locate and measure the detected lung nodules. A reading panel determined the type and number of detected lung nodules between these two methods. RESULTS In 860 participants (57±7 years), the reading panel confirmed 250 patients with >1 solid nodule, while radiologists observed 131, lower than 247 by AI-assisted reading (p<0.001). The panel confirmed 111 patients with >1 non-solid nodule, whereas radiologist observation identified 28, lower than 110 by AI-assisted reading (p<0.001). The accuracy and sensitivity of radiologist observation for solid nodules were 86.2% and 52.4%, lower than 99.1% and 98.8% by AI-assisted reading, respectively. These metrics were 90.4% and 25.2% for non-solid nodules, lower than 98.8% and 99.1% by AI-assisted reading, respectively. CONCLUSION Compared with the actual radiology reports, AI-assisted reading greatly improves the accuracy and sensitivity of nodule detection on chest CT, which benefits lung nodule detection, especially for non-solid nodules.
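The accuracy and sensitivity figures above are patient-level metrics computed against the reading panel's reference. A minimal sketch of that computation is below; the boolean arrays are an illustrative reconstruction from the counts quoted in the abstract, with false positives ignored for simplicity.

```python
import numpy as np

def patient_level_metrics(reference, observed):
    """Accuracy and sensitivity of per-patient nodule calls against a panel reference.

    reference, observed: boolean arrays, True = 'at least one solid nodule'.
    """
    reference = np.asarray(reference, dtype=bool)
    observed = np.asarray(observed, dtype=bool)
    accuracy = float(np.mean(reference == observed))
    sensitivity = float(observed[reference].mean()) if reference.any() else float("nan")
    return accuracy, sensitivity

# Illustrative reconstruction: 860 participants, 250 with a solid nodule per the
# panel, 131 of whom appeared in the radiology reports; false positives are
# ignored here, so the numbers only approximate the paper's.
panel = np.zeros(860, dtype=bool)
panel[:250] = True
report_calls = panel.copy()
report_calls[131:250] = False          # 131 of the 250 positives reported

acc, sens = patient_level_metrics(panel, report_calls)
print(f"accuracy {acc:.1%}, sensitivity {sens:.1%}")   # about 86.2% and 52.4%
```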
Affiliation(s)
- Yaping Zhang: Department of Radiology, Shanghai General Hospital of Nanjing Medical University, Haining Rd. 100, Shanghai 200080, China
- Beibei Jiang: Department of Radiology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Haining Rd. 100, Shanghai 200080, China
- Lu Zhang: Department of Radiology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Haining Rd. 100, Shanghai 200080, China
- Marcel J W Greuter: Department of Radiology, University of Groningen, University Medical Center Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Geertruida H de Bock: Department of Epidemiology, University of Groningen, University Medical Center Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Hao Zhang: Department of Radiology, Shanghai General Hospital of Nanjing Medical University, Haining Rd. 100, Shanghai 200080, China
- Xueqian Xie: Department of Radiology, Shanghai General Hospital of Nanjing Medical University, Haining Rd. 100, Shanghai 200080, China