1
Tao X, Reed WM, Li T, Brennan PC, Gandomkar Z. Optimizing mammography interpretation education: leveraging deep learning for cohort-specific error detection to enhance radiologist training. J Med Imaging (Bellingham) 2024; 11:055502. [PMID: 39372519] [PMCID: PMC11447382] [DOI: 10.1117/1.jmi.11.5.055502]
Abstract
Purpose: Accurate interpretation of mammograms presents challenges, and tailoring mammography training to reader profiles holds promise as a strategy for reducing interpretation errors. This proof-of-concept study investigated the feasibility of employing convolutional neural networks (CNNs) with transfer learning to categorize regions associated with false-positive (FP) errors within screening mammograms as having a "low" or "high" likelihood of being a false-positive detection for radiologists sharing similar geographic characteristics. Approach: Mammography test sets assessed by two geographically distant cohorts of radiologists (cohorts A and B) were collected. FP patches within these mammograms were segmented and categorized as "difficult" or "easy" based on the number of readers committing FP errors: patches lying more than 1.5 times the interquartile range above the upper quartile were labeled difficult, and the remaining patches were labeled easy. Using transfer learning, a patch-wise CNN model for binary patch classification was developed with ResNet as the feature extractor and fully connected layers modified for the target task. Model performance was assessed using 10-fold cross-validation. Results: Compared with other architectures, the transferred ResNet-50 achieved the highest performance, obtaining receiver operating characteristic area-under-the-curve values of 0.933 (±0.012) and 0.975 (±0.011) on the validation sets for cohorts A and B, respectively. Conclusions: The findings highlight the feasibility of employing CNN-based transfer learning to predict the difficulty level of local FP patches in screening mammograms for a specific radiologist cohort with similar geographic characteristics.
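The IQR-based labelling rule described in the Approach can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code, and the per-patch reader-error counts shown are hypothetical:

```python
import statistics

def label_patches(fp_counts, k=1.5):
    """Label a patch 'difficult' when its count of readers committing an FP
    error lies more than k * IQR above the upper quartile, else 'easy'."""
    q1, _, q3 = statistics.quantiles(fp_counts, n=4, method="inclusive")
    threshold = q3 + k * (q3 - q1)
    return ["difficult" if c > threshold else "easy" for c in fp_counts]

# Hypothetical counts of readers who committed an FP on each of 9 patches
labels = label_patches([1, 1, 2, 2, 2, 3, 3, 4, 12])
```

With these counts the upper quartile is 3 and the IQR is 1, so only the patch that fooled 12 readers exceeds the 4.5 threshold and is labelled difficult.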
Affiliation(s)
- Xuetong Tao
- The University of Sydney, Faculty of Health Sciences, Discipline of Medical Imaging Science, Sydney, New South Wales, Australia
- Warren M. Reed
- The University of Sydney, Faculty of Health Sciences, Discipline of Medical Imaging Science, Sydney, New South Wales, Australia
- Tong Li
- The Daffodil Centre, The University of Sydney, a joint venture with Cancer Council NSW, Woolloomooloo, New South Wales, Australia
- The University of Sydney, Sydney School of Public Health, Faculty of Medicine and Health, Sydney, New South Wales, Australia
- Patrick C. Brennan
- The University of Sydney, Faculty of Health Sciences, Discipline of Medical Imaging Science, Sydney, New South Wales, Australia
- Ziba Gandomkar
- The University of Sydney, Faculty of Health Sciences, Discipline of Medical Imaging Science, Sydney, New South Wales, Australia
2
Ding X, Huang Y, Zhao Y, Tian X, Feng G, Gao Z. Transfer learning for anatomical structure segmentation in otorhinolaryngology microsurgery. Int J Med Robot 2024; 20:e2634. [PMID: 38767083] [DOI: 10.1002/rcs.2634]
Abstract
BACKGROUND: Reducing the annotation burden is an active and meaningful area of artificial intelligence (AI) research. METHODS: Multiple datasets for the segmentation of two landmarks were constructed from 41,257 labelled images spanning 6 different microsurgical scenarios, and models were trained on them using a multi-stage transfer learning (TL) methodology. RESULTS: Multi-stage TL improved segmentation performance over the baseline (mIOU 0.8869 vs. 0.6892). Moreover, the convolutional neural networks (CNNs) remained robust (mIOU 0.8917 vs. 0.8603) even when the training dataset was reduced from 90% (30,078 images) to 10% (3342 images) of the data. When the weights from one surgical scenario were applied directly to recognise the same target in images from other scenarios, without further training, the CNNs still obtained an mIOU of 0.6190 ± 0.0789. CONCLUSIONS: Model performance can be improved with TL on datasets of reduced size and increased complexity, and data-based domain adaptation among different microsurgical fields is feasible.
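The mIOU figures quoted above are mean intersection-over-union scores between predicted and ground-truth segmentation masks. A minimal sketch of the metric for flat binary masks follows; the example masks are invented for illustration, not taken from the study:

```python
def iou(pred, gt):
    """Intersection over union of two flat binary masks."""
    inter = sum(p & g for p, g in zip(pred, gt))
    union = sum(p | g for p, g in zip(pred, gt))
    return inter / union if union else 1.0

def mean_iou(pairs):
    """Average IoU over (prediction, ground-truth) mask pairs."""
    return sum(iou(p, g) for p, g in pairs) / len(pairs)

score = mean_iou([([1, 1, 0, 0], [1, 0, 0, 0]),    # IoU = 1/2
                  ([0, 1, 1, 0], [0, 1, 1, 0])])   # IoU = 1
```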
Affiliation(s)
- Xin Ding
- Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
- Yu Huang
- Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
- Yang Zhao
- Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
- Xu Tian
- Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
- Guodong Feng
- Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
- Zhiqiang Gao
- Department of Otorhinolaryngology Head and Neck Surgery, The Peking Union Medical College Hospital, Beijing, China
3
Ramacciotti LS, Hershenhouse JS, Mokhtar D, Paralkar D, Kaneko M, Eppler M, Gill K, Mogoulianitis V, Duddalwar V, Abreu AL, Gill I, Cacciamani GE. Comprehensive Assessment of MRI-based Artificial Intelligence Frameworks Performance in the Detection, Segmentation, and Classification of Prostate Lesions Using Open-Source Databases. Urol Clin North Am 2024; 51:131-161. [PMID: 37945098] [DOI: 10.1016/j.ucl.2023.08.003]
Abstract
Numerous MRI-based artificial intelligence (AI) frameworks have been designed for prostate cancer lesion detection, segmentation, and classification, motivated by the intrareader and interreader variability inherent in traditional interpretation. Open-source data sets have been released with the intention of providing freely available MRIs for testing diverse AI frameworks on automated or semiautomated tasks. Here, an in-depth assessment of the performance of MRI-based AI frameworks for detecting, segmenting, and classifying prostate lesions using open-source databases was performed. Among 17 data sets, 12 were specific to prostate cancer detection/classification, and 52 studies met the inclusion criteria.
Affiliation(s)
- Lorenzo Storino Ramacciotti
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Jacob S Hershenhouse
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Daniel Mokhtar
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Divyangi Paralkar
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Masatomo Kaneko
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Department of Urology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Michael Eppler
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Karanvir Gill
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Vasileios Mogoulianitis
- Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Vinay Duddalwar
- Department of Radiology, University of Southern California, Los Angeles, CA, USA
- Andre L Abreu
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Department of Radiology, University of Southern California, Los Angeles, CA, USA
- Inderbir Gill
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Giovanni E Cacciamani
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Artificial Intelligence Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA; Center for Image-Guided and Focal Therapy for Prostate Cancer, Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA; Department of Radiology, University of Southern California, Los Angeles, CA, USA.
4
Liu P, Sun Y, Zhao X, Yan Y. Deep learning algorithm performance in contouring head and neck organs at risk: a systematic review and single-arm meta-analysis. Biomed Eng Online 2023; 22:104. [PMID: 37915046] [PMCID: PMC10621161] [DOI: 10.1186/s12938-023-01159-y]
Abstract
PURPOSE: The contouring of organs at risk (OARs) in head and neck cancer radiation treatment planning is a crucial yet repetitive and time-consuming process. Recent studies have applied deep learning (DL) algorithms to automatically contour head and neck OARs. This study conducts a systematic review and meta-analysis to summarize and analyze the performance of DL algorithms in contouring head and neck OARs, and to assess their advantages and limitations. METHODS: A literature search of the PubMed, Embase, and Cochrane Library databases was conducted for studies applying DL to contouring head and neck OARs, and the dice similarity coefficient (DSC) for four categories of OARs reported in each study was selected as the effect size for meta-analysis. A subgroup analysis of OARs by image modality and image type was also conducted. RESULTS: 149 articles were retrieved; after removal of duplicates, primary screening, and re-screening, 22 studies were included in the meta-analysis. The pooled DSC effect sizes for the brainstem, spinal cord, mandible, left eye, right eye, left optic nerve, right optic nerve, optic chiasm, left parotid, right parotid, left submandibular, and right submandibular glands were 0.87, 0.83, 0.92, 0.90, 0.90, 0.71, 0.74, 0.62, 0.85, 0.85, 0.82, and 0.82, respectively. In the subgroup analysis, the pooled effect sizes for segmentation of the brainstem, mandible, left optic nerve, and left parotid gland were 0.86/0.92, 0.92/0.90, 0.71/0.73, and 0.84/0.87 for CT/MRI images, and 0.88/0.87, 0.92/0.92, 0.75/0.71, and 0.87/0.85 for 2D/3D images, respectively.
CONCLUSIONS: Automated contouring based on DL algorithms is an essential tool for contouring head and neck OARs: it achieves high accuracy, reduces the workload of clinical radiation oncologists, and supports individualized, standardized, and refined treatment plans for implementing "precision radiotherapy". Improving DL performance will require the construction of high-quality data sets together with continued algorithmic optimization and innovation.
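The dice similarity coefficient (DSC) used as the effect size above measures the overlap between an automated contour and a reference contour. A minimal sketch for flat binary masks is shown below; the example masks are invented for illustration:

```python
def dice(pred, gt):
    """DSC = 2|A∩B| / (|A| + |B|) for flat binary masks; 1.0 is perfect overlap."""
    inter = sum(p & g for p, g in zip(pred, gt))
    total = sum(pred) + sum(gt)
    return 2 * inter / total if total else 1.0

# Predicted contour covers 4 voxels, reference covers 3, and they share 3
score = dice([1, 1, 1, 1, 0], [1, 1, 1, 0, 0])  # 2*3 / (4+3)
```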
Affiliation(s)
- Peiru Liu
- General Hospital of Northern Theater Command, Department of Radiation Oncology, Shenyang, China
- Beifang Hospital of China Medical University, Shenyang, China
- Ying Sun
- General Hospital of Northern Theater Command, Department of Radiation Oncology, Shenyang, China
- Xinzhuo Zhao
- Shenyang University of Technology, School of Electrical Engineering, Shenyang, China
- Ying Yan
- General Hospital of Northern Theater Command, Department of Radiation Oncology, Shenyang, China.
5
Ajilisa OA, Jagathy Raj VP, Sabu MK. A Deep Learning Framework for the Characterization of Thyroid Nodules from Ultrasound Images Using Improved Inception Network and Multi-Level Transfer Learning. Diagnostics (Basel) 2023; 13:2463. [PMID: 37510206] [PMCID: PMC10378664] [DOI: 10.3390/diagnostics13142463]
Abstract
In the past few years, deep learning has gained increasingly widespread attention and has been applied to diagnosing benign and malignant thyroid nodules. However, sufficient medical images are difficult to acquire, and the resulting data scarcity hinders the development of efficient deep-learning models. In this paper, we developed a deep-learning-based characterization framework to differentiate malignant from benign nodules in thyroid ultrasound images. The approach improves the recognition accuracy of the Inception network by combining squeeze-and-excitation networks with the Inception modules, and it integrates multi-level transfer learning using breast ultrasound images as a bridge dataset, addressing the domain difference between natural images and ultrasound images during transfer learning. This paper aimed to investigate how the framework could help radiologists improve diagnostic performance and avoid unnecessary fine-needle aspiration. The proposed approach based on multi-level transfer learning and improved Inception blocks achieved higher precision (0.9057 for the benign class and 0.9667 for the malignant class), recall (0.9796 for the benign class and 0.8529 for the malignant class), and F1-score (0.9412 for the benign class and 0.9062 for the malignant class), and it obtained an AUC of 0.9537, higher than that of the single-level transfer learning method. The experimental results show that the model can achieve classification accuracy comparable to that of experienced radiologists, saving time and effort while offering potential clinical application value.
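The squeeze-and-excitation (SE) mechanism combined with the Inception modules above reweights feature channels by a learned gate. A pure-Python sketch of one SE block follows; the tiny weight matrices and feature maps are hypothetical stand-ins for learned parameters, not the paper's architecture:

```python
import math

def se_block(feature_maps, w1, w2):
    """Squeeze-and-excitation over a list of C flat channel maps.
    w1 (C x C/r) and w2 (C/r x C) are the bottleneck and expansion weights."""
    # Squeeze: global average pooling reduces each channel map to one scalar
    z = [sum(ch) / len(ch) for ch in feature_maps]
    # Excitation: bottleneck FC + ReLU, then expansion FC + sigmoid gates
    h = [max(0.0, sum(z[i] * w1[i][j] for i in range(len(z))))
         for j in range(len(w1[0]))]
    s = [1.0 / (1.0 + math.exp(-sum(h[i] * w2[i][j] for i in range(len(h)))))
         for j in range(len(w2[0]))]
    # Scale: reweight each channel by its gate in (0, 1)
    return [[v * s[c] for v in ch] for c, ch in enumerate(feature_maps)]

fmap = [[1.0, 1.0], [2.0, 2.0]]   # two hypothetical 1x2 channel maps
w1 = [[1.0], [0.0]]               # squeeze 2 channels -> bottleneck of 1
w2 = [[1.0, 0.0]]                 # expand bottleneck back to 2 gates
out = se_block(fmap, w1, w2)
```

Here the second channel gets a sigmoid(0) = 0.5 gate, so its values are halved, while the first is scaled by sigmoid(1) ≈ 0.731.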
Affiliation(s)
- O A Ajilisa
- Department of Computer Applications, Cochin University of Science and Technology, South Kalamassery, Kochi 682022, Kerala, India
- V P Jagathy Raj
- School of Management Studies, Cochin University of Science and Technology, South Kalamassery, Kochi 682022, Kerala, India
- M K Sabu
- Department of Computer Applications, Cochin University of Science and Technology, South Kalamassery, Kochi 682022, Kerala, India