1
Africano G, Arponen O, Rinta-Kiikka I, Pertuz S. Transfer learning for the generalization of artificial intelligence in breast cancer detection: a case-control study. Acta Radiol 2024; 65:334-340. [PMID: 38115699] [DOI: 10.1177/02841851231218960]
Abstract
BACKGROUND Some researchers have questioned whether artificial intelligence (AI) systems maintain their performance when used for women from populations not considered during the development of the system. PURPOSE To evaluate the impact of transfer learning as a way of improving the generalization of AI systems in the detection of breast cancer. MATERIAL AND METHODS This retrospective case-control Finnish study involved 191 women diagnosed with breast cancer and 191 matched healthy controls. We selected a state-of-the-art AI system for breast cancer detection trained using a large US dataset. The selected baseline system was evaluated in two experimental settings. First, we examined our private Finnish sample as an independent test set that had not been considered in the development of the system (unseen population). Second, the baseline system was retrained to attempt to improve its performance in the unseen population by means of transfer learning. To analyze performance, we used areas under the receiver operating characteristic curve (AUCs) with DeLong's test. RESULTS Two versions of the baseline system were considered: ImageOnly and Heatmaps. The ImageOnly and Heatmaps versions yielded mean AUC values of 0.82±0.008 and 0.88±0.003 in the US dataset and 0.56 (95% CI=0.50-0.62) and 0.72 (95% CI=0.67-0.77) when evaluated in the unseen population, respectively. The retrained systems achieved AUC values of 0.61 (95% CI=0.55-0.66) and 0.69 (95% CI=0.64-0.75), respectively. There was no statistically significant difference between the baseline and retrained systems. CONCLUSION Transfer learning with a small study sample did not yield a significant improvement in the generalization of the system.
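As a hedged illustration of the transfer-learning setup the abstract describes (a sketch only: the backbone, head, optimizer, and hyperparameters are assumptions, not the authors' system), retraining can be as simple as freezing pretrained features and fitting a new classification head on the small target-population sample:

```python
# Illustrative sketch of transfer learning on a small unseen-population
# sample: freeze a pretrained backbone, retrain only a new head.
# The ResNet-50 backbone is a stand-in, not the study's baseline system.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V2")   # assumed stand-in backbone
for p in model.parameters():
    p.requires_grad = False                        # keep source-domain features fixed
model.fc = nn.Linear(model.fc.in_features, 2)      # new head: cancer vs. healthy control

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def finetune_step(images, labels):
    """One transfer-learning step on a batch from the target population."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```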
Affiliation(s)
- Gerson Africano
  - School of Electrical, Electronics and Telecommunications Engineering, Universidad Industrial de Santander, Bucaramanga, Colombia
- Otso Arponen
  - Department of Radiology, Tampere University Hospital, Tampere, Finland
  - Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Irina Rinta-Kiikka
  - Department of Radiology, Tampere University Hospital, Tampere, Finland
  - Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Said Pertuz
  - School of Electrical, Electronics and Telecommunications Engineering, Universidad Industrial de Santander, Bucaramanga, Colombia
2
Qasem A, Qin G, Zhou Z. AMS-U-Net: automatic mass segmentation in digital breast tomosynthesis via U-Net. J Med Imaging (Bellingham) 2024; 11:024005. [PMID: 38525294] [PMCID: PMC10960181] [DOI: 10.1117/1.jmi.11.2.024005]
Abstract
Purpose The objective of this study was to develop a fully automatic mass segmentation method called AMS-U-Net for digital breast tomosynthesis (DBT), a popular breast cancer screening imaging modality. The aim was to address the challenges posed by the increasing number of slices in DBT, which leads to higher mass contouring workload and decreased treatment efficiency. Approach The study used 50 slices from different DBT volumes for evaluation. The AMS-U-Net approach consisted of four stages: image pre-processing, AMS-U-Net training, image segmentation, and post-processing. The model performance was evaluated by calculating the true positive ratio (TPR), false positive ratio (FPR), F-score, intersection over union (IoU), and 95% Hausdorff distance (pixels) as they are appropriate for datasets with class imbalance. Results The model achieved 0.911, 0.003, 0.911, 0.900, 5.82 for TPR, FPR, F-score, IoU, and 95% Hausdorff distance, respectively. Conclusions The AMS-U-Net model demonstrated impressive visual and quantitative results, achieving high accuracy in mass segmentation without the need for human interaction. This capability has the potential to significantly increase clinical efficiency and workflow in DBT for breast cancer screening.
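For reference, the reported overlap metrics can be computed from boolean masks as in the following sketch (not the authors' code; `pred` and `truth` are assumed to be non-empty boolean arrays of equal shape):

```python
# Sketch of the evaluation metrics named above, for binary segmentation masks.
import numpy as np
from scipy.spatial import cKDTree

def segmentation_metrics(pred, truth):
    """Return TPR, FPR, F-score, and IoU for boolean masks."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    tpr = tp / (tp + fn)                    # true positive ratio (sensitivity)
    fpr = fp / (fp + tn)                    # false positive ratio
    f_score = 2 * tp / (2 * tp + fp + fn)   # Dice / F-score
    iou = tp / (tp + fp + fn)               # intersection over union
    return tpr, fpr, f_score, iou

def hausdorff95(pred, truth):
    """95th-percentile symmetric Hausdorff distance between mask pixels."""
    a, b = np.argwhere(pred), np.argwhere(truth)
    d_ab, _ = cKDTree(b).query(a)           # predicted pixel -> nearest truth pixel
    d_ba, _ = cKDTree(a).query(b)           # truth pixel -> nearest predicted pixel
    return np.percentile(np.hstack([d_ab, d_ba]), 95)
```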
Affiliation(s)
- Ahmad Qasem
  - University of Kansas Medical Center, The Reliable Intelligence and Medical Innovation Laboratory (RIMI Lab), Department of Biostatistics & Data Science, Kansas City, Kansas, United States
- Genggeng Qin
  - Nanfang Hospital, Southern Medical University, Department of Radiology, Guangzhou, China
- Zhiguo Zhou
  - University of Kansas Medical Center, The Reliable Intelligence and Medical Innovation Laboratory (RIMI Lab), Department of Biostatistics & Data Science, Kansas City, Kansas, United States
  - University of Kansas Cancer Center, Kansas City, Kansas, United States
3
Wang L. Mammography with deep learning for breast cancer detection. Front Oncol 2024; 14:1281922. [PMID: 38410114] [PMCID: PMC10894909] [DOI: 10.3389/fonc.2024.1281922]
Abstract
X-ray mammography is currently considered the gold standard for breast cancer screening; however, it has limitations in terms of sensitivity and specificity. With the rapid advancements in deep learning techniques, it is possible to customize mammography for each patient, providing more accurate information for risk assessment, prognosis, and treatment planning. This paper reviews recent achievements of deep learning-based mammography for breast cancer detection and classification, and highlights the potential of deep learning-assisted X-ray mammography to improve the accuracy of breast cancer screening. While the potential benefits are clear, it is essential to address the challenges associated with implementing this technology in clinical settings. Future research should focus on refining deep learning algorithms, ensuring data privacy, improving model interpretability, and establishing generalizability to successfully integrate deep learning-assisted mammography into routine breast cancer screening programs. It is hoped that these findings will assist investigators, engineers, and clinicians in developing more effective breast imaging tools that offer accurate diagnosis with high sensitivity and specificity for breast cancer.
Affiliation(s)
- Lulu Wang
  - Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen, China
4
Zuhair V, Babar A, Ali R, Oduoye MO, Noor Z, Chris K, Okon II, Rehman LU. Exploring the Impact of Artificial Intelligence on Global Health and Enhancing Healthcare in Developing Nations. J Prim Care Community Health 2024; 15:21501319241245847. [PMID: 38605668] [PMCID: PMC11010755] [DOI: 10.1177/21501319241245847]
Abstract
BACKGROUND Artificial intelligence (AI), which combines computer science with extensive datasets, seeks to mimic human-like intelligence. Subsets of AI are being applied in almost all fields of medicine and surgery. AIM This review focuses on the applications of AI in healthcare settings in developing countries, comprehensively outlining the advancements made thus far, the shortcomings encountered in AI applications, the present status of AI integration, persistent challenges, and innovative strategies to surmount them. METHODOLOGY Articles from PubMed, Google Scholar, and Cochrane were searched from 2000 to 2023 with keywords including AI and healthcare, focusing on multiple medical specialties. RESULTS The increasing role of AI in diagnosis, prognosis prediction, and patient management, as well as hospital management and community healthcare, has made the overall healthcare system more efficient, especially in high-patient-load setups and resource-limited areas of developing countries where patient care is often compromised. However, challenges, including low adoption rates, the absence of standardized guidelines, high installation and maintenance costs of equipment, and poor transportation and connectivity, hinder the full use of AI in healthcare. CONCLUSION Despite these challenges, AI holds a promising future in healthcare. Equipping healthcare professionals in developing nations with adequate knowledge and expertise in AI technology is imperative.
Affiliation(s)
- Varisha Zuhair
  - Jinnah Sindh Medical University, Karachi, Sindh, Pakistan
- Areesha Babar
  - Jinnah Sindh Medical University, Karachi, Sindh, Pakistan
- Rabbiya Ali
  - Jinnah Sindh Medical University, Karachi, Sindh, Pakistan
- Malik Olatunde Oduoye
  - The Medical Research Circle (MedReC), Gisenyi, Goma, Democratic Republic of the Congo
- Zainab Noor
  - Institute of Dentistry, CMH Lahore Medical College, Lahore, Punjab, Pakistan
- Kitumaini Chris
  - The Medical Research Circle (MedReC), Gisenyi, Goma, Democratic Republic of the Congo
  - Université Libre des Pays des Grands-Lacs, Goma, North-Kivu, Democratic Republic of the Congo
- Inibehe Ime Okon
  - The Medical Research Circle (MedReC), Gisenyi, Goma, Democratic Republic of the Congo
  - NiMSA SCOPH, Uyo, Akwa-Ibom State, Nigeria
5
Hussain S, Lafarga-Osuna Y, Ali M, Naseem U, Ahmed M, Tamez-Peña JG. Deep learning, radiomics and radiogenomics applications in the digital breast tomosynthesis: a systematic review. BMC Bioinformatics 2023; 24:401. [PMID: 37884877] [PMCID: PMC10605943] [DOI: 10.1186/s12859-023-05515-6]
Abstract
BACKGROUND Recent advancements in computing power and state-of-the-art algorithms have enabled more accessible and accurate diagnosis of numerous diseases. In addition, the development of de novo areas in imaging science, such as radiomics and radiogenomics, has added to the personalization of healthcare and better stratification of patients. These techniques associate imaging phenotypes with the related disease genes. Various imaging modalities have been used for years to diagnose breast cancer. Nonetheless, digital breast tomosynthesis (DBT), a state-of-the-art technique, has produced comparatively promising results. DBT, a 3D form of mammography, is rapidly replacing conventional 2D mammography. This technological advancement is key to AI algorithms for accurately interpreting medical images. OBJECTIVE AND METHODS This paper presents a comprehensive review of deep learning (DL), radiomics, and radiogenomics in breast image analysis. The review focuses on DBT, its extracted synthetic mammography (SM), and full-field digital mammography (FFDM). Furthermore, it provides systematic knowledge about DL, radiomics, and radiogenomics for beginners and advanced-level researchers. RESULTS A total of 500 articles were identified, of which 30 studies met the inclusion criteria. Parallel benchmarking of radiomics, radiogenomics, and DL models applied to DBT images could allow clinicians and researchers alike to have greater awareness as they consider clinical deployment or development of new models. This review provides a comprehensive guide to understanding the current state of early breast cancer detection using DBT images. CONCLUSION Using this survey, investigators with various backgrounds can easily seek interdisciplinary science and new DL, radiomics, and radiogenomics directions towards DBT.
Affiliation(s)
- Sadam Hussain
  - School of Engineering and Sciences, Tecnológico de Monterrey, Ave. Eugenio Garza Sada 2501, 64849, Monterrey, Mexico
- Yareth Lafarga-Osuna
  - School of Engineering and Sciences, Tecnológico de Monterrey, Ave. Eugenio Garza Sada 2501, 64849, Monterrey, Mexico
- Mansoor Ali
  - School of Engineering and Sciences, Tecnológico de Monterrey, Ave. Eugenio Garza Sada 2501, 64849, Monterrey, Mexico
- Usman Naseem
  - College of Science and Engineering, James Cook University, Cairns, Australia
- Masroor Ahmed
  - School of Engineering and Sciences, Tecnológico de Monterrey, Ave. Eugenio Garza Sada 2501, 64849, Monterrey, Mexico
- Jose Gerardo Tamez-Peña
  - School of Medicine and Health Sciences, Tecnológico de Monterrey, Ave. Eugenio Garza Sada 2501, 64849, Monterrey, Mexico
6
Wang SH, Chen G, Zhong X, Lin T, Shen Y, Fan X, Cao L. Global development of artificial intelligence in cancer field: a bibliometric analysis range from 1983 to 2022. Front Oncol 2023; 13:1215729. [PMID: 37519796] [PMCID: PMC10382324] [DOI: 10.3389/fonc.2023.1215729]
Abstract
Background Artificial intelligence (AI) is widely applied in the cancer field nowadays. The aim of this study is to explore the hotspots and trends of AI in cancer research. Methods The search query included four topic words ("tumor," "cancer," "carcinoma," and "artificial intelligence"), which were searched in the Web of Science database from January 1983 to December 2022. Then, all data, including country, continent, and Journal Impact Factor, were documented and processed using bibliometric software. Results A total of 6,920 papers were collected and analyzed. We present the annual publications and citations, most productive countries/regions, most influential scholars, the collaborations of journals and institutions, and the research focus and hotspots of AI-based cancer research. Conclusion This study systematically summarizes the current state of AI in cancer research so as to lay a foundation for future work.
Affiliation(s)
- Sui-Han Wang
  - Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Guoqiao Chen
  - Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Xin Zhong
  - Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Tianyu Lin
  - Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Yan Shen
  - Department of General Surgery, The First People's Hospital of Yu Hang District, Hangzhou, China
- Xiaoxiao Fan
  - Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Liping Cao
  - Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
7
Hanis TM, Ruhaiyem NIR, Arifin WN, Haron J, Wan Abdul Rahman WF, Abdullah R, Musa KI. Developing a Supplementary Diagnostic Tool for Breast Cancer Risk Estimation Using Ensemble Transfer Learning. Diagnostics (Basel) 2023; 13:1780. [PMID: 37238264] [DOI: 10.3390/diagnostics13101780]
Abstract
Breast cancer is the most prevalent cancer worldwide, so improving the efficiency of the medical workflow for the disease is necessary. This study therefore aims to develop a supplementary diagnostic tool for radiologists using ensemble transfer learning and digital mammograms. The digital mammograms and their associated information were collected from the department of radiology and pathology at Hospital Universiti Sains Malaysia. Thirteen pre-trained networks were selected and tested in this study. ResNet101V2 and ResNet152 had the highest mean PR-AUC, MobileNetV3Small and ResNet152 had the highest mean precision, ResNet101 had the highest mean F1 score, and ResNet152 and ResNet152V2 had the highest mean Youden J index. Subsequently, three ensemble models were developed using the top three pre-trained networks ranked by PR-AUC, precision, and F1 score. The final ensemble model, which consisted of ResNet101, ResNet152, and ResNet50V2, had a mean precision, F1 score, and Youden J index of 0.82, 0.68, and 0.12, respectively. Additionally, the final model demonstrated balanced performance across mammographic densities. In conclusion, this study demonstrates the good performance of ensemble transfer learning and digital mammograms in breast cancer risk estimation. This model can be utilised as a supplementary diagnostic tool for radiologists, thus reducing their workloads and further improving the medical workflow in the screening and diagnosis of breast cancer.
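As a hedged sketch of the final ensemble (soft voting over the three named backbones is an assumption; the study's exact fusion rule and training details are not given here), the construction might look like:

```python
# Illustrative soft-voting ensemble of ImageNet-pretrained backbones.
import numpy as np
import tensorflow as tf

def build_member(base_cls, input_shape=(224, 224, 3)):
    """Wrap a pretrained backbone with a binary sigmoid head (assumed setup)."""
    base = base_cls(include_top=False, weights="imagenet",
                    input_shape=input_shape, pooling="avg")
    out = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
    return tf.keras.Model(base.input, out)

members = [
    build_member(tf.keras.applications.ResNet101),
    build_member(tf.keras.applications.ResNet152),
    build_member(tf.keras.applications.ResNet50V2),
]

def ensemble_predict(x):
    """Unweighted average of the members' predicted probabilities."""
    probs = np.stack([m.predict(x, verbose=0) for m in members])
    return probs.mean(axis=0)
```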
Affiliation(s)
- Tengku Muhammad Hanis
  - Department of Community Medicine, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia
- Wan Nor Arifin
  - Biostatistics and Research Methodology Unit, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia
- Juhara Haron
  - Department of Radiology, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia
  - Breast Cancer Awareness and Research Unit, Hospital Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia
- Wan Faiziah Wan Abdul Rahman
  - Breast Cancer Awareness and Research Unit, Hospital Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia
  - Department of Pathology, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia
- Rosni Abdullah
  - School of Computer Sciences, Universiti Sains Malaysia, Gelugor 11800, Penang, Malaysia
- Kamarul Imran Musa
  - Department of Community Medicine, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia
8
Li H, Robinson K, Lan L, Baughan N, Chan CW, Embury M, Whitman GJ, El-Zein R, Bedrosian I, Giger ML. Temporal Machine Learning Analysis of Prior Mammograms for Breast Cancer Risk Prediction. Cancers (Basel) 2023; 15:2141. [PMID: 37046802] [PMCID: PMC10093086] [DOI: 10.3390/cancers15072141]
Abstract
The identification of women at risk for sporadic breast cancer remains a clinical challenge. We hypothesize that the temporal analysis of annual screening mammograms, using a long short-term memory (LSTM) network, could accurately identify women at risk of future breast cancer. Women with an imaging abnormality, which had been biopsy-confirmed to be cancer or benign, who also had antecedent imaging available were included in this case-control study. Sequences of antecedent mammograms were retrospectively collected under HIPAA-approved guidelines. Radiomic and deep-learning-based features were extracted on regions of interest placed posterior to the nipple in antecedent images. These features were input to LSTM recurrent networks to classify whether the future lesion would be malignant or benign. Classification performance was assessed using all available antecedent time-points and using a single antecedent time-point in the task of lesion classification. Classifiers incorporating multiple time-points with LSTM, based either on deep-learning-extracted features or on radiomic features, tended to perform statistically better than chance, whereas those using only a single time-point failed to show improved performance compared to chance, as judged by area under the receiver operating characteristic curves (AUC: 0.63 ± 0.05, 0.65 ± 0.05, 0.52 ± 0.06 and 0.54 ± 0.06, respectively). Lastly, similar classification performance was observed when using features extracted from the affected versus the contralateral breast in predicting future unilateral malignancy (AUC: 0.63 ± 0.05 vs. 0.59 ± 0.06 for deep-learning-extracted features; 0.65 ± 0.05 vs. 0.62 ± 0.06 for radiomic features). The results of this study suggest that the incorporation of temporal information into radiomic analyses may improve the overall classification performance through LSTM, as demonstrated by the improved discrimination of future lesions as malignant or benign. Further, our data suggest that a potential field effect, changes in the breast extending beyond the lesion itself, is present in both the affected and contralateral breasts in antecedent imaging, and, thus, the evaluation of either breast might inform on the future risk of breast cancer.
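A minimal sketch of the temporal idea (feature dimensions, layer sizes, and the single-layer LSTM are assumptions, not the study's configuration): each antecedent exam contributes one feature vector, and the LSTM's final hidden state drives the malignant-versus-benign prediction.

```python
# Sketch of an LSTM classifier over per-time-point mammographic features.
import torch
import torch.nn as nn

class TemporalRiskClassifier(nn.Module):
    def __init__(self, n_features=64, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (batch, n_timepoints, n_features), one row per antecedent exam
        _, (h_n, _) = self.lstm(x)
        return torch.sigmoid(self.head(h_n[-1]))   # P(future lesion is malignant)

model = TemporalRiskClassifier()
sequences = torch.randn(8, 4, 64)   # 8 cases, 4 prior screens, 64 features each
risk = model(sequences)             # (8, 1) probabilities
```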
Affiliation(s)
- Hui Li
  - Department of Radiology, The University of Chicago, Chicago, IL 60637, USA
- Kayla Robinson
  - Department of Radiology, The University of Chicago, Chicago, IL 60637, USA
- Li Lan
  - Department of Radiology, The University of Chicago, Chicago, IL 60637, USA
- Natalie Baughan
  - Department of Radiology, The University of Chicago, Chicago, IL 60637, USA
- Chun-Wai Chan
  - Department of Radiology, The University of Chicago, Chicago, IL 60637, USA
- Matthew Embury
  - Department of Breast Surgical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Gary J. Whitman
  - Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Randa El-Zein
  - Department of Radiology, Houston Methodist Research Institute, Houston, TX 77030, USA
- Isabelle Bedrosian
  - Department of Breast Surgical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Maryellen L. Giger
  - Department of Radiology, The University of Chicago, Chicago, IL 60637, USA
9
Konz N, Buda M, Gu H, Saha A, Yang J, Chłędowski J, Park J, Witowski J, Geras KJ, Shoshan Y, Gilboa-Solomon F, Khapun D, Ratner V, Barkan E, Ozery-Flato M, Martí R, Omigbodun A, Marasinou C, Nakhaei N, Hsu W, Sahu P, Hossain MB, Lee J, Santos C, Przelaskowski A, Kalpathy-Cramer J, Bearce B, Cha K, Farahani K, Petrick N, Hadjiiski L, Drukker K, Armato SG, Mazurowski MA. A Competition, Benchmark, Code, and Data for Using Artificial Intelligence to Detect Lesions in Digital Breast Tomosynthesis. JAMA Netw Open 2023; 6:e230524. [PMID: 36821110] [PMCID: PMC9951043] [DOI: 10.1001/jamanetworkopen.2023.0524]
Abstract
IMPORTANCE An accurate and robust artificial intelligence (AI) algorithm for detecting cancer in digital breast tomosynthesis (DBT) could significantly improve detection accuracy and reduce health care costs worldwide. OBJECTIVES To make training and evaluation data for the development of AI algorithms for DBT analysis available, to develop well-defined benchmarks, and to create publicly available code for existing methods. DESIGN, SETTING, AND PARTICIPANTS This diagnostic study is based on a multi-institutional international grand challenge in which research teams developed algorithms to detect lesions in DBT. A data set of 22 032 reconstructed DBT volumes was made available to research teams. Phase 1, in which teams were provided 700 scans from the training set, 120 from the validation set, and 180 from the test set, took place from December 2020 to January 2021, and phase 2, in which teams were given the full data set, took place from May to July 2021. MAIN OUTCOMES AND MEASURES The overall performance was evaluated by mean sensitivity for biopsied lesions using only DBT volumes with biopsied lesions; ties were broken by including all DBT volumes. RESULTS A total of 8 teams participated in the challenge. The team with the highest mean sensitivity for biopsied lesions was the NYU B-Team, with 0.957 (95% CI, 0.924-0.984), and the second-place team, ZeDuS, had a mean sensitivity of 0.926 (95% CI, 0.881-0.964). When the results were aggregated, the mean sensitivity for all submitted algorithms was 0.879; for only those who participated in phase 2, it was 0.926. CONCLUSIONS AND RELEVANCE In this diagnostic study, an international competition produced algorithms with high sensitivity for using AI to detect lesions on DBT images. A standardized performance benchmark for the detection task using publicly available clinical imaging data was released, with detailed descriptions and analyses of submitted algorithms accompanied by a public release of their predictions and code for selected methods. These resources will serve as a foundation for future research on computer-assisted diagnosis methods for DBT, significantly lowering the barrier of entry for new researchers.
Affiliation(s)
- Nicholas Konz
  - Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina
- Mateusz Buda
  - Department of Radiology, Duke University Medical Center, Durham, North Carolina
  - Faculty of Mathematics and Information Science, Warsaw University of Technology, Warsaw, Poland
- Hanxue Gu
  - Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina
- Ashirbani Saha
  - Department of Radiology, Duke University Medical Center, Durham, North Carolina
  - Department of Oncology, McMaster University, Hamilton, Ontario, Canada
- Jakub Chłędowski
  - Jagiellonian University, Kraków, Poland
  - Department of Radiology, NYU Grossman School of Medicine, New York, New York
- Jungkyu Park
  - Department of Radiology, NYU Grossman School of Medicine, New York, New York
- Jan Witowski
  - Department of Radiology, NYU Grossman School of Medicine, New York, New York
- Krzysztof J. Geras
  - Department of Radiology, NYU Grossman School of Medicine, New York, New York
- Yoel Shoshan
  - Medical Image Analytics, IBM Research, Haifa, Israel
- Daniel Khapun
  - Medical Image Analytics, IBM Research, Haifa, Israel
- Vadim Ratner
  - Medical Image Analytics, IBM Research, Haifa, Israel
- Ella Barkan
  - Medical Image Analytics, IBM Research, Haifa, Israel
- Robert Martí
  - Institute of Computer Vision and Robotics, University of Girona, Girona, Spain
- Akinyinka Omigbodun
  - Medical and Imaging Informatics Group, Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles
- Chrysostomos Marasinou
  - Medical and Imaging Informatics Group, Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles
- Noor Nakhaei
  - Medical and Imaging Informatics Group, Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles
- William Hsu
  - Medical and Imaging Informatics Group, Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles
  - Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles
  - Department of Bioengineering, University of California Los Angeles Samueli School of Engineering
- Pranjal Sahu
  - Department of Computer Science, Stony Brook University, Stony Brook, New York
- Md Belayat Hossain
  - Department of Radiology, University of Pittsburgh, Pittsburgh, Pennsylvania
- Juhun Lee
  - Department of Radiology, University of Pittsburgh, Pittsburgh, Pennsylvania
- Carlos Santos
  - Department of Radiology, Duke University Medical Center, Durham, North Carolina
- Artur Przelaskowski
  - Faculty of Mathematics and Information Science, Warsaw University of Technology, Warsaw, Poland
- Jayashree Kalpathy-Cramer
  - Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown
- Benjamin Bearce
  - Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown
- Kenny Cha
  - US Food and Drug Administration, Silver Spring, Maryland
- Keyvan Farahani
  - Center for Biomedical Informatics and Information Technology, National Cancer Institute, Bethesda, Maryland
- Karen Drukker
  - Department of Radiology, University of Chicago, Chicago, Illinois
- Samuel G. Armato
  - Department of Radiology, University of Chicago, Chicago, Illinois
- Maciej A. Mazurowski
  - Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina
  - Department of Radiology, Duke University Medical Center, Durham, North Carolina
  - Department of Computer Science, Duke University, Durham, North Carolina
  - Department of Biostatistics and Bioinformatics, Duke University Medical Center, Durham, North Carolina
10
Goldberg JE, Reig B, Lewin AA, Gao Y, Heacock L, Heller SL, Moy L. New Horizons: Artificial Intelligence for Digital Breast Tomosynthesis. Radiographics 2023; 43:e220060. [DOI: 10.1148/rg.220060]
Affiliation(s)
- Julia E. Goldberg
  - Department of Radiology, NYU Langone Health, 550 1st Ave, New York, NY 10016
- Beatriu Reig
  - Department of Radiology, NYU Langone Health, 550 1st Ave, New York, NY 10016
- Alana A. Lewin
  - Department of Radiology, NYU Langone Health, 550 1st Ave, New York, NY 10016
- Yiming Gao
  - Department of Radiology, NYU Langone Health, 550 1st Ave, New York, NY 10016
- Laura Heacock
  - Department of Radiology, NYU Langone Health, 550 1st Ave, New York, NY 10016
- Samantha L. Heller
  - Department of Radiology, NYU Langone Health, 550 1st Ave, New York, NY 10016
- Linda Moy
  - Department of Radiology, NYU Langone Health, 550 1st Ave, New York, NY 10016
11
Gearhart A, Goto S, Deo RC, Powell AJ. An Automated View Classification Model for Pediatric Echocardiography Using Artificial Intelligence. J Am Soc Echocardiogr 2022; 35:1238-1246. [PMID: 36049595] [PMCID: PMC9990955] [DOI: 10.1016/j.echo.2022.08.009]
Abstract
BACKGROUND View classification is a key step toward building a fully automated system for interpretation of echocardiograms. However, compared with adult echocardiograms, creating a view classification model for pediatric echocardiograms poses additional challenges, such as greater variation in anatomy, structure size, and views. The aim of this study was to develop a computer vision model to autonomously perform view classification on pediatric echocardiographic images. METHODS Using a training set of 12,067 echocardiographic images from patients aged 0 to 19 years, a convolutional neural network model was trained to identify 27 preselected standard pediatric echocardiographic views, which included anatomic sweeps, color Doppler, and Doppler tracings. A validation set of 6,197 images was used for parameter tuning and model selection. A test set of 9,684 images from 100 different patients was then used to evaluate model accuracy. The model was also evaluated on a per-study basis using a second test set consisting of 524 echocardiograms from children with leukemia to identify six preselected views pertinent to cardiac dysfunction surveillance. RESULTS The model identified the 27 preselected views with 90.3% accuracy. Accuracy was similar across age groups (89.3% for 0-4 years, 90.8% for 4-9 years, 90.0% for 9-14 years, and 91.2% for 14-19 years; P = .12). Examining the view subtypes, accuracy was 78.3% for cines of a single location, 90.5% for sweeps with color Doppler, 82.2% for sweeps without color Doppler, and 91.1% for Doppler tracings. Among the leukemia cohort, the model identified the six preselected views on a per-study basis with a positive predictive value of 98.7% to 99.2% and sensitivity of 76.9% to 94.8%. CONCLUSIONS A convolutional neural network model was constructed for view classification of pediatric echocardiograms that was accurate across the spectrum of ages and view types. This work lays the foundation for automated quantitative analysis and diagnostic support to promote efficient, accurate, and scalable analysis of pediatric echocardiograms.
Affiliation(s)
- Addison Gearhart
  - Department of Cardiology, Boston Children's Hospital, and Department of Pediatrics, Harvard Medical School, Boston, Massachusetts
- Shinichi Goto
  - One Brave Idea, Division of Cardiovascular Medicine, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts
- Rahul C Deo
  - One Brave Idea, Division of Cardiovascular Medicine, Brigham and Women's Hospital and Harvard Medical School, Boston, Massachusetts
- Andrew J Powell
  - Department of Cardiology, Boston Children's Hospital, and Department of Pediatrics, Harvard Medical School, Boston, Massachusetts
12
Madani M, Behzadi MM, Nabavi S. The Role of Deep Learning in Advancing Breast Cancer Detection Using Different Imaging Modalities: A Systematic Review. Cancers (Basel) 2022; 14:5334. [PMID: 36358753] [PMCID: PMC9655692] [DOI: 10.3390/cancers14215334]
Abstract
Breast cancer is among the most common and fatal diseases for women, and no permanent treatment has been discovered. Thus, early detection is a crucial step to control and cure breast cancer, and it can save the lives of millions of women. For example, in 2020, more than 65% of breast cancer patients were diagnosed at an early stage of cancer, all of whom survived. Although early detection is the most effective approach for cancer treatment, breast cancer screening conducted by radiologists is very expensive and time-consuming. More importantly, conventional methods of analyzing breast cancer images suffer from high false-detection rates. Different breast cancer imaging modalities are used to extract and analyze the key features affecting the diagnosis and treatment of breast cancer. These imaging modalities can be divided into subgroups such as mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. Radiologists or pathologists analyze images produced by these methods manually, which leads to an increased risk of wrong decisions in cancer detection. Thus, new automatic methods to analyze all kinds of breast screening images and assist radiologists in interpreting them are required. Recently, artificial intelligence (AI) has been widely utilized to automatically improve the early detection and treatment of different types of cancer, specifically breast cancer, thereby enhancing the survival chances of patients. Advances in AI algorithms, such as deep learning, and the availability of datasets obtained from various imaging modalities have opened an opportunity to surpass the limitations of current breast cancer analysis methods. In this article, we first review breast cancer imaging modalities and their strengths and limitations. Then, we explore and summarize the most recent studies that employed AI in breast cancer detection using various breast imaging modalities. In addition, we report available datasets of breast-cancer imaging modalities, which are important in developing AI-based algorithms and training deep learning models. In conclusion, this review paper aims to provide a comprehensive resource for researchers working in breast cancer imaging analysis.
Affiliation(s)
- Mohammad Madani
  - Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
  - Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Mohammad Mahdi Behzadi
  - Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
  - Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Sheida Nabavi
  - Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
13
Jones MA, Islam W, Faiz R, Chen X, Zheng B. Applying artificial intelligence technology to assist with breast cancer diagnosis and prognosis prediction. Front Oncol 2022; 12:980793. [PMID: 36119479] [PMCID: PMC9471147] [DOI: 10.3389/fonc.2022.980793]
Abstract
Breast cancer remains the most frequently diagnosed cancer in women. Advances in medical imaging modalities and technologies have greatly aided the early detection of breast cancer and the decline of patient mortality rates. However, reading and interpreting breast images remains difficult due to the high heterogeneity of breast tumors and fibro-glandular tissue, which results in lower cancer detection sensitivity and specificity and large inter-reader variability. In order to help overcome these clinical challenges, researchers have made great efforts to develop computer-aided detection and/or diagnosis (CAD) schemes of breast images to provide radiologists with decision-making support tools. Recent rapid advances in high-throughput data analysis methods and artificial intelligence (AI) technologies, particularly radiomics and deep learning techniques, have led to an exponential increase in the development of new AI-based models of breast images that cover a broad range of application topics. In this review paper, we focus on recent advances in understanding the association between radiomics features and the tumor microenvironment, and on progress in developing new AI-based quantitative image feature analysis models in three realms of breast cancer: predicting breast cancer risk, the likelihood of tumor malignancy, and tumor response to treatment. The outlook and three major challenges of applying new AI-based models of breast images to clinical practice are also discussed. Through this review we conclude that although the development of new AI-based models of breast images has achieved significant progress and promising results, several obstacles to applying these new AI-based models to clinical practice remain, and more research effort is needed in future studies.
Affiliation(s)
- Meredith A. Jones
  - School of Biomedical Engineering, University of Oklahoma, Norman, OK, United States
- Warid Islam
  - School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, United States
- Rozwat Faiz
  - School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, United States
- Xuxin Chen
  - School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, United States
- Bin Zheng
  - School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, United States
14
Automatic Classification of Simulated Breast Tomosynthesis Whole Images for the Presence of Microcalcification Clusters Using Deep CNNs. J Imaging 2022; 8:231. [PMID: 36135397] [PMCID: PMC9503015] [DOI: 10.3390/jimaging8090231]
Abstract
Microcalcification clusters (MCs) are among the most important biomarkers for breast cancer, especially in cases of nonpalpable lesions. The vast majority of deep learning studies on digital breast tomosynthesis (DBT) focus on detecting and classifying lesions, especially soft-tissue lesions, in small, previously selected regions of interest. Only about 25% of the studies are specific to MCs, and all of them are based on the classification of small preselected regions. Classifying the whole image according to the presence or absence of MCs is a difficult task due to the small size of MCs and the amount of information present in an entire image. A completely automatic and direct classification, which receives the entire image without prior identification of any regions, is crucial for the usefulness of these techniques in a real clinical and screening environment. The main purpose of this work is to implement and evaluate the performance of convolutional neural networks (CNNs) on the automatic classification of complete DBT images for the presence or absence of MCs, without any prior identification of regions. In this work, four popular deep CNNs are trained and compared with a new architecture we propose. The main training task was the classification of DBT cases by the absence or presence of MCs. A public database of realistic simulated data was used, and the whole DBT image was taken as input. DBT data were considered with and without preprocessing, to study the impact of noise reduction and contrast enhancement methods on the evaluation of MCs with CNNs. The area under the receiver operating characteristic curve (AUC) was used to evaluate performance. Very promising results were achieved, with a maximum AUC of 94.19% for GoogLeNet. The second-best AUC, 91.17%, was obtained with a newly implemented network, CNN-a, which was also the fastest, making it a very interesting model for further studies. These outcomes are similar to those reported in other studies for the detection of larger lesions such as masses. Moreover, given the difficulty of visualizing MCs, which are often spread over several slices, this work may have an important impact on the clinical analysis of DBT images.
15
Thomassin-Naggara I, Ceugnart L, Tardivon A, Verzaux L, Balleyguier C, Taourel P, Seradour B. Artificial intelligence: its role in breast cancer screening in France [Intelligence artificielle : place dans le dépistage du cancer du sein en France]. Bull Cancer 2022; 109:780-785. [DOI: 10.1016/j.bulcan.2022.04.008]
16
Cruz-Bastida JP, Pearson E, Al-Hallaq H. Toward understanding deep learning classification of anatomic sites: lessons from the development of a CBCT projection classifier. J Med Imaging (Bellingham) 2022; 9:045002. [PMID: 35903414] [PMCID: PMC9311487] [DOI: 10.1117/1.jmi.9.4.045002]
Abstract
Purpose: Deep learning (DL) applications depend strongly on the training dataset and convolutional neural network (CNN) architecture; however, it is unclear how to select such parameters objectively. We investigate the classification performance of different DL models and training schemes for the anatomic classification of cone-beam computed tomography (CBCT) projections. Approach: CBCT scans from 1055 patients were collected, manually classified into five anatomic classes, and used to develop DL models that predict the anatomic class from single x-ray projections. VGG-16, Xception, and Inception v3 architectures were trained with 75% of the data, and the remaining 25% was used for testing and evaluation. To study the dependence of classification performance on dataset size, the training data was downsampled to various dataset sizes. Gradient-weighted class activation maps (Grad-CAMs) were generated using the model with the highest classification performance to identify regions with strong influence on CNN decisions. Results: The highest precision and recall values were achieved with VGG-16. One of the best performing combinations was VGG-16 trained with 90 deg projections (mean class precision = 0.87). The training dataset could be reduced to ~50% of its initial size without compromising classification performance. For correctly classified cases, Grad-CAMs were more heavily weighted toward anatomically relevant regions. Conclusions: It was possible to identify the dependencies with the greatest influence on the classification performance of DL models for the studied task. Grad-CAMs enabled the identification of possible sources of class confusion.
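For reference, a minimal Grad-CAM sketch (PyTorch is an assumed framework; the study's implementation is not shown here). The last convolutional feature maps are weighted by their spatially pooled gradients to highlight the regions that drive a class score:

```python
# Illustrative Grad-CAM for a VGG-16 classifier.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1").eval()
store = {}
layer = model.features[28]   # last conv layer of VGG-16
layer.register_forward_hook(lambda m, i, o: store.update(act=o))
layer.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))

def grad_cam(x, class_idx):
    """x: (1, 3, H, W) input; returns an (H, W) relevance map in [0, 1]."""
    model.zero_grad()
    model(x)[0, class_idx].backward()                        # gradients of class score
    weights = store["grad"].mean(dim=(2, 3), keepdim=True)   # pooled gradients
    cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-9)).squeeze()
```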
Affiliation(s)
- Juan P Cruz-Bastida
  - University of Chicago, Department of Radiology, Chicago, Illinois, United States
  - University of Chicago, Department of Radiation and Cellular Oncology, Chicago, Illinois, United States
- Erik Pearson
  - University of Chicago, Department of Radiation and Cellular Oncology, Chicago, Illinois, United States
- Hania Al-Hallaq
  - University of Chicago, Department of Radiation and Cellular Oncology, Chicago, Illinois, United States
17
Sreenivasu SVN, Gomathi S, Kumar MJ, Prathap L, Madduri A, Almutairi KMA, Alonazi WB, Kali D, Jayadhas SA. Dense Convolutional Neural Network for Detection of Cancer from CT Images. Biomed Res Int 2022; 2022:1293548. [PMCID: PMC9236787] [DOI: 10.1155/2022/1293548]
Abstract
In this paper, we develop a detection module with rigorous training and testing to build a dense convolutional neural network model. The model is designed so that it is trained with the features necessary for optimal modelling of cancer detection. The method involves preprocessing of computerized tomography (CT) images for optimal classification at the testing stage. A 10-fold cross-validation is conducted to test the reliability of the model for cancer detection, and the experimental validation is conducted in Python. The results show that the model offers robust detection of cancer instances on large image datasets, with 94% accuracy, outperforming other methods. It also reduces detection errors when classifying cancer instances compared with several existing methods.
Affiliation(s)
- S. V. N. Sreenivasu
  - Department of Computer Science and Engineering, Narasaraopeta Engineering College, Narasaraopeta, Andhra Pradesh 522601, India
- S. Gomathi
  - Department of Information Technology, Sri Sairam Engineering College, Chennai, Tamil Nadu 602109, India
- M. Jogendra Kumar
  - Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh 522502, India
- Lavanya Prathap
  - Department of Anatomy, Saveetha Dental College and Hospital, Saveetha Institute of Medical and Technical Sciences, Chennai, Tamil Nadu 600077, India
- Abhishek Madduri
  - Department of Engineering Management, Duke University, North Carolina 27708, USA
- Khalid M. A. Almutairi
  - Department of Community Health Sciences, College of Applied Medical Sciences, King Saud University, P. O. Box 10219, Riyadh 11433, Saudi Arabia
- Wadi B. Alonazi
  - Health Administration Department, College of Business Administration, King Saud University, P.O. Box 71115, Riyadh 11587, Saudi Arabia
- D. Kali
  - Department of Mechanical Engineering, Ryerson University, Canada
18
Shape-Based Breast Lesion Classification Using Digital Tomosynthesis Images: The Role of Explainable Artificial Intelligence. Appl Sci (Basel) 2022; 12:6230. [DOI: 10.3390/app12126230]
Abstract
Computer-aided diagnosis (CAD) systems can help radiologists in numerous medical tasks, including classification and staging of various diseases. The 3D tomosynthesis imaging technique adds value to CAD systems for the diagnosis and classification of breast lesions. Several convolutional neural network (CNN) architectures have been proposed to classify lesion shapes into their respective classes using similar imaging. However, not only is the black-box nature of these CNN models questionable in the healthcare domain, but morphology-based cancer classification also concerns clinicians. As a result, this study proposes a mathematically and visually explainable deep-learning-driven multiclass shape-based classification framework for tomosynthesis breast lesion images. The authors exploit eight pretrained CNN architectures for the classification task on previously extracted regions of interest containing the lesions. Additionally, the study opens up the black-box nature of the deep learning models using two well-known perceptive explainable artificial intelligence (XAI) algorithms, Grad-CAM and LIME. Moreover, two mathematical-structure-based interpretability techniques, t-SNE and UMAP, are employed to investigate the pretrained models' behavior towards multiclass feature clustering. The experimental results of the classification task validate the applicability of the proposed framework, yielding a mean area under the curve of 98.2%. The explainability study validates the applicability of all employed methods, mainly emphasizing the pros and cons of both Grad-CAM and LIME, which can provide useful insights towards explainable CAD systems.
19
20
Liang G, Greenwell C, Zhang Y, Xing X, Wang X, Kavuluru R, Jacobs N. Contrastive Cross-Modal Pre-Training: A General Strategy for Small Sample Medical Imaging. IEEE J Biomed Health Inform 2022; 26:1640-1649. [PMID: 34495856] [PMCID: PMC9242687] [DOI: 10.1109/jbhi.2021.3110805]
Abstract
A key challenge in training neural networks for a given medical imaging task is the difficulty of obtaining a sufficient number of manually labeled examples. In contrast, textual imaging reports are often readily available in medical records and contain rich but unstructured interpretations written by experts as part of standard clinical practice. We propose using these textual reports as a form of weak supervision to improve the image interpretation performance of a neural network without requiring additional manually labeled examples. We use an image-text matching task to train a feature extractor and then fine-tune it in a transfer learning setting for a supervised task using a small labeled dataset. The end result is a neural network that automatically interprets imagery without requiring textual reports during inference. We evaluate our method on three classification tasks and find consistent performance improvements, reducing the need for labeled data by 67%-98%.
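A simplified sketch of the image-text matching objective (a symmetric InfoNCE-style contrastive loss is one standard choice; the encoders, temperature, and exact loss used by the authors are assumptions here):

```python
# Sketch of a symmetric contrastive loss over paired image/report embeddings.
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """img_emb, txt_emb: (batch, dim); row i of each comes from the same study."""
    img = F.normalize(img_emb, dim=1)
    txt = F.normalize(txt_emb, dim=1)
    logits = img @ txt.t() / temperature        # pairwise cosine similarities
    targets = torch.arange(len(img))            # matching pairs lie on the diagonal
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```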
21
Li Y, He Z, Ma X, Zeng W, Liu J, Xu W, Xu Z, Wang S, Wen C, Zeng H, Wu J, Chen W, Lu Y. Architectural distortion detection based on superior-inferior directional context and anatomic prior knowledge in digital breast tomosynthesis. Med Phys 2022; 49:3749-3768. [DOI: 10.1002/mp.15631]
Abstract
BACKGROUND In 2020, breast cancer became the most commonly diagnosed cancer worldwide, and the burden of its prevention and treatment is increasing. Accurately detecting breast lesions in screening images is important for early detection of cancer. Architectural distortion (AD) is one of the breast lesions that needs to be detected. PURPOSE To develop a deep-learning-based computer-aided detection (CADe) model for AD in digital breast tomosynthesis (DBT). This model uses the superior-inferior directional context of DBT and anatomic prior knowledge to reduce false positives (FPs); it can reject some negative samples that cannot be distinguished by deep learning features alone. METHODS The proposed CADe model consists of three steps. In the first step, a deep learning detection network detects two-dimensional (2D) candidates for AD in DBT slices, with the inputs preprocessed by Gabor filters and a convergence measure. In the second step, three-dimensional (3D) candidates are obtained by stacking 2D candidates along the superior-inferior direction. In the last step, FP reduction for the 3D candidates is implemented based on the superior-inferior directional context and anatomic prior knowledge of the breast. DBT data from 99 cases with AD were used as the training set to train the CADe model, and data from 208 cases were used as an independent test set (108 cases with AD and 100 cases without AD as the control group). The free-response receiver operating characteristic curve and the mean true positive fraction (MTPF) in the range of 0.05-2.0 FPs per volume were used to evaluate the model. RESULTS Compared with a baseline model based on the convergence measure, the proposed method demonstrates significant improvement (MTPF: 0.2826 ± 0.0321 vs. 0.6640 ± 0.0399). Results of an ablation study show that the proposed context-based and anatomy-based FP reduction methods improve detection performance: the number of FPs per DBT volume fell from 2.47 to 1.66 at 80% sensitivity after employing these two schemes. CONCLUSIONS The deep learning model demonstrates practical value for AD detection. The results indicate that introducing superior-inferior directional context and anatomic prior knowledge into the model can indeed reduce FPs and improve the performance of the CADe model.
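As a hedged sketch of the second step (grouping 2D detections from adjacent slices into 3D candidates along the superior-inferior axis; the overlap rule and thresholds below are illustrative assumptions, not the paper's parameters):

```python
# Illustrative stacking of per-slice 2D detections into 3D candidates.
def stack_candidates(dets, min_slices=3, iou_thresh=0.3):
    """dets: list of (slice_idx, box), box = (x1, y1, x2, y2)."""
    def iou(a, b):
        ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    groups = []
    for s, box in sorted(dets):                  # sweep slices inferior -> superior
        for g in groups:
            last_s, last_box = g[-1]
            if s - last_s == 1 and iou(box, last_box) >= iou_thresh:
                g.append((s, box))               # extend an existing 3D candidate
                break
        else:
            groups.append([(s, box)])            # start a new candidate
    return [g for g in groups if len(g) >= min_slices]
```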
Affiliation(s)
- Yue Li
  - School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, 510006, China
  - Guangdong Province Key Laboratory of Computational Science, Sun Yat-sen University, Guangzhou, 510006, China
- Zilong He
  - Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Xiangyuan Ma
  - Department of Biomedical Engineering, College of Engineering, Shantou University, Shantou, 515063, China
  - Guangdong Province Key Laboratory of Computational Science, Sun Yat-sen University, Guangzhou, 510006, China
- Weixiong Zeng
  - Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Jialing Liu
  - Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Weimin Xu
  - Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Zeyuan Xu
  - Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Sina Wang
  - Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Chanjuan Wen
  - Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Hui Zeng
  - Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Jiefang Wu
  - Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Weiguo Chen
  - Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Yao Lu
  - School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, 510006, China
  - Guangdong Province Key Laboratory of Computational Science, Sun Yat-sen University, Guangzhou, 510006, China
|
22
|
Liang G, Ganesh H, Steffe D, Liu L, Jacobs N, Zhang J. Development of CNN models for the enteral feeding tube positioning assessment on a small scale data set. BMC Med Imaging 2022; 22:52. [PMID: 35317725 PMCID: PMC8939093 DOI: 10.1186/s12880-022-00766-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Accepted: 02/17/2022] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Enteral nutrition through feeding tubes serves as the primary method of nutritional supplementation for patients unable to feed themselves. Plain radiographs are routinely used to confirm the position of nasoenteric feeding tubes following insertion and before the commencement of tube feeds. Convolutional neural networks (CNNs) have shown encouraging results in assisting tube positioning assessment. However, robust CNNs are often trained using large amounts of manually annotated data, which makes applying CNNs to enteral feeding tube positioning assessment challenging. METHOD We build a CNN model for feeding tube positioning assessment by pre-training the model in a weakly supervised fashion on large quantities of radiographs. Since most of the model is pre-trained, only a small amount of labeled data is needed when fine-tuning the model for tube positioning assessment. We demonstrate the proposed method using a small dataset of 175 radiographs. RESULT The experimental results show that the proposed model improves the area under the receiver operating characteristic curve (AUC) by up to 35.71%, from 0.56 to 0.76, and accuracy by 14.49%, from 0.69 to 0.79, compared with the same model trained without pre-training. The proposed method also has up to 40% less error when estimating its prediction confidence. CONCLUSION Our evaluation results show that the proposed model has high prediction accuracy and a more accurately estimated prediction confidence compared to the model without pre-training and other baseline models. The proposed method can potentially be used for assessing enteral tube positioning. It also provides a strong baseline for future studies.
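The fine-tuning stage described here, where a mostly pre-trained network needs only a small labeled set, can be sketched as freezing the pre-trained body and re-training a small head. The DenseNet-121 backbone, binary head, and hyperparameters below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=None)        # stand-in backbone
# ... load the weakly supervised pre-trained weights here ...
for p in model.parameters():
    p.requires_grad = False                     # freeze the pre-trained body
model.classifier = nn.Linear(model.classifier.in_features, 2)  # new head, e.g. correct vs. misplaced tube

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# Then a standard supervised loop over the small labeled set:
# for x, y in loader:
#     loss = criterion(model(x), y); loss.backward(); optimizer.step(); optimizer.zero_grad()
```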
Affiliation(s)
| | | | | | | | | | - Jie Zhang
- University of Kentucky, Lexington, KY, USA.
| |
|
23
|
Jones MA, Faiz R, Qiu Y, Zheng B. Improving mammography lesion classification by optimal fusion of handcrafted and deep transfer learning features. Phys Med Biol 2022; 67:10.1088/1361-6560/ac5297. [PMID: 35130517 PMCID: PMC8935657 DOI: 10.1088/1361-6560/ac5297] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Accepted: 02/07/2022] [Indexed: 12/20/2022]
Abstract
Objective. Handcrafted radiomics features or deep learning model-generated automated features are commonly used to develop computer-aided diagnosis schemes for medical images. The objective of this study is to test the hypothesis that handcrafted and automated features contain complementary classification information and that fusion of these two types of features can improve CAD performance. Approach. We retrospectively assembled a dataset involving 1535 lesions (740 malignant and 795 benign). Regions of interest (ROIs) surrounding suspicious lesions were extracted and two types of features were computed from each ROI. The first includes 40 radiomic features and the second includes automated features computed from a VGG16 network using a transfer learning method. A single-channel ROI image is converted to a three-channel pseudo-ROI image by stacking the original image, a bilateral-filtered image, and a histogram-equalized image. Two VGG16 models, one using pseudo-ROIs and one using three stacked original ROIs without pre-processing, are used to extract automated features. Five linear support vector machines (SVMs) are built using the optimally selected feature vectors from the handcrafted features, the two sets of VGG16 model-generated automated features, and the fusion of handcrafted features with each set of automated features, respectively. Main Results. Using 10-fold cross-validation, the fusion SVM using pseudo-ROIs yields the highest lesion classification performance, with an area under the ROC curve (AUC = 0.756 ± 0.042) significantly higher than those yielded by the other SVMs trained using handcrafted or automated features only (p < 0.05). Significance. This study demonstrates that both handcrafted and automated features contain useful information for classifying breast lesions, and that fusion of these two types of features can further increase CAD performance.
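The pseudo-ROI construction described in the Approach is straightforward to sketch: a single-channel ROI is turned into a three-channel image by stacking the original, a bilateral-filtered version, and a histogram-equalized version. The filter parameters below are illustrative assumptions.

```python
import numpy as np
from skimage import exposure
from skimage.restoration import denoise_bilateral

def make_pseudo_roi(roi: np.ndarray) -> np.ndarray:
    """roi: 2D float array scaled to [0, 1]; returns an (H, W, 3) pseudo-RGB image."""
    bilateral = denoise_bilateral(roi, sigma_color=0.05, sigma_spatial=3)  # edge-preserving smoothing
    equalized = exposure.equalize_hist(roi)                                # contrast enhancement
    return np.stack([roi, bilateral, equalized], axis=-1)
```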
Affiliation(s)
- Meredith A. Jones
- School of Biomedical Engineering, University of Oklahoma, Norman, OK 73019, USA
| | - Rowzat Faiz
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
| | - Yuchen Qiu
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
| | - Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
| |
|
24
|
Automated Segmentation of Mass Regions in DBT Images Using a Dilated DCNN Approach. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:9082694. [PMID: 35154309 PMCID: PMC8828338 DOI: 10.1155/2022/9082694] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/06/2021] [Revised: 01/02/2022] [Accepted: 01/15/2022] [Indexed: 11/25/2022]
Abstract
To overcome the limitations of conventional breast screening methods based on digital mammography, a quasi-3D imaging technique, digital breast tomosynthesis (DBT), has been developed in the field of breast cancer screening in recent years. In this work, a computer-aided architecture for mass region segmentation in DBT images using a dilated deep convolutional neural network (DCNN) is developed. First, to enhance the low contrast of breast tumour candidate regions and suppress background tissue noise in the DBT image, a constraint matrix is established after top-hat transformation and multiplied with the DBT image. Second, input image patches are generated, and a data augmentation technique is applied to create the training set for a dilated DCNN architecture. The mass regions in DBT images are then preliminarily segmented, with each pixel assigned one of two labels. Finally, a post-processing procedure removes all false-positive regions with fewer than 50 voxels, and the final segmentation results are obtained by smoothing the boundaries of the mass regions with a median filter. The testing accuracy (ACC), sensitivity (SEN), and area under the receiver operating characteristic curve (AUC) are adopted as evaluation metrics; on the entire dataset, the ACC, SEN, and AUC for segmenting the mass regions in DBT images are 86.3%, 85.6%, and 0.852, respectively. The experimental results indicate that the proposed approach achieves promising results compared with other classical CAD-based frameworks.
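The enhancement step described here can be sketched with a white top-hat transform used to build a multiplicative constraint on the original slice. The structuring-element size and the exact form of the constraint matrix are assumptions, since the abstract does not fully specify them.

```python
import numpy as np
from skimage.morphology import white_tophat, disk

def enhance_slice(slice_img: np.ndarray, selem_radius: int = 15) -> np.ndarray:
    """slice_img: 2D array in [0, 1]. Emphasizes bright mass-like structures."""
    tophat = white_tophat(slice_img, footprint=disk(selem_radius))  # bright details smaller than the disk
    constraint = tophat / (tophat.max() + 1e-8)                     # normalized emphasis map
    return slice_img * constraint                                   # suppress background tissue
```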
|
25
|
Mahmood T, Li J, Pei Y, Akhtar F, Rehman MU, Wasti SH. Breast lesions classifications of mammographic images using a deep convolutional neural network-based approach. PLoS One 2022; 17:e0263126. [PMID: 35085352 PMCID: PMC8794221 DOI: 10.1371/journal.pone.0263126] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2021] [Accepted: 01/12/2022] [Indexed: 11/18/2022] Open
Abstract
Breast cancer is one of the deadliest illnesses among women globally. Breast cancer detection requires accurate mammography interpretation and analysis, which is challenging for radiologists owing to the intricate anatomy of the breast and low image quality. Advances in deep learning-based models have significantly improved the detection, localization, risk assessment, and categorization of breast lesions. This study proposes a novel deep learning-based convolutional neural network (ConvNet) that significantly reduces human error in diagnosing breast malignancy tissues. Our methodology is effective in eliciting task-specific features, as feature learning is coupled with the classification task to achieve higher performance in automatically classifying suspicious regions in mammograms as benign or malignant. To evaluate the model's validity, 322 raw mammogram images from the Mammographic Image Analysis Society (MIAS) dataset and 580 from a private dataset were obtained to extract in-depth features, the intensity of information, and the high likelihood of malignancy. Both datasets were substantially enhanced through preprocessing, synthetic data augmentation, and transfer learning techniques to capture the distinctive characteristics of breast tumors. The experimental findings indicate that the proposed approach achieved a remarkable training accuracy of 0.98, test accuracy of 0.97, high sensitivity of 0.99, and an AUC of 0.99 in classifying breast masses on mammograms. The developed model achieved promising performance that can help clinicians in the rapid assessment of mammograms, diagnosis of breast masses, treatment planning, and follow-up of disease progression. Moreover, it has considerable potential over retrospective approaches in consistent feature extraction and precise lesion classification.
Affiliation(s)
- Tariq Mahmood
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Division of Science and Technology, Department of Information Sciences, University of Education, Lahore, Pakistan
| | - Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Beijing Engineering Research Center for IoT Software and Systems, Beijing, China
| | - Yan Pei
- Computer Science Division, University of Aizu, Aizuwakamatsu, Fukushima, Japan
- * E-mail:
| | - Faheem Akhtar
- Department of Computer Science, Sukkur IBA University, Sukkur, Pakistan
| | - Mujeeb Ur Rehman
- Radiology Department, Continental Medical College and Hayat Memorial Teaching Hospital, Lahore, Pakistan
| | - Shahbaz Hassan Wasti
- Division of Science and Technology, Department of Information Sciences, University of Education, Lahore, Pakistan
| |
|
26
|
Automated Breast Cancer Detection Models Based on Transfer Learning. SENSORS 2022; 22:s22030876. [PMID: 35161622 PMCID: PMC8838322 DOI: 10.3390/s22030876] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Revised: 12/28/2021] [Accepted: 01/19/2022] [Indexed: 02/06/2023]
Abstract
Breast cancer is among the leading causes of mortality for women worldwide. It is essential for the well-being of women to develop early detection and diagnosis techniques. In mammography, attention has turned to the use of deep learning (DL) models, which radiologists can use to enhance interpretation and overcome the shortcomings of human observers. The transfer learning method is used to distinguish malignant from benign breast cancer by fine-tuning multiple pre-trained models. In this study, we introduce a framework based on the principle of transfer learning. In addition, a mixture of augmentation strategies was used to prevent overfitting and produce stable outcomes by increasing the number of mammographic images; these include several rotation combinations, scaling, and shifting. On the Mammographic Image Analysis Society (MIAS) dataset, the proposed system achieved an accuracy of 89.5% using ResNet50 (residual network-50) and an accuracy of 70% using the NASNet-Mobile network. The proposed system demonstrated that pre-trained classification networks are significantly more effective and efficient, making them more suitable for medical imaging, particularly with small training datasets.
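A minimal sketch of this setup follows: a pre-trained ResNet50 re-headed for benign/malignant classification, with rotation, scaling, and shifting augmentations. The specific augmentation ranges are illustrative assumptions.

```python
import torch.nn as nn
from torchvision import models, transforms

train_transform = transforms.Compose([
    transforms.RandomRotation(degrees=30),                                       # rotation combinations
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1), scale=(0.9, 1.1)),  # shifting and scaling
    transforms.ToTensor(),
])

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)  # ImageNet pre-training
model.fc = nn.Linear(model.fc.in_features, 2)                           # benign vs. malignant head
```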
|
27
|
Shah SM, Khan RA, Arif S, Sajid U. Artificial intelligence for breast cancer analysis: Trends & directions. Comput Biol Med 2022; 142:105221. [PMID: 35016100 DOI: 10.1016/j.compbiomed.2022.105221] [Citation(s) in RCA: 28] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2021] [Revised: 01/03/2022] [Accepted: 01/03/2022] [Indexed: 12/18/2022]
Abstract
Breast cancer is one of the leading causes of death among women. Early detection of breast cancer can significantly improve the lives of millions of women across the globe. Given the importance of finding a solution/framework for early detection and diagnosis, many AI researchers have recently focused on automating this task. Other reasons for the surge in research activity in this direction are the advent of robust AI algorithms (deep learning), the availability of hardware that can run/train those robust and complex AI algorithms, and the accessibility of datasets large enough for training AI algorithms. The imaging modalities that researchers have exploited to automate the task of breast cancer detection are mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. This article analyzes these imaging modalities and presents their strengths and limitations. It also lists resources from which their datasets can be accessed for research purposes. The article then summarizes AI and computer vision based state-of-the-art methods proposed in the last decade to detect breast cancer using various imaging modalities. Primarily, we have focused on reviewing frameworks that report results on mammograms, as this is the most widely used breast imaging modality and serves as the first test that medical practitioners usually prescribe for the detection of breast cancer. Another reason for focusing on mammography is the availability of labelled datasets. Dataset availability is one of the most important aspects of developing AI-based frameworks, as such algorithms are data-hungry and dataset quality generally affects the performance of AI-based algorithms. In a nutshell, this research article will act as a primary resource for the research community working in the field of automated breast imaging analysis.
Affiliation(s)
- Shahid Munir Shah
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
| | - Rizwan Ahmed Khan
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan.
| | - Sheeraz Arif
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
| | - Unaiza Sajid
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
| |
|
28
|
Deep hybrid architectures for binary classification of medical breast cancer images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103226] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
|
29
|
Mridha MF, Hamid MA, Monowar MM, Keya AJ, Ohi AQ, Islam MR, Kim JM. A Comprehensive Survey on Deep-Learning-Based Breast Cancer Diagnosis. Cancers (Basel) 2021; 13:6116. [PMID: 34885225 PMCID: PMC8656730 DOI: 10.3390/cancers13236116] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Revised: 11/25/2021] [Accepted: 12/01/2021] [Indexed: 12/11/2022] Open
Abstract
Breast cancer is now the most frequently diagnosed cancer in women, and its incidence is gradually increasing. Encouragingly, there is a good chance of recovery from breast cancer if it is identified and treated at an early stage. Therefore, several researchers have established deep-learning-based automated methods for their efficiency and accuracy in predicting the growth of cancer cells using medical imaging modalities. To date, the few available review studies on breast cancer diagnosis summarize some existing work but do not address emerging architectures and modalities. This review focuses on the evolving architectures of deep learning for breast cancer detection. In what follows, this survey presents existing deep-learning-based architectures, analyzes the strengths and limitations of the existing studies, examines the datasets used, and reviews image pre-processing techniques. Furthermore, a concrete review of diverse imaging modalities, performance metrics and results, challenges, and research directions for future researchers is presented.
Affiliation(s)
- Muhammad Firoz Mridha
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (A.J.K.); (A.Q.O.)
| | - Md. Abdul Hamid
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; (M.A.H.); (M.M.M.)
| | - Muhammad Mostafa Monowar
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; (M.A.H.); (M.M.M.)
| | - Ashfia Jannat Keya
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (A.J.K.); (A.Q.O.)
| | - Abu Quwsar Ohi
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (A.J.K.); (A.Q.O.)
| | - Md. Rashedul Islam
- Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh;
| | - Jong-Myon Kim
- Department of Electrical, Electronics, and Computer Engineering, University of Ulsan, Ulsan 680-749, Korea
| |
|
30
|
Buda M, Saha A, Walsh R, Ghate S, Li N, Święcicki A, Lo JY, Mazurowski MA. A Data Set and Deep Learning Algorithm for the Detection of Masses and Architectural Distortions in Digital Breast Tomosynthesis Images. JAMA Netw Open 2021; 4:e2119100. [PMID: 34398205 PMCID: PMC8369362 DOI: 10.1001/jamanetworkopen.2021.19100] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
Abstract
IMPORTANCE Breast cancer screening is among the most common radiological tasks, with more than 39 million examinations performed each year. While it has been among the most studied medical imaging applications of artificial intelligence, the development and evaluation of algorithms are hindered by the lack of well-annotated, large-scale publicly available data sets. OBJECTIVES To curate, annotate, and make publicly available a large-scale data set of digital breast tomosynthesis (DBT) images to facilitate the development and evaluation of artificial intelligence algorithms for breast cancer screening; to develop a baseline deep learning model for breast cancer detection; and to test this model using the data set to serve as a baseline for future research. DESIGN, SETTING, AND PARTICIPANTS In this diagnostic study, 16 802 DBT examinations with at least 1 reconstruction view available, performed between August 26, 2014, and January 29, 2018, were obtained from Duke Health System and analyzed. From the initial cohort, examinations were divided into 4 groups and split into training and test sets for the development and evaluation of a deep learning model. Images with foreign objects or spot compression views were excluded. Data analysis was conducted from January 2018 to October 2020. EXPOSURES Screening DBT. MAIN OUTCOMES AND MEASURES The detection algorithm was evaluated with breast-based free-response receiver operating characteristic curve and sensitivity at 2 false positives per volume. RESULTS The curated data set contained 22 032 reconstructed DBT volumes that belonged to 5610 studies from 5060 patients with a mean (SD) age of 55 (11) years and 5059 (100.0%) women. This included 4 groups of studies: (1) 5129 (91.4%) normal studies; (2) 280 (5.0%) actionable studies, for which additional imaging was needed but no biopsy was performed; (3) 112 (2.0%) benign biopsied studies; and (4) 89 studies (1.6%) with cancer. Our data set included masses and architectural distortions that were annotated by 2 experienced radiologists. Our deep learning model reached breast-based sensitivity of 65% (39 of 60; 95% CI, 56%-74%) at 2 false positives per DBT volume on a test set of 460 examinations from 418 patients. CONCLUSIONS AND RELEVANCE The large, diverse, and curated data set presented in this study could facilitate the development and evaluation of artificial intelligence algorithms for breast cancer screening by providing data for training as well as a common set of cases for model validation. The performance of the model developed in this study showed that the task remains challenging; its performance could serve as a baseline for future model development.
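The headline metric here, sensitivity at a fixed number of false positives per volume, is one operating point on the FROC curve. The sketch below assumes a flat list of scored detections in which each true positive hits a distinct lesion, a simplification of full FROC bookkeeping.

```python
import numpy as np

def sensitivity_at_fp(scores, is_tp, n_volumes, n_lesions, fp_per_volume=2.0):
    """scores: detection confidences; is_tp: 1 where a detection hits a lesion."""
    order = np.argsort(scores)[::-1]           # rank detections by confidence
    hits = np.asarray(is_tp)[order]
    fps = np.cumsum(1 - hits)
    tps = np.cumsum(hits)
    ok = fps <= fp_per_volume * n_volumes      # thresholds that stay within the FP budget
    return tps[ok][-1] / n_lesions if ok.any() else 0.0
```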
Affiliation(s)
- Mateusz Buda
- Department of Radiology, Duke University Medical Center, Durham, North Carolina
| | - Ashirbani Saha
- Department of Radiology, Duke University Medical Center, Durham, North Carolina
| | - Ruth Walsh
- Department of Radiology, Duke University Medical Center, Durham, North Carolina
| | - Sujata Ghate
- Department of Radiology, Duke University Medical Center, Durham, North Carolina
| | - Nianyi Li
- Department of Radiology, Duke University Medical Center, Durham, North Carolina
| | - Albert Święcicki
- Department of Radiology, Duke University Medical Center, Durham, North Carolina
| | - Joseph Y. Lo
- Department of Radiology, Duke University Medical Center, Durham, North Carolina
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina
| | - Maciej A. Mazurowski
- Department of Radiology, Duke University Medical Center, Durham, North Carolina
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina
- Department of Computer Science, Duke University, Durham, North Carolina
- Department of Biostatistics and Bioinformatics, Duke University Medical Center, Durham, North Carolina
| |
|
31
|
Yoon JH, Kim EK. Deep Learning-Based Artificial Intelligence for Mammography. Korean J Radiol 2021; 22:1225-1239. [PMID: 33987993 PMCID: PMC8316774 DOI: 10.3348/kjr.2020.1210] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2020] [Revised: 01/11/2021] [Accepted: 01/17/2021] [Indexed: 12/27/2022] Open
Abstract
During the past decade, researchers have investigated the use of computer-aided mammography interpretation. With the application of deep learning technology, artificial intelligence (AI)-based algorithms for mammography have shown promising results in the quantitative assessment of parenchymal density, detection and diagnosis of breast cancer, and prediction of breast cancer risk, enabling more precise patient management. AI-based algorithms may also enhance the efficiency of the interpretation workflow by reducing both the workload and interpretation time. However, more in-depth investigation is required to conclusively prove the effectiveness of AI-based algorithms. This review article discusses how AI algorithms can be applied to mammography interpretation as well as the current challenges in its implementation in real-world practice.
Affiliation(s)
- Jung Hyun Yoon
- Department of Radiology, Severance Hospital, Research Institute of Radiological Science, Seoul, Korea
| | - Eun Kyung Kim
- Department of Radiology, Yongin Severance Hospital, Yonsei University, College of Medicine, Yongin, Korea.
| |
|
32
|
Bai J, Posner R, Wang T, Yang C, Nabavi S. Applying deep learning in digital breast tomosynthesis for automatic breast cancer detection: A review. Med Image Anal 2021; 71:102049. [PMID: 33901993 DOI: 10.1016/j.media.2021.102049] [Citation(s) in RCA: 38] [Impact Index Per Article: 12.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2020] [Revised: 02/11/2021] [Accepted: 03/19/2021] [Indexed: 02/07/2023]
Abstract
The relatively recent reintroduction of deep learning has been a revolutionary force in the interpretation of diagnostic imaging studies. However, the technology used to acquire those images is undergoing a revolution itself at the very same time. Digital breast tomosynthesis (DBT) is one such technology, which has transformed the field of breast imaging. DBT, a form of three-dimensional mammography, is rapidly replacing the traditional two-dimensional mammograms. These parallel developments in both the acquisition and interpretation of breast images present a unique case study in how modern AI systems can be designed to adapt to new imaging methods. They also present a unique opportunity for co-development of both technologies that can better improve the validity of results and patient outcomes. In this review, we explore the ways in which deep learning can be best integrated into breast cancer screening workflows using DBT. We first explain the principles behind DBT itself and why it has become the gold standard in breast screening. We then survey the foundations of deep learning methods in diagnostic imaging, and review the current state of research into AI-based DBT interpretation. Finally, we present some of the limitations of integrating AI into clinical practice and the opportunities these present in this burgeoning field.
Affiliation(s)
- Jun Bai
- Department of Computer Science and Engineering, University of Connecticut, 371 Fairfield Way, Storrs, CT 06269, USA
| | - Russell Posner
- University of Connecticut School of Medicine, 263 Farmington Ave. Farmington, CT 06030, USA
| | - Tianyu Wang
- Department of Computer Science and Engineering, University of Connecticut, 371 Fairfield Way, Storrs, CT 06269, USA
| | - Clifford Yang
- University of Connecticut School of Medicine, 263 Farmington Ave. Farmington, CT 06030, USA; Department of Radiology, UConn Health, 263 Farmington Ave. Farmington, CT 06030, USA
| | - Sheida Nabavi
- Department of Computer Science and Engineering, University of Connecticut, 371 Fairfield Way, Storrs, CT 06269, USA.
| |
|
33
|
Wang J, Zhu H, Wang SH, Zhang YD. A Review of Deep Learning on Medical Image Analysis. MOBILE NETWORKS AND APPLICATIONS 2021; 26:351-380. [DOI: 10.1007/s11036-020-01672-7] [Citation(s) in RCA: 51] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 10/20/2020] [Indexed: 08/30/2023]
|
34
|
A framework for breast cancer classification using Multi-DCNNs. Comput Biol Med 2021; 131:104245. [PMID: 33556893 DOI: 10.1016/j.compbiomed.2021.104245] [Citation(s) in RCA: 45] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Revised: 01/22/2021] [Accepted: 01/23/2021] [Indexed: 11/20/2022]
Abstract
BACKGROUND Deep learning (DL) is the fastest-growing field of machine learning (ML). Deep convolutional neural networks (DCNN) are currently the main tool used for image analysis and classification purposes. There are several DCNN architectures, among them AlexNet, GoogLeNet, and residual networks (ResNet). METHOD This paper presents a new computer-aided diagnosis (CAD) system based on feature extraction and classification using DL techniques to help radiologists classify breast cancer lesions in mammograms. This is investigated through four different experiments to determine the optimum approach. The first consists of end-to-end pre-trained, fine-tuned DCNN networks. In the second, the deep features of the DCNNs are extracted and fed to a support vector machine (SVM) classifier with different kernel functions. The third experiment performs deep feature fusion to demonstrate that combining deep features enhances the accuracy of the SVM classifiers. Finally, in the fourth experiment, principal component analysis (PCA) is introduced to reduce the large feature vector produced by feature fusion and to decrease the computational cost. The experiments are performed on two datasets: (1) the curated breast imaging subset of the digital database for screening mammography (CBIS-DDSM) and (2) the mammographic image analysis society digital mammogram database (MIAS). RESULTS The accuracy achieved using deep feature fusion on both datasets proved to be the highest compared with state-of-the-art CAD systems. Conversely, applying PCA to the fused feature sets did not improve accuracy; however, it decreased the computational cost by reducing the execution time.
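Experiments three and four map naturally onto a short scikit-learn pipeline: fuse the two feature sets, optionally reduce them with PCA, and train an SVM. Feature extraction is omitted; the array shapes and number of components are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def build_fusion_classifier(deep_feats, other_feats, labels, n_components=64):
    fused = np.hstack([deep_feats, other_feats])          # feature-level fusion
    clf = make_pipeline(StandardScaler(),
                        PCA(n_components=n_components),   # shrink the large fused vector
                        SVC(kernel='rbf'))
    return clf.fit(fused, labels)
```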
|
35
|
Zerouaoui H, Idri A. Reviewing Machine Learning and Image Processing Based Decision-Making Systems for Breast Cancer Imaging. J Med Syst 2021; 45:8. [PMID: 33404910 DOI: 10.1007/s10916-020-01689-1] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2020] [Accepted: 12/01/2020] [Indexed: 01/11/2023]
Abstract
Breast cancer (BC) is the leading cause of death among women worldwide. It generally affects women older than 40 years. Medical image analysis is one of the most promising research areas, since it supports diagnosis and decision-making for several diseases such as BC. This paper conducts a Structured Literature Review (SLR) of the use of Machine Learning (ML) and Image Processing (IP) techniques to deal with BC imaging. A set of 530 papers published between 2000 and August 2019 was selected and analyzed according to ten criteria: year and publication channel, empirical type, research type, medical task, machine learning techniques, datasets used, validation methods, performance measures, and image processing techniques, the last of which includes image pre-processing, segmentation, feature extraction, and feature selection. Results showed that diagnosis was the most common medical task and that Deep Learning (DL) techniques were largely used to perform classification. Furthermore, we found that classification was the most investigated ML objective, followed by prediction and clustering. Most of the selected studies used mammograms as the imaging modality rather than ultrasound or magnetic resonance imaging, relying on public or private datasets, with MIAS the most frequently investigated public dataset. As for image processing techniques, the majority of the selected studies pre-process their input images by reducing noise and normalizing colors, and some use segmentation to extract the region of interest with the thresholding method. For feature extraction, researchers extracted the relevant features using classical feature extraction techniques (e.g., texture features, shape features) or DL techniques (e.g., VGG16, VGG19, ResNet), and finally, few papers used feature selection techniques, in particular filter methods.
Affiliation(s)
- Hasnae Zerouaoui
- Modeling, Simulation and Data Analysis, Mohammed VI Polytechnic University, Benguerir, Morocco
| | - Ali Idri
- Modeling, Simulation and Data Analysis, Mohammed VI Polytechnic University, Benguerir, Morocco. .,Software Project Management Research Team, ENSIAS, Mohammed V University in Rabat, Rabat, Morocco.
| |
|
36
|
Identifying multimodal signatures underlying the somatic comorbidity of psychosis: the COMMITMENT roadmap. Mol Psychiatry 2021; 26:722-724. [PMID: 33060817 PMCID: PMC7910206 DOI: 10.1038/s41380-020-00915-z] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/27/2020] [Revised: 09/16/2020] [Accepted: 10/02/2020] [Indexed: 11/25/2022]
|
37
|
Wang L, Zheng C, Chen W, He Q, Li X, Zhang S, Qin G, Chen W, Wei J, Xie P, Zhou L, Wang X, Zhen X. Multi-path synergic fusion deep neural network framework for breast mass classification using digital breast tomosynthesis. Phys Med Biol 2020; 65:235045. [DOI: 10.1088/1361-6560/abaeb7] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
|
38
|
Sechopoulos I, Teuwen J, Mann R. Artificial intelligence for breast cancer detection in mammography and digital breast tomosynthesis: State of the art. Semin Cancer Biol 2020; 72:214-225. [PMID: 32531273 DOI: 10.1016/j.semcancer.2020.06.002] [Citation(s) in RCA: 81] [Impact Index Per Article: 20.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2019] [Revised: 05/19/2020] [Accepted: 06/01/2020] [Indexed: 02/07/2023]
Abstract
Screening for breast cancer with mammography has been introduced in various countries over the last 30 years, initially using analog screen-film-based systems and, over the last 20 years, transitioning to the use of fully digital systems. With the introduction of digitization, the computer interpretation of images has been a subject of intense interest, resulting in the introduction of computer-aided detection (CADe) and diagnosis (CADx) algorithms in the early 2000s. Although they were introduced with high expectations, the potential improvement in the clinical realm failed to materialize, mostly due to the high number of false positive marks per analyzed image. In the last five years, the artificial intelligence (AI) revolution in computing, driven mostly by deep learning and convolutional neural networks, has also pervaded the field of automated breast cancer detection in digital mammography and digital breast tomosynthesis. Research in this area first involved comparison of its capabilities to those of conventional CADe/CADx methods, which quickly demonstrated the potential of this new technology. In the last couple of years, more mature methods, including some commercial products, have been developed, and studies of their performance compared to that of experienced breast radiologists show that these algorithms are on par with human performance in retrospective data sets. Although additional studies, especially prospective evaluations performed in the real screening environment, are needed, it is becoming clear that AI will have an important role in the future of breast cancer screening. Exactly how this new player will shape this field remains to be determined, but recent studies are already evaluating different options for the implementation of this technology. The aim of this review is to provide an overview of the basic concepts and developments in the field of AI for breast cancer detection in digital mammography and digital breast tomosynthesis. The pitfalls of conventional methods, and how these are, for the most part, avoided by this new technology, are discussed. Importantly, studies that have evaluated the current capabilities of AI and proposals for how these capabilities should be leveraged in the clinical realm are reviewed, while the questions that need to be answered before this vision becomes a reality are posed.
Affiliation(s)
- Ioannis Sechopoulos
- Department of Medical Imaging, Radboud University Medical Center, Geert Grooteplein 10, 6525 GA, Nijmegen, the Netherlands; Dutch Expert Centre for Screening (LRCB), Wijchenseweg 101, 6538 SW, Nijmegen, the Netherlands.
| | - Jonas Teuwen
- Department of Medical Imaging, Radboud University Medical Center, Geert Grooteplein 10, 6525 GA, Nijmegen, the Netherlands; Department of Radiation Oncology, Netherlands Cancer Institute (NKI), Plesmanlaan 121, 1066 CX, Amsterdam, the Netherlands.
| | - Ritse Mann
- Department of Medical Imaging, Radboud University Medical Center, Geert Grooteplein 10, 6525 GA, Nijmegen, the Netherlands; Department of Radiology, Netherlands Cancer Institute (NKI), Plesmanlaan 121, 1066 CX, Amsterdam, the Netherlands.
| |
|
39
|
Samala RK, Chan HP, Hadjiiski LM, Helvie MA, Richter CD. Generalization error analysis for deep convolutional neural network with transfer learning in breast cancer diagnosis. Phys Med Biol 2020; 65:105002. [PMID: 32208369 DOI: 10.1088/1361-6560/ab82e8] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Deep convolutional neural networks (DCNN), now popularly called artificial intelligence (AI), have shown the potential to improve over the computer-assisted tools for medical imaging developed in past decades. A DCNN has millions of free parameters that need to be trained, but the training sample set is limited in size for most medical imaging tasks, so transfer learning is typically used. Automatic data mining may be an efficient way to enlarge the collected data set, but the data can be noisy, with incorrect labels or even the wrong type of image. In this work we studied the generalization error of DCNNs with transfer learning in medical imaging for the task of classifying malignant and benign masses on mammograms. With a finite available data set, we simulated a training set containing corrupted data or noisy labels. The balance between learning and memorization in the DCNN was manipulated by varying the proportion of corrupted data in the training set. The generalization error of the DCNN was analyzed via the area under the receiver operating characteristic curve for the training and test sets and the weight changes after transfer learning. The study demonstrates that the transfer learning strategy for such tasks needs to be designed properly, taking into consideration the constraints of an available training set of limited size and quality for the classification task at hand, to minimize memorization and improve generalizability.
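The corruption experiment can be sketched as follows: flip a controlled proportion of training labels, retrain, and compare training AUC (on the noisy labels the model actually fit) against test AUC. A scikit-learn style classifier stands in for the DCNN here; the stand-in and the proportions are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def corrupt_labels(y, proportion, seed=0):
    rng = np.random.default_rng(seed)
    y = np.asarray(y).copy()
    idx = rng.choice(len(y), size=int(proportion * len(y)), replace=False)
    y[idx] = 1 - y[idx]                        # flip a fraction of the binary labels
    return y

def memorization_probe(model, X_tr, y_tr, X_te, y_te, proportion):
    y_noisy = corrupt_labels(y_tr, proportion)
    model.fit(X_tr, y_noisy)
    auc_train = roc_auc_score(y_noisy, model.predict_proba(X_tr)[:, 1])  # high if noise is memorized
    auc_test = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    return auc_train, auc_test                 # a widening gap signals memorization
```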
Affiliation(s)
- Ravi K Samala
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109-5842, United States of America
| | | | | | | | | |
|
40
|
Chan HP, Samala RK, Hadjiiski LM. CAD and AI for breast cancer-recent development and challenges. Br J Radiol 2020; 93:20190580. [PMID: 31742424 PMCID: PMC7362917 DOI: 10.1259/bjr.20190580] [Citation(s) in RCA: 80] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2019] [Revised: 11/13/2019] [Accepted: 11/17/2019] [Indexed: 12/15/2022] Open
Abstract
Computer-aided diagnosis (CAD) has been a popular area of research and development in the past few decades. In CAD, machine learning methods and multidisciplinary knowledge and techniques are used to analyze patient information, and the results can be used to assist clinicians in their decision-making process. CAD may analyze imaging information alone or in combination with other clinical data. It may provide the analyzed information directly to the clinician or correlate the analyzed results with the likelihood of certain diseases based on statistical modeling of past cases in the population. CAD systems can be developed to provide decision support for many applications in patient care, such as lesion detection, characterization, cancer staging, treatment planning and response assessment, and recurrence and prognosis prediction. The new state-of-the-art machine learning technique, known as deep learning (DL), has revolutionized speech and text recognition as well as computer vision. The potential for major breakthroughs by DL in medical image analysis and other CAD applications for patient care has brought about unprecedented excitement about applying CAD, or artificial intelligence (AI), to medicine in general and to radiology in particular. In this paper, we provide an overview of recent developments in CAD using DL in breast imaging and discuss some challenges and practical issues that may impact the advancement of artificial intelligence and its integration into clinical workflow.
Affiliation(s)
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, MI, United States
| | - Ravi K. Samala
- Department of Radiology, University of Michigan, Ann Arbor, MI, United States
| | | |
|
41
|
Wang X, Liang G, Zhang Y, Blanton H, Bessinger Z, Jacobs N. Inconsistent Performance of Deep Learning Models on Mammogram Classification. J Am Coll Radiol 2020; 17:796-803. [PMID: 32068005 DOI: 10.1016/j.jacr.2020.01.006] [Citation(s) in RCA: 58] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2019] [Revised: 01/10/2020] [Accepted: 01/12/2020] [Indexed: 12/13/2022]
Abstract
OBJECTIVES Performance of recently developed deep learning models for image classification surpasses that of radiologists. However, there are questions about model performance consistency and generalization in unseen external data. The purpose of this study is to determine whether the high performance of deep learning on mammograms can be transferred to external data with a different data distribution. MATERIALS AND METHODS Six deep learning models (three published models with high performance and three models designed by us) were evaluated on four different mammogram data sets, including three public (Digital Database for Screening Mammography, INbreast, and Mammographic Image Analysis Society) and one private data set (UKy). The models were trained and validated on either the Digital Database for Screening Mammography alone or a combined data set that included the Digital Database for Screening Mammography. The models were then tested on the three external data sets. The area under the receiver operating characteristic curve (auROC) was used to evaluate model performance. RESULTS The three published models reported validation auROC scores between 0.88 and 0.95 on the validation data set. Our models achieved between 0.71 (95% confidence interval [CI]: 0.70-0.72) and 0.79 (95% CI: 0.78-0.80) auROC on the same validation data set. However, the performance of all six models on the three external test data sets decreased significantly, to between only 0.44 (95% CI: 0.43-0.45) and 0.65 (95% CI: 0.64-0.66). CONCLUSION Our results demonstrate performance inconsistency across the data sets and models, indicating that the high performance of deep learning models on one data set cannot be readily transferred to unseen external data sets, and these models need further assessment and validation before being applied in clinical practice.
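Each auROC above is reported with a 95% confidence interval; a common way to obtain one is the bootstrap sketched below. The number of resamples is an illustrative choice, and this is not necessarily the CI method the authors used.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_ci(y_true, y_score, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))   # resample cases with replacement
        if len(np.unique(y_true[idx])) < 2:               # need both classes for an AUC
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [2.5, 97.5])
    return roc_auc_score(y_true, y_score), (lo, hi)
```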
Affiliation(s)
- Xiaoqin Wang
- Department of Radiology, University of Kentucky, Lexington, Kentucky; Markey Cancer Center, University of Kentucky, Lexington, Kentucky.
| | - Gongbo Liang
- Department of Computer Science, University of Kentucky, Lexington, Kentucky
| | - Yu Zhang
- Department of Computer Science, University of Kentucky, Lexington, Kentucky
| | - Hunter Blanton
- Department of Computer Science, University of Kentucky, Lexington, Kentucky
| | - Zachary Bessinger
- Department of Computer Science, University of Kentucky, Lexington, Kentucky
| | - Nathan Jacobs
- Department of Computer Science, University of Kentucky, Lexington, Kentucky
| |
|
42
|
Lai X, Yang W, Li R. DBT Masses Automatic Segmentation Using U-Net Neural Networks. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2020; 2020:7156165. [PMID: 32411285 PMCID: PMC7204342 DOI: 10.1155/2020/7156165] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/12/2019] [Revised: 12/17/2019] [Accepted: 12/18/2019] [Indexed: 12/02/2022]
Abstract
To improve the automatic segmentation accuracy of breast masses in digital breast tomosynthesis (DBT) images, we propose a DBT mass automatic segmentation algorithm using a U-Net architecture. First, to suppress background tissue noise and enhance the contrast of the mass candidate regions, a constraint matrix is constructed after the top-hat transform of the DBT images and multiplied with the DBT image. Second, an efficient U-Net neural network is built, and image patches are extracted and augmented to establish the training dataset for the U-Net model. A pre-segmentation of the DBT tumors is then performed, initially classifying each pixel into one of two labels. Finally, all regions smaller than 50 voxels are considered false positives and removed, and a median filter smooths the mass boundaries to obtain the final segmentation results. The proposed method can effectively improve the performance of automatic mass segmentation in DBT images. Using detection Accuracy (Acc), Sensitivity (Sen), Specificity (Spe), and area under the curve (AUC) as evaluation indexes, the Acc, Sen, Spe, and AUC for DBT mass segmentation on the entire experimental dataset are 0.871, 0.869, 0.882, and 0.859, respectively. Our proposed U-Net-based DBT mass automatic segmentation system obtains promising results, superior to some classical architectures, and may be expected to have clinical application prospects.
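The post-processing stage, removing components smaller than 50 voxels and median-smoothing the remaining boundaries, can be sketched directly. The median filter size is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.morphology import remove_small_objects

def postprocess_mask(mask: np.ndarray) -> np.ndarray:
    """mask: boolean array of per-pixel U-Net predictions."""
    cleaned = remove_small_objects(mask.astype(bool), min_size=50)        # drop likely false positives
    return median_filter(cleaned.astype(np.uint8), size=3).astype(bool)  # smooth mass boundaries
```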
Affiliation(s)
- Xiaobo Lai
- College of Medical Technology, Zhejiang Chinese Medical University, Hangzhou 310053, China
| | - Weiji Yang
- College of Life Science, Zhejiang Chinese Medical University, Hangzhou 310053, China
| | - Ruipeng Li
- Hangzhou Third People's Hospital, Hangzhou 310009, China
| |
|
43
|
Whitney HM, Li H, Ji Y, Liu P, Giger ML. Comparison of Breast MRI Tumor Classification Using Human-Engineered Radiomics, Transfer Learning From Deep Convolutional Neural Networks, and Fusion Methods. PROCEEDINGS OF THE IEEE. INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS 2020; 108:163-177. [PMID: 34045769 PMCID: PMC8152568 DOI: 10.1109/jproc.2019.2950187] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Digital image-based signatures of breast tumors may ultimately contribute to the design of patient-specific breast cancer diagnostics and treatments. Beyond traditional human-engineered computer vision methods, tumor classification methods using transfer learning from deep convolutional neural networks (CNNs) are actively under development. This article will first discuss our progress in using CNN-based transfer learning to characterize breast tumors for various diagnostic, prognostic, or predictive image-based tasks across multiple imaging modalities, including mammography, digital breast tomosynthesis, ultrasound (US), and magnetic resonance imaging (MRI), compared to both human-engineered feature-based radiomics and fusion classifiers created through combination of such features. Second, a new study is presented that reports on a comprehensive comparison of the classification performances of features derived from human-engineered radiomic features, CNN transfer learning, and fusion classifiers for breast lesions imaged with MRI. These studies demonstrate the utility of transfer learning for computer-aided diagnosis and highlight the synergistic improvement in classification performance using fusion classifiers.
Affiliation(s)
- Heather M Whitney
- Department of Radiology, The University of Chicago, Chicago, IL 60637 USA, and also with the Department of Physics, Wheaton College, Wheaton, IL 60187 USA
| | - Hui Li
- Department of Radiology, The University of Chicago, Chicago, IL 60637 USA
| | - Yu Ji
- Department of Breast Imaging, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin Medical University, Tianjin 30060, China
| | - Peifang Liu
- Department of Breast Imaging, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin Medical University, Tianjin 30060, China
| | - Maryellen L Giger
- Department of Radiology, The University of Chicago, Chicago, IL 60637 USA
| |
|
44
|
Arefan D, Mohamed AA, Berg WA, Zuley ML, Sumkin JH, Wu S. Deep learning modeling using normal mammograms for predicting breast cancer risk. Med Phys 2019; 47:110-118. [PMID: 31667873 DOI: 10.1002/mp.13886] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2018] [Revised: 08/30/2019] [Accepted: 10/16/2019] [Indexed: 02/06/2023] Open
Abstract
PURPOSE To investigate two deep learning-based modeling schemes for predicting short-term risk of developing breast cancer using prior normal screening digital mammograms in a case-control setting. METHODS We conducted a retrospective Institutional Review Board-approved study on a case-control cohort of 226 patients (including 113 women diagnosed with breast cancer and 113 controls) who underwent general population breast cancer screening. For each patient, a prior normal (i.e., with negative or benign findings) digital mammogram examination (two images: a mediolateral oblique (MLO) view and a craniocaudal (CC) view) was collected. Thus, a total of 452 normal images (226 MLO view images and 226 CC view images) from this case-control cohort were analyzed to predict the outcome, i.e., developing breast cancer (cancer cases) or remaining breast cancer-free (controls) within the follow-up period. We implemented an end-to-end deep learning model and a GoogLeNet-LDA model and compared their effects in several experimental settings using the two mammographic view images and two different subregions of the images as model inputs. The proposed models were also compared to logistic regression modeling of mammographic breast density. Area under the receiver operating characteristic curve (AUC) was used as the model performance metric. RESULTS The highest AUC was 0.73 [95% Confidence Interval (CI): 0.68-0.78; GoogLeNet-LDA model on CC view] when using the whole breast and 0.72 (95% CI: 0.67-0.76; GoogLeNet-LDA model on MLO + CC view) when using the dense tissue as the model input. The GoogLeNet-LDA model significantly (all P < 0.05) outperformed the end-to-end GoogLeNet model in all experiments. The CC view was consistently more predictive than the MLO view in both deep learning models, regardless of the input subregions. Both models exhibited performance superior to percent breast density (AUC = 0.54; 95% CI: 0.49-0.59). CONCLUSIONS The proposed deep learning modeling approach can predict short-term breast cancer risk using normal screening mammogram images. Larger studies are needed to further reveal the promise of deep learning in enhancing imaging-based breast cancer risk assessment.
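The hybrid GoogLeNet-LDA scheme, deep features feeding a classical classifier rather than end-to-end training, can be sketched with torchvision's googlenet as a stand-in backbone. The pooled 1024-d features and the exact layer tapped are assumptions about the authors' setup.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

backbone = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()                  # expose the 1024-d pooled features
backbone.eval()

@torch.no_grad()
def extract_features(batch):                 # batch: (N, 3, 224, 224) image tensor
    return backbone(batch).numpy()

lda = LinearDiscriminantAnalysis()
# lda.fit(extract_features(train_images), train_labels)
# risk = lda.predict_proba(extract_features(test_images))[:, 1]
```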
Affiliation(s)
- Dooman Arefan
- Department of Radiology, University of Pittsburgh, School of Medicine, 4200 Fifth Ave, Pittsburgh, PA, 15260, USA
| | - Aly A Mohamed
- Department of Radiology, University of Pittsburgh, School of Medicine, 4200 Fifth Ave, Pittsburgh, PA, 15260, USA
| | - Wendie A Berg
- Department of Radiology, University of Pittsburgh, School of Medicine, 4200 Fifth Ave, Pittsburgh, PA, 15260, USA.,Magee-Womens Hospital of University of Pittsburgh Medical Center, 300 Halket St, Pittsburgh, PA, 15213, USA
| | - Margarita L Zuley
- Department of Radiology, University of Pittsburgh, School of Medicine, 4200 Fifth Ave, Pittsburgh, PA, 15260, USA.,Magee-Womens Hospital of University of Pittsburgh Medical Center, 300 Halket St, Pittsburgh, PA, 15213, USA
| | - Jules H Sumkin
- Department of Radiology, University of Pittsburgh, School of Medicine, 4200 Fifth Ave, Pittsburgh, PA, 15260, USA.,Magee-Womens Hospital of University of Pittsburgh Medical Center, 300 Halket St, Pittsburgh, PA, 15213, USA
| | - Shandong Wu
- Departments of Radiology, Biomedical Informatics, Bioengineering, and Intelligent Systems Program, University of Pittsburgh, 4200 Fifth Ave, Pittsburgh, PA, 15260, USA
| |
|
45
|
Li X, Qin G, He Q, Sun L, Zeng H, He Z, Chen W, Zhen X, Zhou L. Digital breast tomosynthesis versus digital mammography: integration of image modalities enhances deep learning-based breast mass classification. Eur Radiol 2019; 30:778-788. [DOI: 10.1007/s00330-019-06457-5] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2019] [Revised: 08/01/2019] [Accepted: 09/12/2019] [Indexed: 12/24/2022]
Affiliation(s)
- Xin Li
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, Guangdong, China
| | - Genggeng Qin
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, Guangdong, China
| | - Qiang He
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, Guangdong, China
| | - Lei Sun
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, Guangdong, China
| | - Hui Zeng
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, Guangdong, China
| | - Zilong He
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, Guangdong, China
| | - Weiguo Chen
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, Guangdong, China
| | - Xin Zhen
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, Guangdong, China.
| | - Linghong Zhou
- School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, Guangdong, China.
| |
|
46
|
Zou L, Yu S, Meng T, Zhang Z, Liang X, Xie Y. A Technical Review of Convolutional Neural Network-Based Mammographic Breast Cancer Diagnosis. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2019; 2019:6509357. [PMID: 31019547 PMCID: PMC6452645 DOI: 10.1155/2019/6509357] [Citation(s) in RCA: 42] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/14/2019] [Accepted: 02/25/2019] [Indexed: 12/27/2022]
Abstract
This study reviews convolutional neural network (CNN) techniques applied in the specific field of mammographic breast cancer diagnosis (MBCD). It aims to provide several clues on how to use CNNs for related tasks. MBCD is a long-standing problem, and numerous computer-aided diagnosis models have been proposed. CNN-based MBCD models can be broadly categorized into three groups. The first designs shallow models or modifies existing ones to decrease the time cost as well as the number of training instances; the second makes the best use of a pretrained CNN through transfer learning and fine-tuning; the third uses CNN models for feature extraction, with the differentiation of malignant lesions from benign ones fulfilled by machine learning classifiers. This study enrolls peer-reviewed journal publications and presents the technical details and the pros and cons of each model. Furthermore, the findings, challenges, and limitations are summarized, and some clues for future work are given. In conclusion, CNN-based MBCD is at an early stage, and there is still a long way to go toward the ultimate goal of using deep learning tools to facilitate clinical practice. This review benefits scientific researchers, industrial engineers, and those devoted to intelligent cancer diagnosis.
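As one concrete illustration of the second category (reusing a pretrained CNN through transfer learning and fine-tuning), the following hedged sketch freezes an ImageNet-pretrained backbone and trains only a newly attached binary head. The backbone choice (ResNet-18), hyperparameters, and data are illustrative assumptions, not examples drawn from the review.

```python
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the pretrained backbone so only the new head is trained at first;
# selected layers could later be unfrozen for fine-tuning.
for p in model.parameters():
    p.requires_grad = False

# Replace the 1000-class ImageNet head with a benign/malignant classifier.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on placeholder data.
images = torch.randn(8, 3, 224, 224)   # stand-in for mammogram ROI crops
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```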
Collapse
Affiliation(s)
- Lian Zou
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Cancer Center of Sichuan Provincial People's Hospital, Chengdu, China
| | - Shaode Yu
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Tiebao Meng
- Department of Medical Imaging, Sun Yat-sen University Cancer Center, Guangzhou, China
| | - Zhicheng Zhang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
| | - Xiaokun Liang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Medical Physics Division in the Department of Radiation Oncology, Stanford University, Palo Alto, CA, USA
| | - Yaoqin Xie
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| |
Collapse
|
47
|
Mumin NA, Rahmat K, Fadzli F, Ramli MT, Westerhout CJ, Ramli N, Rozalli FI, Ng KH. Diagnostic Efficacy of Synthesized 2D Digital Breast Tomosynthesis in Multi-ethnic Malaysian Population. Sci Rep 2019; 9:1459. [PMID: 30728394 PMCID: PMC6365555 DOI: 10.1038/s41598-018-37451-4] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2018] [Accepted: 11/23/2018] [Indexed: 11/11/2022] Open
Abstract
Synthesized 2D images can be reconstructed from tomosynthesis images in breast imaging. This study aims to investigate the diagnostic efficacy of synthesized 2D images (C-View) in comparison to full-field digital mammography (FFDM) when used with digital breast tomosynthesis (DBT) in a multi-ethnic Malaysian population. FFDM and C-View images (n = 380) were independently evaluated by three readers through Breast Imaging Reporting and Data System (BI-RADS) categorisation, breast density assessment, and lesion characterisation. Statistical analysis compared the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of C-View + DBT, with FFDM + DBT as the standard of reference. There was very good inter-reader agreement in BI-RADS category and density assessment between C-View + DBT and FFDM + DBT, with high sensitivity, specificity, PPV, and NPV of C-View + DBT when compared with FFDM + DBT. The PPV of C-View + DBT was comparable to that of FFDM + DBT with histopathology as the gold standard. Inter-reader agreement in BI-RADS category and density assessment was high for both FFDM + DBT and C-View + DBT. There was good agreement between FFDM + DBT and C-View + DBT in mass characterisation, and almost perfect agreement in calcification and asymmetric density. A 52.2% lower radiation dose was incurred when using C-View + DBT. Hence, synthesized 2D images are comparable to FFDM, with a reduction in radiation dose, within the limits of a multi-ethnic Malaysian population.
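For reference, the diagnostic indices reported above follow directly from a 2x2 confusion matrix against the reference standard. The short example below shows the arithmetic; the counts are invented for illustration and are not the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Return (sensitivity, specificity, PPV, NPV) from confusion counts."""
    sensitivity = tp / (tp + fn)   # proportion of reference-positives detected
    specificity = tn / (tn + fp)   # proportion of reference-negatives cleared
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Illustrative counts for C-View + DBT read against FFDM + DBT (not study data).
sens, spec, ppv, npv = diagnostic_metrics(tp=90, fp=5, fn=4, tn=281)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} PPV={ppv:.3f} NPV={npv:.3f}")
```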
Collapse
Affiliation(s)
- N Ab Mumin
- Biomedical Imaging Department, University Malaya Research Imaging Centre, Kuala Lumpur, Malaysia
- Radiology Department, Faculty of Medicine, Universiti Teknologi MARA, Selangor, Malaysia
| | - K Rahmat
- Biomedical Imaging Department, University Malaya Research Imaging Centre, Kuala Lumpur, Malaysia.
| | - F Fadzli
- Biomedical Imaging Department, University Malaya Research Imaging Centre, Kuala Lumpur, Malaysia
| | - M T Ramli
- Biomedical Imaging Department, University Malaya Research Imaging Centre, Kuala Lumpur, Malaysia
- Radiology Department, Faculty of Medicine, Universiti Teknologi MARA, Selangor, Malaysia
| | - C J Westerhout
- Biomedical Imaging Department, University Malaya Research Imaging Centre, Kuala Lumpur, Malaysia
| | - N Ramli
- Biomedical Imaging Department, University Malaya Research Imaging Centre, Kuala Lumpur, Malaysia
| | - F I Rozalli
- Biomedical Imaging Department, University Malaya Research Imaging Centre, Kuala Lumpur, Malaysia
| | - K H Ng
- Biomedical Imaging Department, University Malaya Research Imaging Centre, Kuala Lumpur, Malaysia
| |
Collapse
|
48
|
[Establishment of a deep feature-based classification model for distinguishing benign and malignant breast tumors on full-field digital mammography]. NAN FANG YI KE DA XUE XUE BAO = JOURNAL OF SOUTHERN MEDICAL UNIVERSITY 2019; 39:88-92. [PMID: 30692072 PMCID: PMC6765570 DOI: 10.12122/j.issn.1673-4254.2019.01.14] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 01/23/2023]
Abstract
OBJECTIVE To develop a deep feature-based model to classify benign and malignant breast lesions on full-field digital mammography. METHODS Full-field digital mammograms in both the craniocaudal and mediolateral oblique views from 106 patients with breast neoplasms were analyzed. Twenty-three handcrafted features (HCF) were extracted from the images of the breast tumors, and a suitable HCF subset was selected using the t-test. Deep features (DF) were extracted from 3 pretrained deep learning models, namely AlexNet, VGG16, and GoogLeNet. To exploit the abundant breast tumor information in the craniocaudal and mediolateral oblique views, the features extracted from the two views (DF and HCF) were combined as two-view features. A multi-classifier model was finally constructed based on the combined HCF and DF sets. The classification ability of the different deep learning networks was evaluated. RESULTS Quantitative evaluation showed that the proposed HCF + DF model outperformed the HCF-only model, and AlexNet produced the best performance among the 3 deep learning models. CONCLUSIONS The proposed model combining DF and HCF sets of breast tumors can effectively distinguish benign from malignant breast lesions on full-field digital mammography.
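One plausible reading of the HCF + DF scheme is sketched below: toy handcrafted statistics are concatenated with deep features from a pretrained AlexNet and fed to a single classifier. The specific handcrafted features, the SVM classifier, and all data here are illustrative assumptions rather than the paper's exact multi-classifier pipeline.

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import SVC

# Pretrained AlexNet with the final 1000-way layer dropped -> 4096-d features.
alexnet = models.alexnet(weights="IMAGENET1K_V1")
alexnet.classifier = alexnet.classifier[:-1]
alexnet.eval()

def handcrafted(img):
    """Toy handcrafted features: mean, std, and gradient energy of a 2-D ROI."""
    gy, gx = np.gradient(img)
    return np.array([img.mean(), img.std(), (gx**2 + gy**2).mean()])

@torch.no_grad()
def deep(img):
    """4096-d AlexNet features for one grayscale ROI replicated to 3 channels."""
    x = torch.as_tensor(img, dtype=torch.float32)[None, None].repeat(1, 3, 1, 1)
    return alexnet(x).numpy().ravel()

# Placeholder ROIs standing in for craniocaudal/mediolateral-oblique patches.
rois = [np.random.rand(224, 224) for _ in range(20)]
labels = np.tile([0, 1], 10)   # 0 = benign, 1 = malignant (invented)

# Concatenate HCF and DF per ROI, then train a single classifier on the fusion.
X = np.stack([np.concatenate([handcrafted(r), deep(r)]) for r in rois])
clf = SVC(kernel="linear", probability=True).fit(X, labels)
print("malignancy score:", clf.predict_proba(X[:1])[0, 1])
```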
Collapse
|
49
|
|