1. Tong Y, Hu Z, Wang H, Huang J, Zhan Y, Chai W, Deng Y, Yuan Y, Shen K, Wang Y, Chen X, Yu J. Anti-HER2 therapy response assessment for guiding treatment (de-)escalation in early HER2-positive breast cancer using a novel deep learning radiomics model. Eur Radiol 2024; 34:5477-5486. PMID: 38329503; PMCID: PMC11255056; DOI: 10.1007/s00330-024-10609-7.
Abstract
OBJECTIVES Anti-HER2 targeted therapy significantly reduces the risk of relapse in HER2+ breast cancer. New measures are needed for precise risk stratification to guide (de-)escalation of anti-HER2 strategies. METHODS A total of 726 HER2+ cases who received no, single, or dual anti-HER2 targeted therapies were split into three respective cohorts. A deep learning model (DeepTEPP) based on preoperative breast magnetic resonance (MR) imaging was developed. Patients were scored and categorized into low-, moderate-, and high-risk groups. Recurrence-free survival (RFS) was compared across risk groups according to the anti-HER2 treatment received, to validate the value of DeepTEPP in predicting treatment efficacy and guiding anti-HER2 strategy. RESULTS DeepTEPP was capable of risk stratification and of guiding anti-HER2 treatment strategy: DeepTEPP-Low patients (60.5%) did not derive significant RFS benefit from trastuzumab (p = 0.144), supporting anti-HER2 de-escalation. DeepTEPP-Moderate patients (19.8%) benefited significantly from trastuzumab (p = 0.048) but did not obtain additional improvement from pertuzumab (p = 0.125). DeepTEPP-High patients (19.7%) benefited significantly from dual HER2 blockade (p = 0.045), suggesting anti-HER2 escalation. CONCLUSIONS DeepTEPP is a pioneering MR-based deep learning model that enables non-invasive prediction of adjuvant anti-HER2 effectiveness, thereby providing valuable guidance for anti-HER2 (de-)escalation strategies. DeepTEPP provides an important reference for choosing appropriate individualized treatment in HER2+ breast cancer patients, warranting prospective validation. CLINICAL RELEVANCE STATEMENT We built an MR-based deep learning model, DeepTEPP, which enables non-invasive prediction of adjuvant anti-HER2 effectiveness, thus guiding anti-HER2 (de-)escalation strategies in early HER2-positive breast cancer patients.
KEY POINTS • DeepTEPP is able to predict anti-HER2 effectiveness and to guide treatment (de-)escalation. • DeepTEPP demonstrated strong prognostic efficacy for recurrence-free survival and overall survival. • To our knowledge, this is one of very few studies, and the largest, to test the efficacy of a deep learning model derived from breast MR images for predicting HER2-positive breast cancer survival and anti-HER2 therapy effectiveness.
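The three-way risk grouping described above amounts to thresholding a continuous model score; a minimal sketch, assuming hypothetical cutoffs (the published DeepTEPP thresholds are not given here):

```python
# Hypothetical sketch: stratify patients into low/moderate/high risk groups
# from a continuous model score, as DeepTEPP does. The cutoffs below are
# illustrative placeholders, not the published thresholds.

LOW_CUTOFF = 0.3   # assumed, for illustration only
HIGH_CUTOFF = 0.7  # assumed, for illustration only

def risk_group(score: float) -> str:
    """Map a model score in [0, 1] to a treatment-guidance group."""
    if score < LOW_CUTOFF:
        return "low"       # candidate for anti-HER2 de-escalation
    if score < HIGH_CUTOFF:
        return "moderate"  # single-agent trastuzumab benefit
    return "high"          # candidate for dual HER2 blockade

groups = [risk_group(s) for s in (0.12, 0.55, 0.91)]
```

In the study, RFS is then compared within each group by received treatment; here only the grouping step is sketched.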
Affiliations
- Yiwei Tong: Department of General Surgery, Comprehensive Breast Health Center, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, 197 Ruijin Er Road, Shanghai, 200025, China
- Zhaoyu Hu: School of Information Science and Technology, Fudan University, No. 220, Handan Road, Yangpu District, Shanghai, 200433, China
- Haoyu Wang: Department of General Surgery, Comprehensive Breast Health Center, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, 197 Ruijin Er Road, Shanghai, 200025, China
- Jiahui Huang: Department of General Surgery, Comprehensive Breast Health Center, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, 197 Ruijin Er Road, Shanghai, 200025, China
- Ying Zhan: Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Weimin Chai: Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Yinhui Deng: School of Information Science and Technology, Fudan University, No. 220, Handan Road, Yangpu District, Shanghai, 200433, China
- Ying Yuan: Department of Radiology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Kunwei Shen: Department of General Surgery, Comprehensive Breast Health Center, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, 197 Ruijin Er Road, Shanghai, 200025, China
- Yuanyuan Wang: School of Information Science and Technology, Fudan University, No. 220, Handan Road, Yangpu District, Shanghai, 200433, China
- Xiaosong Chen: Department of General Surgery, Comprehensive Breast Health Center, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, 197 Ruijin Er Road, Shanghai, 200025, China
- Jinhua Yu: School of Information Science and Technology, Fudan University, No. 220, Handan Road, Yangpu District, Shanghai, 200433, China

2. Díaz O, Rodríguez-Ruíz A, Sechopoulos I. Artificial Intelligence for breast cancer detection: Technology, challenges, and prospects. Eur J Radiol 2024; 175:111457. PMID: 38640824; DOI: 10.1016/j.ejrad.2024.111457.
Abstract
PURPOSE This review provides an overview of the current state of artificial intelligence (AI) technology for automated detection of breast cancer in digital mammography (DM) and digital breast tomosynthesis (DBT). It aims to discuss the technology, available AI systems, and the challenges faced by AI in breast cancer screening. METHODS The review examines the development of AI technology in breast cancer detection, focusing on deep learning (DL) techniques and their differences from traditional computer-aided detection (CAD) systems. It discusses data pre-processing, learning paradigms, and the need for independent validation approaches. RESULTS DL-based AI systems have shown significant improvements in breast cancer detection. They have the potential to enhance screening outcomes, reduce false negatives and positives, and detect subtle abnormalities missed by human observers. However, challenges like the lack of standardised datasets, potential bias in training data, and regulatory approval hinder their widespread adoption. CONCLUSIONS AI technology has the potential to improve breast cancer screening by increasing accuracy and reducing radiologist workload. DL-based AI systems show promise in enhancing detection performance and eliminating variability among observers. Standardised guidelines and trustworthy AI practices are necessary to ensure fairness, traceability, and robustness. Further research and validation are needed to establish clinical trust in AI. Collaboration between researchers, clinicians, and regulatory bodies is crucial to address challenges and promote AI implementation in breast cancer screening.
Affiliations
- Oliver Díaz: Artificial Intelligence in Medicine Laboratory, Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Spain; Computer Vision Center, Barcelona, Spain
- Ioannis Sechopoulos: Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands; Dutch Expert Centre for Screening (LRCB), Nijmegen, the Netherlands; Technical Medicine Center, University of Twente, Enschede, the Netherlands

3. Lo Gullo R, Brunekreef J, Marcus E, Han LK, Eskreis-Winkler S, Thakur SB, Mann R, Groot Lipman K, Teuwen J, Pinker K. AI Applications to Breast MRI: Today and Tomorrow. J Magn Reson Imaging 2024. PMID: 38581127; DOI: 10.1002/jmri.29358.
Abstract
There is an unrelenting increase in the demand for breast imaging services, partly explained by continuously expanding imaging indications in breast diagnosis and treatment. As the human workforce providing these services is not growing at the same rate, the implementation of artificial intelligence (AI) in breast imaging has gained significant momentum to maximize workflow efficiency and increase productivity while concurrently improving diagnostic accuracy and patient outcomes. Thus far, the implementation of AI in breast imaging is at the most advanced stage with mammography and digital breast tomosynthesis techniques, followed by ultrasound, whereas the implementation of AI in breast magnetic resonance imaging (MRI) is not moving along as rapidly due to the complexity of MRI examinations and fewer available datasets. Nevertheless, there is persisting interest in AI-enhanced breast MRI applications, even as the use of and indications for breast MRI continue to expand. This review presents an overview of the basic concepts of AI imaging analysis and subsequently reviews the use cases for AI-enhanced MRI interpretation, that is, breast MRI triaging and lesion detection, lesion classification, prediction of treatment response, risk assessment, and image quality. Finally, it provides an outlook on the barriers and facilitators for the adoption of AI in breast MRI. LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY: Stage 6.
Affiliations
- Roberto Lo Gullo: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, New York, USA
- Joren Brunekreef: AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Eric Marcus: AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Lynn K Han: Weill Cornell Medical College, New York-Presbyterian Hospital, New York City, New York, USA
- Sarah Eskreis-Winkler: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, New York, USA
- Sunitha B Thakur: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, New York, USA; Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, New York, USA
- Ritse Mann: AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands; Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
- Kevin Groot Lipman: AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands; Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
- Jonas Teuwen: AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands; Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
- Katja Pinker: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, New York, USA

4. Du Y, Wang D, Liu M, Zhang X, Ren W, Sun J, Yin C, Yang S, Zhang L. Study on the differential diagnosis of benign and malignant breast lesions using a deep learning model based on multimodal images. J Cancer Res Ther 2024; 20:625-632. PMID: 38687933; DOI: 10.4103/jcrt.jcrt_1796_23.
Abstract
OBJECTIVE To establish a multimodal model for distinguishing benign and malignant breast lesions. MATERIALS AND METHODS Clinical data, mammography, and MRI images (including T2WI, diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC), and DCE-MRI images) of 132 patients with benign lesions or breast cancer were analyzed retrospectively. The region of interest (ROI) in each image was marked and segmented using MATLAB software. Mammography, T2WI, DWI, ADC, and DCE-MRI models based on the ResNet34 network were trained. Using an integrated learning method, the five models were used as base models, and a voting method was used to construct a multimodal model. The dataset was divided into a training set and a prediction set. The accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of each model were calculated. Diagnostic efficacy was analyzed using receiver operating characteristic (ROC) curves and the area under the curve (AUC), and differences were assessed with the DeLong test, with statistical significance set at P < 0.05. RESULTS We evaluated the ability of the model to classify benign and malignant tumors using the test set. The AUC values of the multimodal, mammography, T2WI, DWI, ADC, and DCE-MRI models were 0.943, 0.645, 0.595, 0.905, 0.900, and 0.865, respectively. The diagnostic ability of the multimodal model was significantly higher than that of the mammography and T2WI models, but did not differ significantly from that of the DWI, ADC, and DCE-MRI models. CONCLUSION Our deep learning model based on multimodal image training has practical value for the diagnosis of benign and malignant breast lesions.
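The hard-voting fusion step described in the abstract can be sketched as follows; the per-model predictions are made up for illustration, and the paper's exact voting scheme may differ:

```python
from collections import Counter

# Illustrative sketch of ensemble voting: five unimodal classifiers
# (mammography, T2WI, DWI, ADC, DCE-MRI) each emit a benign(0)/malignant(1)
# label, and the multimodal prediction is the majority vote.

def majority_vote(predictions):
    """Return the most common label among the unimodal predictions."""
    return Counter(predictions).most_common(1)[0][0]

# Invented per-modality outputs for one lesion
unimodal = {"mammography": 0, "T2WI": 0, "DWI": 1, "ADC": 1, "DCE-MRI": 1}
fused = majority_vote(unimodal.values())  # three of five vote malignant
```

With an odd number of base models, a hard vote never ties, which is one practical reason to fuse exactly five unimodal classifiers.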
Affiliations
- Yanan Du: Department of Health Management, The First Affiliated Hospital of Shandong First Medical University and Qianfoshan Hospital, Jinan City, Shandong Province, China
- Dawei Wang: Department of Health Management, Shandong University of Traditional Chinese Medicine, Jinan City, Shandong Province, China
- Menghan Liu: Department of Health Management, The First Affiliated Hospital of Shandong First Medical University and Qianfoshan Hospital, Jinan City, Shandong Province, China
- Xiaodong Zhang: Postgraduate Department, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan City, Shandong Province, China
- Wanqing Ren: Postgraduate Department, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan City, Shandong Province, China; Department of Radiology, The First Affiliated Hospital of Shandong First Medical University and Qianfoshan Hospital, Jinan City, Shandong Province, China
- Jingxiang Sun: Postgraduate Department, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan City, Shandong Province, China; Department of Radiology, The First Affiliated Hospital of Shandong First Medical University and Qianfoshan Hospital, Jinan City, Shandong Province, China
- Chao Yin: Department of Radiology, Yantai Taocun Central Hospital, Yantai City, Shandong Province, China
- Shiwei Yang: Department of Anorectal Surgery, The First Affiliated Hospital of Shandong First Medical University and Qianfoshan Hospital, Jinan City, Shandong Province, China
- Li Zhang: Department of Pharmacology, Jinan Central Hospital Affiliated to Shandong First Medical University, Jinan City, Shandong Province, China

5. Yan S, Li J, Wu W. Artificial intelligence in breast cancer: application and future perspectives. J Cancer Res Clin Oncol 2023; 149:16179-16190. PMID: 37656245; DOI: 10.1007/s00432-023-05337-2.
Abstract
Breast cancer is one of the most common cancers and one of the leading causes of cancer-related death in women worldwide. Early diagnosis and treatment are key to a favorable prognosis. The application of artificial intelligence (AI) technology in the medical field is increasingly extensive, including image analysis, automated diagnosis, intelligent pharmaceutical systems, personalized treatment, and more. AI-based breast cancer imaging, pathology, and adjuvant therapy technologies can not only reduce the workload of clinicians but also continuously improve the accuracy and sensitivity of breast cancer diagnosis and treatment. This paper reviews the application of AI in breast cancer, looks ahead to the challenges facing AI for breast cancer detection and therapy, and aims to provide directions for future research.
Affiliations
- Shuixin Yan: The Affiliated Lihuili Hospital of Ningbo University, Ningbo, 315000, Zhejiang, China
- Jiadi Li: The Affiliated Lihuili Hospital of Ningbo University, Ningbo, 315000, Zhejiang, China
- Weizhu Wu: The Affiliated Lihuili Hospital of Ningbo University, Ningbo, 315000, Zhejiang, China

6. Zhang Y, Liu YL, Nie K, Zhou J, Chen Z, Chen JH, Wang X, Kim B, Parajuli R, Mehta RS, Wang M, Su MY. Deep Learning-based Automatic Diagnosis of Breast Cancer on MRI Using Mask R-CNN for Detection Followed by ResNet50 for Classification. Acad Radiol 2023; 30 Suppl 2:S161-S171. PMID: 36631349; PMCID: PMC10515321; DOI: 10.1016/j.acra.2022.12.038.
Abstract
RATIONALE AND OBJECTIVES Diagnosis of breast cancer on MRI requires, first, the identification of suspicious lesions and, second, their characterization to give a diagnostic impression. We implemented a Mask Region-based Convolutional Neural Network (Mask R-CNN) to detect abnormal lesions, followed by ResNet50 to estimate the malignancy probability. MATERIALS AND METHODS Two datasets were used. The first set had 176 cases: 103 cancer and 73 benign. The second set had 84 cases: 53 cancer and 31 benign. For detection, the pre-contrast image and the subtraction images of the left and right breasts were used as inputs, so that symmetry could be considered. Each detected suspicious area was characterized by ResNet50 using three DCE parametric maps as inputs. The results obtained from slice-based analyses were combined to give a lesion-based diagnosis. RESULTS In the first dataset, 101 of 103 cancers were detected by Mask R-CNN as suspicious, and 99 of 101 were correctly classified by ResNet50 as cancer, for a sensitivity of 99/103 = 96%. 48 of 73 benign lesions and 131 normal areas were identified as suspicious; after classification by ResNet50, only 16 benign lesions and 16 normal areas remained classified as malignant. The second dataset was used for independent testing; the sensitivity was 43/53 = 81%. Of the 121 identified non-cancerous regions, only 6 of 31 benign lesions and 22 normal tissue areas were classified as malignant. CONCLUSION ResNet50 could eliminate approximately 80% of the false positives detected by Mask R-CNN. Combining Mask R-CNN and ResNet50 has the potential to yield a fully automatic computer-aided diagnostic system for breast cancer on MRI.
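A schematic of the two-stage logic (detector proposes candidates, classifier filters them, and slice-level scores are combined into a lesion-level call) might look like the following pure-Python sketch; the threshold, probabilities, and the max rule across slices are illustrative assumptions, not the paper's exact implementation:

```python
# Toy sketch of a two-stage detect-then-classify pipeline. In the study,
# stage 1 is Mask R-CNN and stage 2 is ResNet50 on DCE parametric maps;
# here both stages are replaced by invented numbers to show the data flow.

MALIGNANCY_THRESHOLD = 0.5  # assumed decision cutoff

def lesion_level_diagnosis(slice_probs):
    """Combine per-slice malignancy probabilities into one lesion decision."""
    lesion_prob = max(slice_probs)  # call malignant if any slice is suspicious
    return "malignant" if lesion_prob >= MALIGNANCY_THRESHOLD else "benign"

# Stage-1 detector output: candidate lesion -> stage-2 probability per slice
candidates = {
    "lesion_A": [0.21, 0.34, 0.82],  # kept: one slice crosses the threshold
    "lesion_B": [0.10, 0.18, 0.25],  # rejected: classifier filters the false positive
}
calls = {name: lesion_level_diagnosis(p) for name, p in candidates.items()}
```

The classifier's role here mirrors the paper's finding: most detector false positives are removed at the second stage.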
Affiliations
- Yang Zhang: Department of Radiological Sciences, University of California, Irvine, California; Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, New Jersey
- Yan-Lin Liu: Department of Radiological Sciences, University of California, Irvine, California
- Ke Nie: Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, New Jersey
- Jiejie Zhou: Department of Radiology, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Zhongwei Chen: Department of Radiology, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Jeon-Hor Chen: Department of Radiological Sciences, University of California, Irvine, California; Department of Radiology, E-Da Hospital and I-Shou University, Kaohsiung, Taiwan
- Xiao Wang: Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, New Jersey
- Bomi Kim: Department of Radiological Sciences, University of California, Irvine, California; Department of Breast Radiology, Ilsan Hospital, Goyang, South Korea
- Ritesh Parajuli: Department of Medicine, University of California, Irvine, United States
- Rita S Mehta: Department of Medicine, University of California, Irvine, United States
- Meihao Wang: Department of Radiology, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Min-Ying Su: Department of Radiological Sciences, University of California, Irvine, California; Department of Medical Imaging and Radiological Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan

7. Sun R, Zhang X, Xie Y, Nie S. Weakly supervised breast lesion detection in DCE-MRI using self-transfer learning. Med Phys 2023; 50:4960-4972. PMID: 36820793; DOI: 10.1002/mp.16296.
Abstract
BACKGROUND Breast cancer is a commonly diagnosed and life-threatening cancer in women. Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is increasingly used for breast lesion detection and diagnosis because of its high soft-tissue resolution. Supervised detection methods have been implemented for breast lesion detection, but they require substantial time and specialized staff to produce labeled training samples. PURPOSE To investigate the potential of weakly supervised deep learning models for breast lesion detection. METHODS A total of 1003 breast DCE-MRI studies were collected, including 603 abnormal cases with 770 breast lesions and 400 normal subjects. The proposed model was trained on breast DCE-MRI using only image-level labels (normal and abnormal) and was optimized for the classification and detection sub-tasks simultaneously. Ablation experiments evaluated different convolutional neural network (CNN) backbones (VGG19 and ResNet50) as shared convolutional layers, as well as the effect of the preprocessing methods. RESULTS Our weakly supervised model performed better with VGG19 than with ResNet50 (p < 0.05). The average precision (AP) of the classification sub-task was 91.7% for abnormal cases and 88.0% for normal samples. The area under the receiver operating characteristic (ROC) curve (AUC) was 0.939 (95% confidence interval [CI]: 0.920-0.941). The weakly supervised detection task AP was 85.7%, and the correct location (CorLoc) rate was 90.2%. A sensitivity of 84.0% at two false positives per image was measured on the free-response ROC (FROC) curve. CONCLUSIONS The results confirm that a weakly supervised CNN based on self-transfer learning is an effective and promising auxiliary tool for detecting breast lesions.
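The AUC reported for the classification sub-task can be computed from image-level labels and model scores with the rank (Mann-Whitney) identity, without any plotting library; a minimal sketch with invented scores:

```python
# AUC via the Mann-Whitney identity:
# AUC = P(score of a random abnormal exam > score of a random normal exam),
# counting ties as half a win. Labels/scores below are invented examples.

def roc_auc(labels, scores):
    """Compute ROC AUC from binary labels (1=abnormal) and model scores."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
auc = roc_auc(labels, scores)  # 8 of 9 positive/negative pairs ranked correctly
```

This pairwise count is exactly what `sklearn.metrics.roc_auc_score` computes; the hand-rolled version just makes the definition explicit.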
Affiliations
- Rong Sun: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Xiaobing Zhang: Department of Radiology, Ruijin Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
- Yuanzhong Xie: Medical Imaging Center, Taian Center Hospital, Shandong, China
- Shengdong Nie: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China

8. Adam R, Dell'Aquila K, Hodges L, Maldjian T, Duong TQ. Deep learning applications to breast cancer detection by magnetic resonance imaging: a literature review. Breast Cancer Res 2023; 25:87. PMID: 37488621; PMCID: PMC10367400; DOI: 10.1186/s13058-023-01687-4.
Abstract
Deep learning analysis of radiological images has the potential to improve the diagnostic accuracy of breast cancer, ultimately leading to better patient outcomes. This paper systematically reviewed the current literature on deep learning detection of breast cancer based on magnetic resonance imaging (MRI). The literature search covered 2015 to Dec 31, 2022, using PubMed; other databases included Semantic Scholar, ACM Digital Library, Google search, Google Scholar, and preprint repositories (such as Research Square). Articles that did not use deep learning (such as texture analysis) were excluded. PRISMA reporting guidelines were followed. We analyzed the deep learning algorithms, methods of analysis, experimental designs, MRI image types, types of ground truth, sample sizes, numbers of benign and malignant lesions, and performance reported in the literature. We discuss lessons learned, challenges to broad deployment in clinical practice, and suggested future research directions.
Affiliations
- Richard Adam: Department of Radiology, Albert Einstein College of Medicine and the Montefiore Medical Center, 1300 Morris Park Avenue, Bronx, NY, 10461, USA
- Kevin Dell'Aquila: Department of Radiology, Albert Einstein College of Medicine and the Montefiore Medical Center, 1300 Morris Park Avenue, Bronx, NY, 10461, USA
- Laura Hodges: Department of Radiology, Albert Einstein College of Medicine and the Montefiore Medical Center, 1300 Morris Park Avenue, Bronx, NY, 10461, USA
- Takouhie Maldjian: Department of Radiology, Albert Einstein College of Medicine and the Montefiore Medical Center, 1300 Morris Park Avenue, Bronx, NY, 10461, USA
- Tim Q Duong: Department of Radiology, Albert Einstein College of Medicine and the Montefiore Medical Center, 1300 Morris Park Avenue, Bronx, NY, 10461, USA

9. Burger B, Bernathova M, Seeböck P, Singer CF, Helbich TH, Langs G. Deep learning for predicting future lesion emergence in high-risk breast MRI screening: a feasibility study. Eur Radiol Exp 2023; 7:32. PMID: 37280478; DOI: 10.1186/s41747-023-00343-y.
Abstract
BACKGROUND International societies have issued guidelines for high-risk breast cancer (BC) screening, recommending contrast-enhanced magnetic resonance imaging (CE-MRI) of the breast as a supplemental diagnostic tool. In our study, we tested the applicability of deep learning-based anomaly detection to identify anomalous changes in negative breast CE-MRI screens associated with future lesion emergence. METHODS In this prospective study, we trained a generative adversarial network on dynamic CE-MRI of 33 high-risk women who participated in a screening program but did not develop BC. We defined an anomaly score as the deviation of an observed CE-MRI scan from the model of normal breast tissue variability. We evaluated the anomaly score's association with future lesion emergence on the level of local image patches (104,531 normal patches, 455 patches of future lesion location) and entire CE-MRI exams (21 normal, 20 with future lesion). Associations were analyzed by receiver operating characteristic (ROC) curves on the patch level and logistic regression on the examination level. RESULTS The local anomaly score on image patches was a good predictor for future lesion emergence (area under the ROC curve 0.804). An exam-level summary score was significantly associated with the emergence of lesions at any location at a later time point (p = 0.045). CONCLUSIONS Breast cancer lesions are associated with anomalous appearance changes in breast CE-MRI occurring before the lesion emerges in high-risk women. These early image signatures are detectable and may be a basis for adjusting individual BC risk and personalized screening. RELEVANCE STATEMENT Anomalies in screening MRI preceding lesion emergence in women at high risk of breast cancer may inform individualized screening and intervention strategies. KEY POINTS • Breast lesions are associated with preceding anomalies in CE-MRI of high-risk women. • Deep learning-based anomaly detection can help to adjust risk assessment for future lesions. • An appearance anomaly score may be used for adjusting screening interval times.
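The anomaly-scoring idea above can be illustrated with a toy sketch: score each patch by its deviation from a model of normal tissue. Here the "model" is simply the element-wise mean of normal patches and the score is mean squared error; the study itself uses a generative adversarial network trained on normal CE-MRI, and all numbers below are invented.

```python
# Toy anomaly score: deviation of a patch from a model of normal tissue.
# Patches are flattened intensity vectors; the normal model is their mean.

def mean_patch(patches):
    """Element-wise mean of a list of equal-length intensity vectors."""
    n = len(patches)
    return [sum(p[i] for p in patches) / n for i in range(len(patches[0]))]

def anomaly_score(patch, normal_model):
    """Mean squared deviation of a patch from the normal-tissue model."""
    return sum((a - b) ** 2 for a, b in zip(patch, normal_model)) / len(patch)

normal_patches = [[0.1, 0.2, 0.1], [0.2, 0.1, 0.2], [0.1, 0.1, 0.1]]
model = mean_patch(normal_patches)
low = anomaly_score([0.1, 0.15, 0.15], model)  # looks like normal tissue
high = anomaly_score([0.9, 0.8, 0.95], model)  # future-lesion-like patch
```

Ranking patches by such a score and sweeping a threshold is what produces the patch-level ROC curve reported in the study.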
Affiliations
- Bianca Burger: Department of Biomedical Imaging and Image-Guided Therapy, Division of Computational Imaging Research (CIR), Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- Maria Bernathova: Department of Biomedical Imaging and Image-Guided Therapy, Division of General and Pediatric Radiology, Medical University of Vienna, Vienna, Austria
- Philipp Seeböck: Department of Biomedical Imaging and Image-Guided Therapy, Division of Computational Imaging Research (CIR), Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- Christian F Singer: Department of Obstetrics and Gynecology, Division of Special Gynecology, Medical University of Vienna, Vienna, Austria; Comprehensive Cancer Center, Medical University of Vienna, Vienna, Austria
- Thomas H Helbich: Department of Biomedical Imaging and Image-Guided Therapy, Division of General and Pediatric Radiology, Medical University of Vienna, Vienna, Austria
- Georg Langs: Department of Biomedical Imaging and Image-Guided Therapy, Division of Computational Imaging Research (CIR), Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA

10. Zhao X, Bai JW, Guo Q, Ren K, Zhang GJ. Clinical applications of deep learning in breast MRI. Biochim Biophys Acta Rev Cancer 2023; 1878:188864. PMID: 36822377; DOI: 10.1016/j.bbcan.2023.188864.
Abstract
Deep learning (DL) is one of the most powerful data-driven machine-learning techniques in artificial intelligence (AI). It can automatically learn from raw data without manual feature selection. DL models have led to remarkable advances in data extraction and analysis for medical imaging. Magnetic resonance imaging (MRI) has proven useful in delineating the characteristics and extent of breast lesions and tumors. This review summarizes the current state-of-the-art applications of DL models in breast MRI. Many recent DL models were examined in this field, along with several advanced learning approaches and methods for data normalization and breast and lesion segmentation. For clinical applications, DL-based breast MRI models were proven useful in five aspects: diagnosis of breast cancer, classification of molecular types, classification of histopathological types, prediction of neoadjuvant chemotherapy response, and prediction of lymph node metastasis. For subsequent studies, further improvement in data acquisition and preprocessing is necessary, additional DL techniques in breast MRI should be investigated, and wider clinical applications need to be explored.
Affiliations
- Xue Zhao: Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Jing-Wen Bai: Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Oncology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China
- Qiu Guo: Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Ke Ren: Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Guo-Jun Zhang: Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China
11
|
Yang Z, Cong C, Pagnucco M, Song Y. Multi-scale multi-reception attention network for bone age assessment in X-ray images. Neural Netw 2023; 158:249-257. [PMID: 36473292 DOI: 10.1016/j.neunet.2022.11.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2022] [Revised: 10/18/2022] [Accepted: 11/03/2022] [Indexed: 11/16/2022]
Abstract
Bone age assessment plays a significant role in estimating bone maturity. However, radiograph/X-ray images of hand bones contain a large amount of redundant information. Some detection- or segmentation-based methods have recently been proposed to address this issue, but their network structures are often highly complex and may require extra annotations, which makes them less applicable in practice. In this paper, we present a Multi-scale Multi-reception Attention Net (MMANet), which combines a novel Multi-scale Multi-reception Complement Attention (MMCA) network and a graph attention module with a ResNet backbone to enhance the feature representation of key regions and suppress the influence of background regions, achieving significant performance improvement. Experimental results show that MMANet accurately detects key regions and achieves a mean absolute error (MAE) of 3.88 months on the RSNA 2017 Paediatric Bone Age Challenge dataset. Without explicit modelling of anatomical information, our method outperforms the current state-of-the-art method (MAE = 3.91), which requires extra annotations, by 0.03 months. Code is available at https://github.com/yzc1122333/BoneAgeAss.
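The reported metric, mean absolute error in months, is straightforward to compute; a minimal sketch (the ages below are made up for illustration, not from the challenge dataset):

```python
def mean_absolute_error(y_true, y_pred):
    """Mean absolute error: average of |prediction - ground truth|."""
    assert len(y_true) == len(y_pred) and y_true
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical bone ages in months.
truth = [120.0, 96.0, 150.0]
preds = [123.0, 92.0, 151.0]
mae = mean_absolute_error(truth, preds)  # (3 + 4 + 1) / 3
```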
Affiliation(s)
- Zhichao Yang: School of Computer Science and Engineering, University of New South Wales, Australia
- Cong Cong: School of Computer Science and Engineering, University of New South Wales, Australia
- Maurice Pagnucco: School of Computer Science and Engineering, University of New South Wales, Australia
- Yang Song: School of Computer Science and Engineering, University of New South Wales, Australia
12
Effects of Image Quality on the Accuracy of Human Pose Estimation and Detection of Eye Lid Opening/Closing Using Openpose and DLib. J Imaging 2022; 8:jimaging8120330. [PMID: 36547495 PMCID: PMC9783075 DOI: 10.3390/jimaging8120330] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2022] [Revised: 11/25/2022] [Accepted: 12/15/2022] [Indexed: 12/23/2022] Open
Abstract
OBJECTIVE The application of computer models in continuous patient activity monitoring using video cameras is complicated by the capture of images of varying quality due to poor lighting conditions and low image resolution. Few studies have assessed the effects of image resolution, color depth, noise level, and low light on the inference of eye opening and closing and body landmarks from digital images. METHOD This study systematically assessed the effects of varying image resolutions (from 100 × 100 to 20 × 20 pixels at an interval of 10 pixels), lighting conditions (from 42 to 2 lux at an interval of 2 lux), color depths (from 16.7 M colors to 8 M, 1 M, 512 K, 216 K, 64 K, 8 K, 1 K, 729, 512, 343, 216, 125, 64, 27, and 8 colors), and noise levels on the accuracy and model performance of eye dimension estimation and body keypoint localization using the Dlib library and OpenPose, with images from the Closed Eyes in the Wild and COCO datasets as well as photographs of the face captured at different light intensities. RESULTS Model accuracy and the rate of model failure remained acceptable at an image resolution of 60 × 60 pixels, a color depth of 343 colors, a light intensity of 14 lux, and a Gaussian noise level of 4% (i.e., 4% of pixels replaced by Gaussian noise). CONCLUSIONS The Dlib and OpenPose models failed to detect eye dimensions and body keypoints only at low image resolutions, lighting conditions, and color depths. CLINICAL IMPACT Our established baseline threshold values will be useful for future work in the application of computer vision to continuous patient monitoring.
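Two of the degradations the study sweeps over, color depth and pixel-level noise, can be illustrated with simple stand-in functions; the parameter choices below are assumptions for illustration, not the study's exact pipeline (e.g. the noise here is uniform replacement, a crude stand-in for the Gaussian corruption used):

```python
import random

def quantize_channel(value, levels):
    """Quantize an 8-bit channel value to `levels` discrete levels.
    levels=7 per channel gives 7**3 = 343 colors, one of the paper's depths."""
    step = 255 / (levels - 1)
    return int(round(round(value / step) * step))

def add_pixel_noise(pixels, fraction, seed=0):
    """Replace a fraction of pixel values with random values (illustrative
    stand-in for the paper's 'fraction of pixels replaced by noise' setup)."""
    rng = random.Random(seed)
    out = list(pixels)
    for i in rng.sample(range(len(out)), int(len(out) * fraction)):
        out[i] = rng.randint(0, 255)
    return out
```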
13
Xue H, Qian G, Wu X, Gao Y, Yang H, Liu M, Wang L, Chen R, Wang P. A coarse-to-fine and automatic algorithm for breast diagnosis on multi-series MRI images. FRONTIERS IN COMPUTER SCIENCE 2022. [DOI: 10.3389/fcomp.2022.1054158] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Abstract
Introduction Early breast carcinomas can be effectively diagnosed and controlled. However, diagnosis demands extra work, and radiologists in China often work overtime because of the large number of patients; even experienced radiologists can make mistakes when overloaded. To improve efficiency and reduce the rate of misdiagnosis, automatic breast diagnosis on Magnetic Resonance Imaging (MRI) images is vital yet challenging for breast disease screening and successful treatment planning. Several obstacles hinder the development of automatic approaches, such as class imbalance of samples and hard mimics of lesions. In this paper, we propose a coarse-to-fine algorithm to address these problems in automatic breast diagnosis on multi-series MRI images. The algorithm uses deep learning techniques to provide breast segmentation, tumor segmentation, and tumor classification, thus supporting doctors' decisions in clinical practice. Methods In the proposed algorithm, a DenseUNet is first employed to extract breast-related regions by removing irrelevant parts in the thoracic cavity. Then, taking advantage of the attention mechanism and the focal loss, a novel network named Attention Dense UNet (ADUNet) is designed for tumor segmentation. In particular, the focal loss in ADUNet addresses the class-imbalance and model-overwhelming problems. Finally, a customized network is developed for tumor classification. Moreover, while most approaches consider only one or two series, the proposed algorithm takes multiple series of MRI images into account. Results Extensive experiments were carried out to evaluate performance on 435 multi-series MRI volumes from 87 patients collected from Tongji Hospital. In the dataset, all cases have benign tumors, malignant tumors, or both; the categories cover carcinoma, fibroadenoma, cyst, and abscess.
The ground truths of tumors were labeled by two radiologists with 3 years of experience in breast MRI reporting, who drew tumor contours slice by slice. ADUNet was compared quantitatively with other prevalent deep-learning methods on tumor segmentation and achieved the best performance on both Case Dice Score and Global Dice Score, at 0.748 and 0.801 respectively. Moreover, the customized classification network outperformed two CNN-M based models, achieving tumor-level and case-level AUCs of 0.831 and 0.918 respectively. Discussion All data in this paper were collected from the same MRI device, so it is reasonable to assume they come from the same domain and are independent and identically distributed; whether the proposed algorithm is robust enough in a multi-source setting remains an open question. Each stage of the proposed algorithm is trained separately, which makes each stage more robust and converge faster; however, this strategy treats each stage as a separate task and does not take into account the relationships between tasks.
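The focal loss that ADUNet uses against class imbalance can be sketched for the binary case; the `gamma` and `alpha` defaults below follow the common convention from the focal-loss literature and are not taken from this paper:

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for a single prediction.
    p: predicted probability of the positive class; y: true label (0 or 1).
    The (1 - p_t)**gamma factor down-weights easy, well-classified examples,
    so training is not overwhelmed by the abundant background class."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(max(p_t, 1e-12))
```

With gamma = 0 this reduces to alpha-weighted cross-entropy; raising gamma shrinks the loss of confident correct predictions toward zero.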
14
Applying Deep Learning for Breast Cancer Detection in Radiology. Curr Oncol 2022; 29:8767-8793. [PMID: 36421343 PMCID: PMC9689782 DOI: 10.3390/curroncol29110690] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Revised: 11/12/2022] [Accepted: 11/14/2022] [Indexed: 11/18/2022] Open
Abstract
Recent advances in deep learning have enhanced medical imaging research. Breast cancer is the most prevalent cancer among women, and many applications have been developed to improve its early detection. The purpose of this review is to examine how various deep learning methods can be applied to breast cancer screening workflows. We summarize deep learning methods, data availability, and the different screening modalities for breast cancer, including mammography, thermography, ultrasound, and magnetic resonance imaging. We then explore deep learning in diagnostic breast imaging and survey the relevant literature. In conclusion, we discuss some of the limitations of, and opportunities for, integrating artificial intelligence into breast cancer clinical practice.
15
Yue W, Zhang H, Zhou J, Li G, Tang Z, Sun Z, Cai J, Tian N, Gao S, Dong J, Liu Y, Bai X, Sheng F. Deep learning-based automatic segmentation for size and volumetric measurement of breast cancer on magnetic resonance imaging. Front Oncol 2022; 12:984626. [PMID: 36033453 PMCID: PMC9404224 DOI: 10.3389/fonc.2022.984626] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2022] [Accepted: 07/19/2022] [Indexed: 11/30/2022] Open
Abstract
Purpose In clinical work, accurately measuring the volume and size of breast cancer is important for developing a treatment plan. However, it is time-consuming, and inter- and intra-observer variation among radiologists exists. The purpose of this study was to assess the performance of a Res-UNet convolutional neural network based on automatic segmentation for size and volumetric measurement of mass-enhancement breast cancer on magnetic resonance imaging (MRI). Materials and methods A total of 1,000 female breast cancer patients who underwent preoperative 1.5-T dynamic contrast-enhanced MRI prior to treatment were selected from January 2015 to October 2021 and randomly divided into a training cohort (n = 800) and a testing cohort (n = 200). Compared with the ground-truth masks delineated manually by radiologists, model segmentation performance was evaluated with the dice similarity coefficient (DSC) and intraclass correlation coefficient (ICC). The performance of tumor (T) stage classification was evaluated with accuracy, sensitivity, and specificity. Results In the test cohort, the DSC of automatic segmentation reached 0.89. Excellent concordance (ICC > 0.95) for the maximal and minimal diameters and good concordance (ICC > 0.80) for volumetric measurement were shown between the model and the radiologists. The trained model took approximately 10–15 s to provide automatic segmentation and classified the T stage with an overall accuracy of 0.93; sensitivities of 0.94, 0.94, and 0.75; and specificities of 0.95, 0.92, and 0.99 for T1, T2, and T3, respectively. Conclusions Our model demonstrated good performance and reliability for automatic segmentation for size and volumetric measurement of breast cancer, which can be time-saving and effective in clinical decision-making.
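The Dice similarity coefficient (DSC) used above to score segmentation overlap is easy to state concretely; a minimal sketch over flat binary masks:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks
    (flat sequences of 0/1 values): 2|A ∩ B| / (|A| + |B|).
    Returns 1.0 for two empty masks by convention."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0
```

A DSC of 0.89, as reported, means the predicted and manual masks share roughly 89% of their combined foreground.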
Affiliation(s)
- Wenyi Yue: Department of Radiology, The Fifth Medical Center of Chinese PLA General Hospital, Beijing, China; Chinese PLA General Medical School, Beijing, China
- Hongtao Zhang: Department of Radiology, The Fifth Medical Center of Chinese PLA General Hospital, Beijing, China
- Juan Zhou: Department of Radiology, The Fifth Medical Center of Chinese PLA General Hospital, Beijing, China
- Guang Li: Keya Medical Technology Co., Ltd., Beijing, China
- Zhe Tang: Keya Medical Technology Co., Ltd., Beijing, China
- Zeyu Sun: Keya Medical Technology Co., Ltd., Beijing, China
- Jianming Cai: Department of Radiology, The Fifth Medical Center of Chinese PLA General Hospital, Beijing, China
- Ning Tian: Department of Radiology, The Fifth Medical Center of Chinese PLA General Hospital, Beijing, China
- Shen Gao: Department of Radiology, The Fifth Medical Center of Chinese PLA General Hospital, Beijing, China
- Jinghui Dong: Department of Radiology, The Fifth Medical Center of Chinese PLA General Hospital, Beijing, China
- Yuan Liu: Department of Radiology, The Fifth Medical Center of Chinese PLA General Hospital, Beijing, China
- Xu Bai: Department of Radiology, The Fifth Medical Center of Chinese PLA General Hospital, Beijing, China
- Fugeng Sheng (corresponding author): Department of Radiology, The Fifth Medical Center of Chinese PLA General Hospital, Beijing, China
16
Zhu J, Geng J, Shan W, Zhang B, Shen H, Dong X, Liu M, Li X, Cheng L. Development and validation of a deep learning model for breast lesion segmentation and characterization in multiparametric MRI. Front Oncol 2022; 12:946580. [PMID: 36033449 PMCID: PMC9402900 DOI: 10.3389/fonc.2022.946580] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2022] [Accepted: 07/12/2022] [Indexed: 11/13/2022] Open
Abstract
Importance The use of artificial intelligence to differentiate benign from malignant breast lesions in multiparametric MRI (mpMRI) can help radiologists improve diagnostic performance. Objectives To develop an automated deep learning model for breast lesion segmentation and characterization and to evaluate the characterization performance of the AI models and radiologists. Materials and methods For lesion segmentation, 2,823 patients were used for the training, validation, and testing of the VNet-based segmentation models, and the average Dice similarity coefficient (DSC) between the manual segmentation by radiologists and the mask generated by VNet was calculated. For lesion characterization, 3,303 female patients with 3,607 pathologically confirmed lesions (2,213 malignant and 1,394 benign) were used for the three ResNet-based characterization models (two single-input models and one multi-input model). Histopathology was used as the diagnostic reference standard to assess the characterization performance of the AI models and of the BI-RADS categories assigned by the radiologists, in terms of sensitivity, specificity, accuracy, and the area under the receiver operating characteristic curve (AUC). An additional 123 patients with 136 lesions (81 malignant and 55 benign) from another institution were available for external testing. Results Of the 5,811 patients included in the study, the mean age was 46.14 (range 11–89) years. In the segmentation task, a DSC of 0.860 was obtained between the VNet-generated mask and manual segmentation by radiologists. In the characterization task, the AUCs of the multi-input and the two single-input models were 0.927, 0.821, and 0.795, respectively. Compared with the single-input DWI or DCE model, the multi-input DCE-and-DWI model obtained a significant increase in sensitivity, specificity, and accuracy (0.831 vs. 0.772/0.776, 0.874 vs. 0.630/0.709, 0.846 vs. 0.721/0.752).
Furthermore, the specificity of the multi-input model was higher than that of the radiologists, whether using BI-RADS category 3 or 4 as a cutoff point (0.874 vs. 0.404/0.841), and the accuracy was intermediate between the two assessment methods (0.846 vs. 0.773/0.882). For the external testing, the performance of the three models remained robust with AUCs of 0.812, 0.831, and 0.885, respectively. Conclusions Combining DCE with DWI was superior to applying a single sequence for breast lesion characterization. The deep learning computer-aided diagnosis (CADx) model we developed significantly improved specificity and achieved comparable accuracy to the radiologists with promise for clinical application to provide preliminary diagnoses.
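The sensitivity, specificity, and accuracy figures quoted throughout come from standard confusion-matrix arithmetic; a minimal sketch (the labels below are illustrative, not study data):

```python
def confusion_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from binary labels
    (1 = malignant, 0 = benign)."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)  # true positives
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)  # true negatives
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)  # false negatives
    sensitivity = tp / (tp + fn)   # fraction of malignant lesions caught
    specificity = tn / (tn + fp)   # fraction of benign lesions cleared
    accuracy = (tp + tn) / len(pairs)
    return sensitivity, specificity, accuracy
```

The trade-off the abstract highlights, higher specificity at comparable accuracy, corresponds to fewer false positives at a similar overall error rate.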
Affiliation(s)
- Jingjin Zhu: School of Medicine, Nankai University, Tianjin, China; Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing, China
- Jiahui Geng: Department of Neurology, Beijing Tiantan Hospital, Beijing, China
- Wei Shan: Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Boya Zhang: School of Medicine, Nankai University, Tianjin, China; Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing, China
- Huaqing Shen: Department of Neurology, Beijing Tiantan Hospital, Beijing, China
- Xiaohan Dong: Department of Radiology, Chinese People’s Liberation Army General Hospital, Beijing, China
- Mei Liu: Department of Pathology, Chinese People’s Liberation Army General Hospital, Beijing, China
- Xiru Li (corresponding author): Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing, China
- Liuquan Cheng (corresponding author): Department of Radiology, Chinese People’s Liberation Army General Hospital, Beijing, China
17
Yoo JE, Rho M. Large-Scale Survey Data Analysis with Penalized Regression: A Monte Carlo Simulation on Missing Categorical Predictors. MULTIVARIATE BEHAVIORAL RESEARCH 2022; 57:642-657. [PMID: 33703972 DOI: 10.1080/00273171.2021.1891856] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
With the advent of the big data era, machine learning methods have evolved and proliferated. This study focused on penalized regression, a machine-learning procedure that builds interpretable prediction models. In particular, penalized regression coupled with large-scale data can explore hundreds or thousands of variables in one statistical model without convergence problems and identify as-yet uninvestigated important predictors. As one of the first Monte Carlo simulation studies to investigate predictive modeling with missing categorical predictors in the context of social science research, this study endeavored to emulate real large-scale social science data. Likert-scaled variables were simulated, as well as multiple-category and count variables. Because categorical predictors were included in the modeling, penalized regression methods that consider the grouping effect, such as group Mnet, were employed. We also examined the applicability of the simulation conditions with a real large-scale dataset that the simulation study referenced. In particular, the study presents selection counts of variables after multiple iterations of modeling, to account for the bias resulting from data-splitting in model validation. Selection counts turned out to be a necessary tool when variable selection is of research interest. Efforts to utilize large-scale data to the fullest appear to offer a valid approach to mitigating the effect of nonignorable missingness. Overall, penalized regression, which assumes linearity, is a viable method for analyzing large-scale social science survey data.
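Penalized regression selects variables by shrinking coefficients toward zero; for the plain lasso penalty this reduces to the soft-thresholding operator, shown here as a minimal sketch (the study's group Mnet penalty is more elaborate, adding a grouping structure and a nonconvex component):

```python
def soft_threshold(z, lam):
    """Lasso soft-thresholding operator S(z, lam).
    Shrinks a raw coefficient estimate z toward zero by lam, and zeroes it
    out entirely when |z| <= lam -- which is how the penalty deselects
    variables. Applied coordinate-wise inside coordinate-descent solvers."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0
```

Counting how often each coefficient survives thresholding across repeated data splits yields the "selection counts" the abstract describes.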
Affiliation(s)
- Jin Eun Yoo: Department of Education, Korea National University of Education
- Minjeong Rho: Department of Education, Korea National University of Education
18
Thakran S, Gupta RK, Singh A. Characterization of breast tumors using machine learning based upon multiparametric magnetic resonance imaging features. NMR IN BIOMEDICINE 2022; 35:e4665. [PMID: 34962326 DOI: 10.1002/nbm.4665] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/31/2021] [Revised: 11/22/2021] [Accepted: 11/23/2021] [Indexed: 06/14/2023]
Abstract
Magnetic resonance imaging (MRI) is playing an important role in the classification of breast tumors. MRI can be used to obtain multiparametric (mp) information, such as structural, hemodynamic, and physiological information. Quantitative analysis of mp-MRI data has shown potential in improving the accuracy of breast tumor classification. In general, a large set of quantitative and texture features can be generated depending upon the type of methodology used. A suitable combination of selected quantitative and texture features can further improve the accuracy of tumor classification. Machine learning (ML) classifiers based upon features derived from MRI data have shown potential in tumor classification. There is a need for further research on selecting an appropriate combination of features and evaluating the performance of different ML classifiers for accurate classification of breast tumors. The objective of the current study was to develop and optimize an ML framework based upon mp-MRI features for the characterization of breast tumors (malignant vs. benign and low- vs. high-grade). This study included the breast mp-MRI data of 60 female patients with histopathology results. A total of 128 features were extracted from the mp-MRI tumor data, followed by feature selection. Five ML classifiers were evaluated for tumor classification using 10-fold cross-validation with 10 repetitions. The support vector machine (SVM) classifier based on optimum features selected using a wrapper method with an adaptive boosting (AdaBoost) technique provided the highest sensitivity (0.96 ± 0.03), specificity (0.92 ± 0.09), and accuracy (94% ± 2.91%) in the classification of malignant versus benign tumors. This method also provided the highest sensitivity (0.94 ± 0.07), specificity (0.80 ± 0.05), and accuracy (90% ± 5.48%) in the classification of low- versus high-grade tumors.
These findings suggest that the SVM classifier outperformed other ML methods in the binary classification of breast tumors.
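The 10-fold cross-validation with 10 repetitions used for evaluation rests on repeated index splitting; a minimal, library-free sketch (fold construction and seeding here are illustrative, not the study's exact protocol):

```python
import random

def k_fold_indices(n, k, seed=0):
    """Return k (train_indices, test_indices) pairs over n samples.
    Repeating with a different seed per repetition gives the
    'k-fold with r repetitions' scheme."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]          # k disjoint test folds
    return [(sorted(set(idx) - set(f)), sorted(f)) for f in folds]
```

Each sample appears in exactly one test fold per repetition, so every metric (sensitivity, specificity, accuracy) is averaged over k * r held-out evaluations.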
Affiliation(s)
- Snekha Thakran: Centre for Biomedical Engineering, Indian Institute of Technology Delhi, New Delhi, India
- Rakesh Kumar Gupta: Department of Radiology, Fortis Memorial Research Institute, Gurgaon, India
- Anup Singh: Centre for Biomedical Engineering, Indian Institute of Technology Delhi, New Delhi, India; Department of Biomedical Engineering, All India Institute of Medical Sciences, New Delhi, India
19
Heo J, Lim JH, Lee HR, Jang JY, Shin YS, Kim D, Lim JY, Park YM, Koh YW, Ahn SH, Chung EJ, Lee DY, Seok J, Kim CH. Deep learning model for tongue cancer diagnosis using endoscopic images. Sci Rep 2022; 12:6281. [PMID: 35428854 PMCID: PMC9012779 DOI: 10.1038/s41598-022-10287-9] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2021] [Accepted: 03/29/2022] [Indexed: 12/29/2022] Open
Abstract
In this study, we developed a deep learning model to identify patients with tongue cancer based on a validated dataset comprising oral endoscopic images. We retrospectively constructed a dataset of 12,400 verified endoscopic images from five university hospitals in South Korea, collected between 2010 and 2020 with the participation of otolaryngologists. To calculate the probability of malignancy using various convolutional neural network (CNN) architectures, several deep learning models were developed. Of the 12,400 total images, 5576 images related to the tongue were extracted. The CNN models showed a mean area under the receiver operating characteristic curve (AUROC) of 0.845 and a mean area under the precision-recall curve (AUPRC) of 0.892. The results indicate that the best model was DenseNet169 (AUROC 0.895 and AUPRC 0.918). The deep learning model, general physicians, and oncology specialists had sensitivities of 81.1%, 77.3%, and 91.7%; specificities of 86.8%, 75.0%, and 90.9%; and accuracies of 84.7%, 75.9%, and 91.2%, respectively. Meanwhile, fair agreement between the oncologist and the developed model was shown for cancer diagnosis (kappa value = 0.685). The deep learning model developed based on the verified endoscopic image dataset showed acceptable performance in tongue cancer diagnosis.
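The kappa value quoted for oncologist-model agreement is Cohen's kappa, which corrects raw agreement for chance; a minimal sketch for two binary raters (the label vectors are illustrative):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two binary raters: (p_o - p_e) / (1 - p_e),
    where p_o is observed agreement and p_e is the agreement expected
    by chance from each rater's marginal positive rate."""
    n = len(rater_a)
    p_o = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n
    pa1 = sum(rater_a) / n          # rater A's positive rate
    pb1 = sum(rater_b) / n          # rater B's positive rate
    p_e = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return (p_o - p_e) / (1 - p_e)
```

Kappa is 1.0 for perfect agreement and 0.0 when agreement is no better than chance; the reported 0.685 sits between those extremes.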
Affiliation(s)
- Jaesung Heo: Department of Radiation Oncology, Ajou University School of Medicine, Suwon, Republic of Korea
- June Hyuck Lim: Department of Radiation Oncology, Ajou University School of Medicine, Suwon, Republic of Korea
- Hye Ran Lee: Department of Otolaryngology, Ajou University School of Medicine, 164 Worldcup-ro, Yeongtong-gu, Suwon, 16499, Republic of Korea
- Jeon Yeob Jang: Department of Otolaryngology, Ajou University School of Medicine, 164 Worldcup-ro, Yeongtong-gu, Suwon, 16499, Republic of Korea
- Yoo Seob Shin: Department of Otolaryngology, Ajou University School of Medicine, 164 Worldcup-ro, Yeongtong-gu, Suwon, 16499, Republic of Korea
- Dahee Kim: Department of Otorhinolaryngology, Yonsei University, Seoul, Republic of Korea
- Jae Yol Lim: Department of Otorhinolaryngology, Yonsei University, Seoul, Republic of Korea
- Young Min Park: Department of Otorhinolaryngology, Yonsei University, Seoul, Republic of Korea
- Yoon Woo Koh: Department of Otorhinolaryngology, Yonsei University, Seoul, Republic of Korea
- Soon-Hyun Ahn: Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Hospital, Seoul, Republic of Korea
- Eun-Jae Chung: Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Hospital, Seoul, Republic of Korea
- Doh Young Lee: Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Hospital, Seoul, Republic of Korea
- Jungirl Seok: Department of Otorhinolaryngology-Head & Neck Surgery, National Cancer Center, Goyang, Republic of Korea
- Chul-Ho Kim: Department of Otolaryngology, Ajou University School of Medicine, 164 Worldcup-ro, Yeongtong-gu, Suwon, 16499, Republic of Korea
20
Wu Y, Wu J, Dou Y, Rubert N, Wang Y, Deng J. A deep learning fusion model with evidence-based confidence level analysis for differentiation of malignant and benign breast tumors using dynamic contrast enhanced MRI. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103319] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
21
Yang J, Ju J, Guo L, Ji B, Shi S, Yang Z, Gao S, Yuan X, Tian G, Liang Y, Yuan P. Prediction of HER2-positive breast cancer recurrence and metastasis risk from histopathological images and clinical information via multimodal deep learning. Comput Struct Biotechnol J 2022; 20:333-342. [PMID: 35035786 PMCID: PMC8733169 DOI: 10.1016/j.csbj.2021.12.028] [Citation(s) in RCA: 80] [Impact Index Per Article: 40.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2021] [Revised: 12/12/2021] [Accepted: 12/18/2021] [Indexed: 12/18/2022] Open
Abstract
HER2-positive breast cancer is a highly heterogeneous tumor, and about 30% of patients still suffer recurrence and metastasis after trastuzumab targeted therapy. Predicting individual prognosis is of great significance for the further development of precision therapy. With the continuous development of computer technology, more and more attention has been paid to computer-aided diagnosis and prognosis prediction based on Hematoxylin and Eosin (H&E) pathological images, which are available for all breast cancer patients who have undergone surgical treatment. In this study, we first enrolled 127 HER2-positive breast cancer patients with known recurrence and metastasis status from the Cancer Hospital of the Chinese Academy of Medical Sciences. We then proposed a novel multimodal deep learning method integrating whole-slide H&E images (WSIs) and clinical information to accurately assess the risk of relapse and metastasis in patients with HER2-positive breast cancer. Specifically, we obtained the whole H&E staining images from the surgical specimens of breast cancer patients, and these images were resized to 512 × 512 pixels. A deep convolutional neural network (CNN) was applied to these images to retrieve image features, which were combined with the clinical data. Based on the combined features, a novel multimodal model was then constructed for predicting the prognosis of each patient. The model achieved an area under the curve (AUC) of 0.76 in two-fold cross-validation (CV). To further evaluate the performance of our model, we downloaded the data of all 123 HER2-positive breast cancer patients with an available H&E image and known recurrence and metastasis status in The Cancer Genome Atlas (TCGA), which served as an independent testing dataset. Despite the large differences in race and experimental strategies, our model achieved an AUC of 0.72 on the TCGA samples.
In conclusion, H&E images, in conjunction with clinical information and advanced deep learning models, can be used to evaluate the risk of relapse and metastasis in patients with HER2-positive breast cancer.
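The multimodal step described here, combining CNN-derived image features with clinical variables, can be sketched at its simplest as feature concatenation feeding a downstream scorer; the feature values and weights below are illustrative, not the paper's architecture:

```python
def fuse_features(image_features, clinical_features):
    """Late-fusion sketch: concatenate CNN image features with a
    clinical-variable vector into one input for a downstream classifier."""
    return list(image_features) + list(clinical_features)

def linear_risk_score(features, weights, bias=0.0):
    """Toy linear risk score over the fused vector (weights are made up;
    a real model would learn them end-to-end)."""
    return bias + sum(w * x for w, x in zip(weights, features))
```

In practice the fused vector would feed a trained classification head rather than fixed weights; the sketch only shows where the two modalities meet.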
22
Frankhouser DE, Dietze E, Mahabal A, Seewaldt VL. Vascularity and Dynamic Contrast-Enhanced Breast Magnetic Resonance Imaging. FRONTIERS IN RADIOLOGY 2021; 1:735567. [PMID: 37492179 PMCID: PMC10364989 DOI: 10.3389/fradi.2021.735567] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/11/2021] [Accepted: 11/11/2021] [Indexed: 07/27/2023]
Abstract
Angiogenesis is a key step in the initiation and progression of an invasive breast cancer. High microvessel density on morphological characterization predicts metastasis and poor survival in women with invasive breast cancers. However, morphologic characterization is subject to variability and can evaluate only a limited portion of an invasive breast cancer. Consequently, breast Magnetic Resonance Imaging (MRI) is currently being evaluated to assess vascularity. Recently, through the new field of radiomics, dynamic contrast-enhanced (DCE)-MRI is being used to evaluate vascular density, vascular morphology, and detection of aggressive breast cancer biology. While DCE-MRI is a highly sensitive tool, there are specific features that limit computational evaluation of blood vessels: (1) DCE-MRI evaluates gadolinium contrast and does not directly evaluate biology, (2) the resolution of DCE-MRI is insufficient for imaging small blood vessels, and (3) DCE-MRI images are very difficult to co-register. Here we review computational approaches for the detection and analysis of blood vessels in DCE-MRI images and present some of the strategies we have developed for co-registration of DCE-MRI images and early detection of vascularization.
Affiliation(s)
- David E. Frankhouser
- Department of Population Sciences, City of Hope National Medical Center, Duarte, CA, United States
| | - Eric Dietze
- Department of Population Sciences, City of Hope National Medical Center, Duarte, CA, United States
| | - Ashish Mahabal
- Department of Astronomy, Division of Physics, Mathematics, and Astronomy, California Institute of Technology (Caltech), Pasadena, CA, United States
| | - Victoria L. Seewaldt
- Department of Population Sciences, City of Hope National Medical Center, Duarte, CA, United States
| |
|
23
|
Artificial Intelligence Evidence-Based Current Status and Potential for Lower Limb Vascular Management. J Pers Med 2021; 11:jpm11121280. [PMID: 34945749 PMCID: PMC8705683 DOI: 10.3390/jpm11121280] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2021] [Revised: 11/22/2021] [Accepted: 11/24/2021] [Indexed: 12/14/2022] Open
Abstract
Consultation prioritization is fundamental to optimal healthcare management, and its performance can be helped by artificial intelligence (AI)-dedicated software and by digital medicine in general. The need for remote consultation has been demonstrated not only in pandemic-induced lockdowns but also in rural settings where access to health centers is constantly limited. The term “AI” indicates the use of a computer to simulate human intellectual behavior with minimal human intervention. AI is based on a “machine learning” process or on an artificial neural network. AI provides accurate diagnostic algorithms and personalized treatments in many fields, including oncology, ophthalmology, traumatology, and dermatology. AI can help vascular specialists in the diagnosis of peripheral artery disease, cerebrovascular disease, and deep vein thrombosis by analyzing contrast-enhanced magnetic resonance imaging or ultrasound data, and in the diagnosis of pulmonary embolism on multi-slice computed angiograms. Automatic methods based on AI may be applied to detect the presence and determine the clinical class of chronic venous disease. Nevertheless, data on the use of AI in this field are still scarce. In this narrative review, the authors discuss available data on AI implementation in arterial and venous disease diagnostics and care.
|
24
|
Bitencourt A, Daimiel Naranjo I, Lo Gullo R, Rossi Saccarelli C, Pinker K. AI-enhanced breast imaging: Where are we and where are we heading? Eur J Radiol 2021; 142:109882. [PMID: 34392105 PMCID: PMC8387447 DOI: 10.1016/j.ejrad.2021.109882] [Citation(s) in RCA: 35] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2021] [Revised: 07/15/2021] [Accepted: 07/26/2021] [Indexed: 12/22/2022]
Abstract
Significant advances in imaging analysis and the development of high-throughput methods that can extract and correlate multiple imaging parameters with different clinical outcomes have led to a new direction in medical research. Radiomics and artificial intelligence (AI) studies are rapidly evolving and have many potential applications in breast imaging, such as breast cancer risk prediction, lesion detection and classification, radiogenomics, and prediction of treatment response and clinical outcomes. AI has been applied to different breast imaging modalities, including mammography, ultrasound, and magnetic resonance imaging, in different clinical scenarios. The application of AI tools in breast imaging has an unprecedented opportunity to better derive clinical value from imaging data and reshape the way we care for our patients. The aim of this study is to review the current knowledge and future applications of AI-enhanced breast imaging in clinical practice.
Affiliation(s)
- Almir Bitencourt
- Department of Imaging, A.C.Camargo Cancer Center, Sao Paulo, SP, Brazil; Dasa, Sao Paulo, SP, Brazil
| | - Isaac Daimiel Naranjo
- Department of Radiology, Breast Imaging Service, Guy's and St. Thomas' NHS Trust, Great Maze Pond, London, UK
| | - Roberto Lo Gullo
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
| | | | - Katja Pinker
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA.
| |
|
25
|
Lassau N, Bousaid I, Chouzenoux E, Verdon A, Balleyguier C, Bidault F, Mousseaux E, Harguem-Zayani S, Gaillandre L, Bensalah Z, Doutriaux-Dumoulin I, Monroc M, Haquin A, Ceugnart L, Bachelle F, Charlot M, Thomassin-Naggara I, Fourquet T, Dapvril H, Orabona J, Chamming's F, El Haik M, Zhang-Yin J, Guillot MS, Ohana M, Caramella T, Diascorn Y, Airaud JY, Cuingnet P, Gencer U, Lawrance L, Luciani A, Cotten A, Meder JF. Three artificial intelligence data challenges based on CT and ultrasound. Diagn Interv Imaging 2021; 102:669-674. [PMID: 34312111 DOI: 10.1016/j.diii.2021.06.005] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2021] [Revised: 06/21/2021] [Accepted: 06/23/2021] [Indexed: 12/18/2022]
Abstract
PURPOSE The 2020 edition of these Data Challenges was organized by the French Society of Radiology (SFR) from September 28 to September 30, 2020. The goals were to propose innovative artificial intelligence solutions for currently relevant problems in radiology and to build a large database of multimodal medical images of ultrasound and computed tomography (CT) on these subjects from several French radiology centers. MATERIALS AND METHODS This year, the aim was to create data challenge objectives in line with the clinical routine of radiologists, with less preprocessing of data and annotation, leaving a large part of the preprocessing task to the participating teams. The objectives were proposed by the different organizations depending on their core areas of expertise. A dedicated platform was used to upload the medical image data and to automatically anonymize it. RESULTS Three challenges were proposed: classification of benign or malignant breast nodules on ultrasound examinations, detection and contouring of pathological neck lymph nodes on cervical CT examinations, and classification of the calcium score of coronary calcifications on thoracic CT examinations. A total of 2076 medical examinations were included in the database for the three challenges, in three months, by 18 different centers, of which 12% were excluded. The 39 participants were divided into six multidisciplinary teams; the coronary calcification score challenge was solved with a concordance index > 95%, and the other two with scores of 67% (breast nodule classification) and 63% (neck lymph node segmentation).
Affiliation(s)
- Nathalie Lassau
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay. BIOMAPS, UMR 1281. Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; Department of Imaging, Institut Gustave Roussy, 94800 Villejuif, France.
| | - Imad Bousaid
- Direction de la Transformation Numérique et des Systèmes d'Information, Institut Gustave Roussy, 94800 Villejuif, France
| | | | - Antoine Verdon
- Direction de la Transformation Numérique et des Systèmes d'Information, Institut Gustave Roussy, 94800 Villejuif, France
| | - Corinne Balleyguier
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay. BIOMAPS, UMR 1281. Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; Department of Imaging, Institut Gustave Roussy, 94800 Villejuif, France
| | - François Bidault
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay. BIOMAPS, UMR 1281. Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; Department of Imaging, Institut Gustave Roussy, 94800 Villejuif, France
| | - Elie Mousseaux
- Unité Fonctionnelle d'Imagerie Cardiovasculaire Non Invasive, Hôpital Européen Georges Pompidou, AP-HP, 75015 Paris, France
| | - Sana Harguem-Zayani
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay. BIOMAPS, UMR 1281. Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; Department of Imaging, Institut Gustave Roussy, 94800 Villejuif, France
| | - Loic Gaillandre
- Centre Libéral d'Imagerie Médicale Agglomération Lille, 59800 Lille, France
| | - Zoubir Bensalah
- Department of Radiology, Centre Hospitalier St Jean, 66000 Perpignan, France
| | | | - Michèle Monroc
- Department of Radiology, Clinique Saint Antoine, 76230 Bois-Guillaume, France
| | - Audrey Haquin
- Department of Radiology, Hôpital de la Croix-Rousse - HCL, 69004 Lyon, France
| | - Luc Ceugnart
- Department of Radiology, Centre Oscar Lambret, 59000 Lille, France
| | | | - Mathilde Charlot
- Department of Radiology, Hôpital Lyon Sud - HCL, 69310 Pierre-Bénite, France
| | | | - Tiphaine Fourquet
- Department of Radiology, Centre Hospitalier Universitaire de Lille, 59000 Lille, France
| | - Héloise Dapvril
- Service d'Imagerie de la Femme, Centre Hospitalier de Valenciennes, 59300 Valenciennes, France
| | - Joseph Orabona
- Department of Radiology, Centre Hospitalier de Bastia, 20600 Bastia, France
| | | | - Mickael El Haik
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay. BIOMAPS, UMR 1281. Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France; Department of Imaging, Institut Gustave Roussy, 94800 Villejuif, France
| | - Jules Zhang-Yin
- Department of Radiology, Hôpital Tenon, AP-HP, 75020 Paris, France
| | - Marc-Samir Guillot
- Unité Fonctionnelle d'Imagerie Cardiovasculaire Non Invasive, Hôpital Européen Georges Pompidou, AP-HP, 75015 Paris, France
| | - Mickaël Ohana
- Department of Radiology, Centre Hospitalier Universitaire de Strasbourg, 67200 Strasbourg, France
| | - Thomas Caramella
- Department of Radiology, Institut Arnault Tzanck, 06700 Saint-Laurent du Var, France
| | - Yann Diascorn
- Department of Radiology, Institut Arnault Tzanck, 06700 Saint-Laurent du Var, France
| | | | - Philippe Cuingnet
- Department of Radiology, Centre Hospitalier de Douai, 59507 Douai, France
| | - Umit Gencer
- Unité Fonctionnelle d'Imagerie Cardiovasculaire Non Invasive, Hôpital Européen Georges Pompidou, AP-HP, 75015 Paris, France
| | - Littisha Lawrance
- Laboratoire d'Imagerie Biomédicale Multimodale Paris-Saclay. BIOMAPS, UMR 1281. Université Paris-Saclay, Inserm, CNRS, CEA, 94800 Villejuif, France
| | - Alain Luciani
- Collège des Enseignants de Radiologie de France, 75013 Paris, France; Department of Radiology, Centre Hospitalier Henri Mondor, 94000 Créteil, France
| | - Anne Cotten
- Musculoskeletal Imaging Department, Lille Regional University Hospital, 59000 Lille, France
| | - Jean-François Meder
- Department of Neuroradiology, Centre Hospitalier Sainte-Anne, 75014 Paris, France; Université de Paris, Faculté de Médecine, 75006 Paris, France
| |
|
26
|
Lei YM, Yin M, Yu MH, Yu J, Zeng SE, Lv WZ, Li J, Ye HR, Cui XW, Dietrich CF. Artificial Intelligence in Medical Imaging of the Breast. Front Oncol 2021; 11:600557. [PMID: 34367938 PMCID: PMC8339920 DOI: 10.3389/fonc.2021.600557] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2020] [Accepted: 07/07/2021] [Indexed: 12/24/2022] Open
Abstract
Artificial intelligence (AI) has invaded our daily lives, and in the last decade, there have been very promising applications of AI in the field of medicine, including medical imaging, in vitro diagnosis, intelligent rehabilitation, and prognosis. Breast cancer is one of the most common malignant tumors in women and seriously threatens women’s physical and mental health. Early screening for breast cancer via mammography, ultrasound and magnetic resonance imaging (MRI) can significantly improve the prognosis of patients. AI has shown excellent performance in image recognition tasks and has been widely studied in breast cancer screening. This paper introduces the background of AI and its application in breast medical imaging (mammography, ultrasound and MRI), such as in the identification, segmentation and classification of lesions; breast density assessment; and breast cancer risk assessment. In addition, we also discuss the challenges and future perspectives of the application of AI in medical imaging of the breast.
Affiliation(s)
- Yu-Meng Lei
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
| | - Miao Yin
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
| | - Mei-Hui Yu
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
| | - Jing Yu
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
| | - Shu-E Zeng
- Department of Medical Ultrasound, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Wen-Zhi Lv
- Department of Artificial Intelligence, Julei Technology, Wuhan, China
| | - Jun Li
- Department of Medical Ultrasound, The First Affiliated Hospital of Medical College, Shihezi University, Xinjiang, China
| | - Hua-Rong Ye
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
| | - Xin-Wu Cui
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Christoph F Dietrich
- Department Allgemeine Innere Medizin (DAIM), Kliniken Beau Site, Salem und Permanence, Bern, Switzerland
| |
|
27
|
Meng W, Sun Y, Qian H, Chen X, Yu Q, Abiyasi N, Yan S, Peng H, Zhang H, Zhang X. Computer-Aided Diagnosis Evaluation of the Correlation Between Magnetic Resonance Imaging With Molecular Subtypes in Breast Cancer. Front Oncol 2021; 11:693339. [PMID: 34249745 PMCID: PMC8260834 DOI: 10.3389/fonc.2021.693339] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2021] [Accepted: 05/26/2021] [Indexed: 12/25/2022] Open
Abstract
Background There is a demand for additional alternative methods that can allow precise and convenient differentiation of breast tumors into molecular subtypes. Purpose The present study aimed to determine suitable optimal classifiers and investigate the general applicability of computer-aided diagnosis (CAD) to associate breast cancer molecular subtypes with extracted MR imaging features. Methods We analyzed a total of 264 patients (mean age: 47.9 ± 9.7 years; range: 19–81 years) with 264 masses (mean size: 28.6 ± 15.86 mm; range: 5–91 mm), using a U-Net model for segmentation and Gradient Tree Boosting (GTB) for classification. Results The tumors were segmented clearly and automatically by the U-Net model. All the extracted features, including the shape features and texture features of the tumors and the clinical features, were input into the classifiers; the results showed that the GTB classifier was superior to the other classifiers, achieving an F1-score of 0.72, an AUC of 0.81 and a score of 0.71. Analyzing different feature combinations, we found that the texture features combined with the clinical features were optimal for differentiating the breast cancer subtypes. Conclusion CAD is feasible for differentiating breast cancer subtypes: automatic segmentation was feasible with the U-Net model, and the texture features extracted from breast MR imaging, together with the clinical features, can help differentiate the molecular subtype. Moreover, among the clinical features, BPE and age have the best potential for subtype differentiation.
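The classification stage above can be sketched in a few lines: texture descriptors from the segmented mass are concatenated with clinical features and fed to a Gradient Tree Boosting classifier. The feature counts and random data are placeholders, and scikit-learn's GradientBoostingClassifier stands in for whatever GTB implementation the study used:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Hypothetical stand-ins for the extracted descriptors: texture features
# computed inside the U-Net segmentation mask, plus clinical features such
# as age and background parenchymal enhancement (BPE).
n_masses = 264
texture = rng.normal(size=(n_masses, 20))
clinical = rng.normal(size=(n_masses, 4))
subtype = rng.integers(0, 4, size=n_masses)   # four molecular subtypes

X = np.hstack([texture, clinical])            # texture + clinical combination
gtb = GradientBoostingClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(gtb, X, subtype, cv=3, scoring="f1_macro")
print(X.shape, scores.shape)
```

With real descriptors (rather than noise), comparing feature combinations simply means swapping which columns go into `X` and re-running the cross-validation.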
Affiliation(s)
- Wei Meng
- Department of Radiology, Third Affiliated Hospital of Harbin Medical University, Harbin, China
| | - Yunfeng Sun
- Department of Radiology, Third Affiliated Hospital of Harbin Medical University, Harbin, China
| | - Haibin Qian
- Department of Radiology, Third Affiliated Hospital of Harbin Medical University, Harbin, China
| | - Xiaodan Chen
- School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
| | - Qiujie Yu
- Department of Radiology, Third Affiliated Hospital of Harbin Medical University, Harbin, China
| | - Nanding Abiyasi
- Department of Pathology, Third Affiliated Hospital of Harbin Medical University, Harbin, China
| | - Shaolei Yan
- Department of Radiology, Third Affiliated Hospital of Harbin Medical University, Harbin, China
| | - Haiyong Peng
- Department of Radiology, Third Affiliated Hospital of Harbin Medical University, Harbin, China
| | - Hongxia Zhang
- Department of Radiology, Third Affiliated Hospital of Harbin Medical University, Harbin, China
| | - Xiushi Zhang
- Department of Radiology, Third Affiliated Hospital of Harbin Medical University, Harbin, China
| |
|
28
|
Swiecicki A, Konz N, Buda M, Mazurowski MA. A generative adversarial network-based abnormality detection using only normal images for model training with application to digital breast tomosynthesis. Sci Rep 2021; 11:10276. [PMID: 33986361 PMCID: PMC8119417 DOI: 10.1038/s41598-021-89626-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2020] [Accepted: 04/20/2021] [Indexed: 01/07/2023] Open
Abstract
Deep learning has shown tremendous potential in the task of object detection in images. However, a common challenge with this task is when only a limited number of images containing the object of interest are available. This is a particular issue in cancer screening, such as digital breast tomosynthesis (DBT), where less than 1% of cases contain cancer. In this study, we propose a method to train an inpainting generative adversarial network to be used for cancer detection using only images that do not contain cancer. During inference, we removed a part of the image and used the network to complete the removed part. A significant error in completing an image part was considered an indication that such a location is unexpected and thus abnormal. A large dataset of DBT images used in this study was collected at Duke University. It consisted of 19,230 reconstructed volumes from 4348 patients. Cancerous masses and architectural distortions were marked with bounding boxes by radiologists. Our experiments showed that the locations containing cancer were associated with a notably higher completion error than the non-cancer locations (mean error ratio of 2.77). All data used in this study has been made publicly available by the authors.
Affiliation(s)
- Albert Swiecicki
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA.
| | - Nicholas Konz
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA
| | - Mateusz Buda
- Department of Radiology, Duke University, Durham, NC, USA
| | - Maciej A Mazurowski
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA.,Department of Radiology, Duke University, Durham, NC, USA
| |
|
29
|
Hu Q, Whitney HM, Li H, Ji Y, Liu P, Giger ML. Improved Classification of Benign and Malignant Breast Lesions Using Deep Feature Maximum Intensity Projection MRI in Breast Cancer Diagnosis Using Dynamic Contrast-enhanced MRI. Radiol Artif Intell 2021; 3:e200159. [PMID: 34235439 PMCID: PMC8231792 DOI: 10.1148/ryai.2021200159] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2020] [Revised: 02/04/2021] [Accepted: 02/09/2021] [Indexed: 04/16/2023]
Abstract
PURPOSE To develop a deep transfer learning method that incorporates four-dimensional (4D) information in dynamic contrast-enhanced (DCE) MRI to classify benign and malignant breast lesions. MATERIALS AND METHODS The retrospective dataset is composed of 1990 distinct lesions (1494 malignant and 496 benign) from 1979 women (mean age, 47 years ± 10). Lesions were split into a training and validation set of 1455 lesions (acquired in 2015-2016) and an independent test set of 535 lesions (acquired in 2017). Features were extracted from a convolutional neural network (CNN), and lesions were classified as benign or malignant using support vector machines. Volumetric information was collapsed into two dimensions by taking the maximum intensity projection (MIP) at the image level or feature level within the CNN architecture. Performances were evaluated using the area under the receiver operating characteristic curve (AUC) as the figure of merit and were compared using the DeLong test. RESULTS The image MIP and feature MIP methods yielded AUCs of 0.91 (95% CI: 0.87, 0.94) and 0.93 (95% CI: 0.91, 0.96), respectively, for the independent test set. The feature MIP method achieved higher performance than the image MIP method (∆AUC 95% CI: 0.003, 0.051; P = .03). CONCLUSION Incorporating 4D information in DCE MRI by MIP of features in deep transfer learning demonstrated superior classification performance compared with using MIP images as input in the task of distinguishing between benign and malignant breast lesions. Keywords: Breast, Computer Aided Diagnosis (CAD), Convolutional Neural Network (CNN), MR-Dynamic Contrast Enhanced, Supervised learning, Support vector machines (SVM), Transfer learning, Volume Analysis © RSNA, 2021.
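The feature-level MIP can be sketched in a few lines: run the CNN on each axial slice, then take the per-feature maximum over the slice axis before classification. The array shapes below are illustrative assumptions, not the dimensions used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-slice CNN feature maps for one DCE-MRI volume:
# (n_slices, channels, height, width), e.g. from an intermediate conv layer.
feature_maps = rng.normal(size=(32, 64, 14, 14))

# Image-level MIP would collapse the volume BEFORE the CNN; feature-level
# MIP instead runs the CNN per slice and takes the maximum of each feature
# over the slice axis, preserving per-slice responses until the projection.
feature_mip = feature_maps.max(axis=0)          # (channels, height, width)

# Pool to a single descriptor for a downstream classifier such as an SVM.
descriptor = feature_mip.mean(axis=(1, 2))      # (channels,)
print(feature_mip.shape, descriptor.shape)
```

The design choice is where the maximum is taken: projecting features rather than pixels lets each slice contribute its strongest response per feature channel, rather than only its brightest voxels.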
|
30
|
Aggarwal R, Sounderajah V, Martin G, Ting DSW, Karthikesalingam A, King D, Ashrafian H, Darzi A. Diagnostic accuracy of deep learning in medical imaging: a systematic review and meta-analysis. NPJ Digit Med 2021; 4:65. [PMID: 33828217 PMCID: PMC8027892 DOI: 10.1038/s41746-021-00438-z] [Citation(s) in RCA: 237] [Impact Index Per Article: 79.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2020] [Accepted: 02/25/2021] [Indexed: 12/19/2022] Open
Abstract
Deep learning (DL) has the potential to transform medical diagnostics. However, the diagnostic accuracy of DL is uncertain. Our aim was to evaluate the diagnostic accuracy of DL algorithms to identify pathology in medical imaging. Searches were conducted in Medline and EMBASE up to January 2020. We identified 11,921 studies, of which 503 were included in the systematic review. Eighty-two studies in ophthalmology, 82 in breast disease and 115 in respiratory disease were included for meta-analysis. Two hundred twenty-four studies in other specialities were included for qualitative review. Peer-reviewed studies that reported on the diagnostic accuracy of DL algorithms to identify pathology using medical imaging were included. Primary outcomes were measures of diagnostic accuracy, study design and reporting standards in the literature. Estimates were pooled using random-effects meta-analysis. In ophthalmology, AUCs ranged between 0.933 and 1 for diagnosing diabetic retinopathy, age-related macular degeneration and glaucoma on retinal fundus photographs and optical coherence tomography. In respiratory imaging, AUCs ranged between 0.864 and 0.937 for diagnosing lung nodules or lung cancer on chest X-ray or CT scan. For breast imaging, AUCs ranged between 0.868 and 0.909 for diagnosing breast cancer on mammogram, ultrasound, MRI and digital breast tomosynthesis. Heterogeneity was high between studies and extensive variation in methodology, terminology and outcome measures was noted. This can lead to an overestimation of the diagnostic accuracy of DL algorithms on medical imaging. There is an immediate need for the development of artificial intelligence-specific EQUATOR guidelines, particularly STARD, in order to provide guidance around key issues in this field.
Affiliation(s)
- Ravi Aggarwal
- Institute of Global Health Innovation, Imperial College London, London, UK
| | | | - Guy Martin
- Institute of Global Health Innovation, Imperial College London, London, UK
| | - Daniel S W Ting
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
| | | | - Dominic King
- Institute of Global Health Innovation, Imperial College London, London, UK
| | - Hutan Ashrafian
- Institute of Global Health Innovation, Imperial College London, London, UK.
| | - Ara Darzi
- Institute of Global Health Innovation, Imperial College London, London, UK
| |
|
31
|
Eskreis-Winkler S, Onishi N, Pinker K, Reiner JS, Kaplan J, Morris EA, Sutton EJ. Using Deep Learning to Improve Nonsystematic Viewing of Breast Cancer on MRI. JOURNAL OF BREAST IMAGING 2021; 3:201-207. [PMID: 38424820 DOI: 10.1093/jbi/wbaa102] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2020] [Indexed: 03/02/2024]
Abstract
OBJECTIVE To investigate the feasibility of using deep learning to identify tumor-containing axial slices on breast MRI images. METHODS This IRB-approved retrospective study included consecutive patients with operable invasive breast cancer undergoing pretreatment breast MRI between January 1, 2014, and December 31, 2017. Axial tumor-containing slices from the first postcontrast phase were extracted. Each axial image was subdivided into two subimages: one of the ipsilateral cancer-containing breast and one of the contralateral healthy breast. Cases were randomly divided into training, validation, and testing sets. A convolutional neural network was trained to classify subimages into "cancer" and "no cancer" categories. Accuracy, sensitivity, and specificity of the classification system were determined using pathology as the reference standard. A two-reader study was performed to measure the time savings of the deep learning algorithm using descriptive statistics. RESULTS Two hundred and seventy-three patients with unilateral breast cancer met study criteria. On the held-out test set, accuracy of the deep learning system for tumor detection was 92.8% (648/706; 95% confidence interval: 89.7%-93.8%). Sensitivity and specificity were 89.5% and 94.3%, respectively. Readers spent 3 to 45 seconds to scroll to the tumor-containing slices without use of the deep learning algorithm. CONCLUSION In breast MR exams containing breast cancer, deep learning can be used to identify the tumor-containing slices. This technology may be integrated into the picture archiving and communication system to bypass scrolling when viewing stacked images, which can be helpful during nonsystematic image viewing, such as during interdisciplinary tumor board meetings.
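Once per-slice "cancer" probabilities are available, jumping directly to the tumor-containing slices is a simple thresholding step. The probabilities below are synthetic and the 0.5 threshold is an assumption, not a value reported in the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-slice "cancer" probabilities from the CNN classifier,
# one score per axial slice of the first post-contrast series.
slice_probs = rng.uniform(0, 0.3, size=120)
slice_probs[55:63] = rng.uniform(0.8, 1.0, size=8)   # tumor-bearing slices

# Jump straight to the suspicious slices instead of scrolling through all 120.
threshold = 0.5
tumor_slices = np.flatnonzero(slice_probs > threshold)
best_slice = int(np.argmax(slice_probs))
print(tumor_slices.min(), tumor_slices.max(), best_slice)
```

A PACS integration would present `best_slice` (or the `tumor_slices` range) as the initial view, replacing the 3 to 45 seconds of scrolling measured in the reader study.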
Affiliation(s)
| | - Natsuko Onishi
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
- University of California, Department of Radiology, San Francisco, CA
| | - Katja Pinker
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
| | - Jeffrey S Reiner
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
| | - Jennifer Kaplan
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
| | - Elizabeth A Morris
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
| | - Elizabeth J Sutton
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
| |
|
32
|
Li J, Wang W, Liao L, Liu X. Analysis of the nonperfused volume ratio of adenomyosis from MRI images based on fewshot learning. Phys Med Biol 2021; 66:045019. [PMID: 33361557 DOI: 10.1088/1361-6560/abd66b] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
The nonperfused volume (NPV) ratio is the key to the success of high intensity focused ultrasound (HIFU) ablation treatment of adenomyosis. However, there are no qualitative interpretation standards for predicting the NPV ratio of adenomyosis using magnetic resonance imaging (MRI) before HIFU ablation treatment, which leads to inter-reader variability. Convolutional neural networks (CNNs) have achieved state-of-the-art performance in automatic disease diagnosis from MRI. Since the use of HIFU to treat adenomyosis is a novel treatment, there is not enough MRI data to support CNNs. We proposed a novel few-shot learning framework that extends CNNs to predict the NPV ratio of HIFU ablation treatment for adenomyosis. We collected a dataset from 208 patients with adenomyosis who underwent MRI examination before and after HIFU treatment. Our proposed method was trained and evaluated by fourfold cross-validation. This framework obtained sensitivities of 85.6%, 89.6% and 92.8% at 0.799, 0.980 and 1.180 FPs per patient. In the receiver operating characteristic analysis for the NPV ratio of adenomyosis, our proposed method achieved areas under the curve of 0.8233, 0.8289, 0.8412, 0.8319, 0.7010, 0.7637, 0.8375, 0.8219, 0.8207 and 0.9812 for the classification of the NPV ratio intervals [0%-10%), [10%-20%), …, [90%-100%], respectively. The present study demonstrated that few-shot learning for NPV ratio prediction of HIFU ablation treatment for adenomyosis may contribute to the selection of eligible patients and the pre-judgment of clinical efficacy.
Affiliation(s)
- Jiaqi Li
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, People's Republic of China
| | - Wei Wang
- Department of Ultrasound, Chinese PLA General Hospital, Beijing, People's Republic of China
| | - Lejian Liao
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, People's Republic of China
| | - Xin Liu
- Department of Ultrasound, Beijing Tiantan Hospital, Capital Medical University, Beijing, People's Republic of China
| |
|
33
|
Zerouaoui H, Idri A. Reviewing Machine Learning and Image Processing Based Decision-Making Systems for Breast Cancer Imaging. J Med Syst 2021; 45:8. [PMID: 33404910 DOI: 10.1007/s10916-020-01689-1] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2020] [Accepted: 12/01/2020] [Indexed: 01/11/2023]
Abstract
Breast cancer (BC) is the leading cause of death among women worldwide. It affects in general women older than 40 years old. Medical image analysis is one of the most promising research areas since it provides facilities for diagnosis and decision-making for several diseases such as BC. This paper conducts a Structured Literature Review (SLR) of the use of Machine Learning (ML) and Image Processing (IP) techniques to deal with BC imaging. A set of 530 papers published between 2000 and August 2019 were selected and analyzed according to ten criteria: year and publication channel, empirical type, research type, medical task, machine learning techniques, datasets used, validation methods, performance measures, and image processing techniques, which include image pre-processing, segmentation, feature extraction and feature selection. Results showed that diagnosis was the most used medical task and that Deep Learning (DL) techniques were largely used to perform classification. Furthermore, we found that classification was the most investigated ML objective, followed by prediction and clustering. Most of the selected studies used mammograms as imaging modalities rather than ultrasound or magnetic resonance imaging, with public or private datasets, MIAS being the most frequently investigated public dataset. As for image processing techniques, the majority of the selected studies pre-process their input images by reducing the noise and normalizing the colors, and some of them use segmentation to extract the region of interest with the thresholding method. For feature extraction, we note that researchers extracted the relevant features using classical feature extraction techniques (e.g., texture features, shape features) or DL techniques (e.g., VGG16, VGG19, ResNet), and finally few papers used feature selection techniques, in particular the filter methods.
Affiliations: Hasnae Zerouaoui (Modeling, Simulation and Data Analysis, Mohammed VI Polytechnic University, Benguerir, Morocco); Ali Idri (Modeling, Simulation and Data Analysis, Mohammed VI Polytechnic University, Benguerir, Morocco, and Software Project Management Research Team, ENSIAS, Mohammed V University in Rabat, Rabat, Morocco).
34
Pathak P, Jalal AS, Rai R. Breast Cancer Image Classification: A Review. Curr Med Imaging 2020; 17:720-740. [PMID: 33371857] [DOI: 10.2174/0929867328666201228125208]
Abstract
BACKGROUND Breast cancer represents uncontrolled growth of breast cells and is the most frequently diagnosed cancer in women worldwide. Early detection improves the chances of survival and increases treatment options. There are various methods for screening breast cancer, such as mammography, ultrasound, computed tomography, and Magnetic Resonance Imaging (MRI). MRI is gaining prominence as an alternative screening tool for early detection and diagnosis of breast cancer. Nevertheless, MRI can hardly be examined without a Computer-Aided Diagnosis (CAD) framework, owing to the vast amount of data. OBJECTIVE This paper aims to cover the approaches used in CAD systems for the detection of breast cancer. METHODS The methods used in CAD systems are categorized into two classes: the conventional approach and the artificial intelligence (AI) approach. RESULTS The conventional approach covers the basic steps of image processing, such as preprocessing, segmentation, feature extraction and classification. The AI approach covers the various convolutional and deep learning networks used for diagnosis. CONCLUSION This review discusses some of the core concepts used in breast cancer imaging and presents a comprehensive review of past efforts to address this problem.
Affiliations: Pooja Pathak (Department of Mathematics, GLA University, Mathura, India); Anand Singh Jalal and Ritu Rai (Department of Computer Engineering & Applications, GLA University, Mathura, India).
35
Artificial intelligence to predict clinical disability in patients with multiple sclerosis using FLAIR MRI. Diagn Interv Imaging 2020; 101:795-802. [DOI: 10.1016/j.diii.2020.05.009]
36
Chassagnon G, Dohan A. Artificial intelligence: from challenges to clinical implementation. Diagn Interv Imaging 2020; 101:763-764. [DOI: 10.1016/j.diii.2020.10.007]
37
Lassau N, Bousaid I, Chouzenoux E, Lamarque J, Charmettant B, Azoulay M, Cotton F, Khalil A, Lucidarme O, Pigneur F, Benaceur Y, Sadate A, Lederlin M, Laurent F, Chassagnon G, Ernst O, Ferreti G, Diascorn Y, Brillet P, Creze M, Cassagnes L, Caramella C, Loubet A, Dallongeville A, Abassebay N, Ohana M, Banaste N, Cadi M, Behr J, Boussel L, Fournier L, Zins M, Beregi J, Luciani A, Cotten A, Meder J. Three artificial intelligence data challenges based on CT and MRI. Diagn Interv Imaging 2020; 101:783-788. [DOI: 10.1016/j.diii.2020.03.006]
38
Min H, McClymont D, Chandra SS, Crozier S, Bradley AP. Automatic lesion detection, segmentation and characterization via 3D multiscale morphological sifting in breast MRI. Biomed Phys Eng Express 2020; 6. [PMID: 35045404] [DOI: 10.1088/2057-1976/abc45c]
Abstract
Previous studies on computer-aided detection/diagnosis (CAD) in 4D breast magnetic resonance imaging (MRI) usually regard lesion detection, segmentation and characterization as separate tasks, and typically require users to manually select 2D MRI slices or regions of interest as the input. In this work, we present a breast MRI CAD system that can handle 4D multimodal breast MRI data and integrates lesion detection, segmentation and characterization with no user intervention. The proposed CAD system consists of three major stages: region candidate generation, feature extraction and region candidate classification. Breast lesions are first extracted as region candidates using the novel 3D multiscale morphological sifting (MMS). The 3D MMS, which uses linear structuring elements to extract lesion-like patterns, can segment lesions from breast images accurately and efficiently. Analytical features are then extracted from all available 4D multimodal breast MRI sequences, including T1-weighted, T2-weighted and DCE sequences, to represent the signal intensity, texture, morphological and enhancement kinetic characteristics of the region candidates. The region candidates are lastly classified as lesion or normal tissue by random under-sampling boosting (RUSBoost), and as malignant or benign lesion by random forest. Evaluated on a breast MRI dataset containing 117 cases with 141 biopsy-proven lesions (95 malignant and 46 benign), the proposed system achieves a true positive rate (TPR) of 0.90 at 3.19 false positives per patient (FPP) for lesion detection and a TPR of 0.91 at a FPP of 2.95 for identifying malignant lesions, without any user intervention. The average Dice similarity index (DSI) is 0.72 ± 0.15 for lesion segmentation. Compared with previously proposed lesion detection, detection-segmentation and detection-characterization systems evaluated on the same breast MRI dataset, the proposed CAD system achieves a favourable performance in breast lesion detection and characterization.
Affiliations: Hang Min, Darryl McClymont, Shekhar S Chandra, and Stuart Crozier (School of Information Technology and Electrical Engineering, University of Queensland, Australia); Andrew P Bradley (Science and Engineering Faculty, Queensland University of Technology, Australia).
39
Gampala S, Vankeshwaram V, Gadula SSP. Is Artificial Intelligence the New Friend for Radiologists? A Review Article. Cureus 2020; 12:e11137. [PMID: 33240726] [PMCID: PMC7682942] [DOI: 10.7759/cureus.11137]
Abstract
Artificial intelligence (AI) is a path-breaking advancement for many industries, including the health care sector. The expeditious development of information technology and data processing has led to the formation of the set of tools known as artificial intelligence. Radiology has long been a portal for medical technological advancement, and AI will likely be no different: it can affect every step of a radiologist's workflow, simplifying activities such as ordering and scheduling, protocoling and acquisition, image interpretation, reporting, communication, and billing. AI has eminent potential to augment efficiency and accuracy throughout radiology, but it also has inherent drawbacks and biases. We collected studies published in the past five years using PubMed as our database and chose studies relevant to artificial intelligence in radiology, focusing on an overview of AI in radiology, the components involved in its functioning, AI assistance in the radiologist's workflow, ethical aspects of AI, and the challenges and biases AI is experiencing, together with some clinical applications of AI. Of the 33 studies, 15 articles discussed the overview and components of AI, five discussed AI's effect on the radiologist's workflow, five related to challenges and biases in AI, two discussed ethical aspects of AI, and six covered practical implications of AI. We found that the application of AI could allow time-dependent tasks to be performed effortlessly, permitting radiologists more time and opportunity to engage in patient care through increased consultation time, improved imaging, and extraction of useful data from those images. AI can only aid radiologists; it will not replace them. Radiologists who use AI to their benefit, rather than avoiding it out of fear, may supersede those who do not. Substantial research should be done on the practical implications of AI algorithms for residency curricula and on the benefits of AI in radiology.
40
Fujioka T, Yashima Y, Oyama J, Mori M, Kubota K, Katsuta L, Kimura K, Yamaga E, Oda G, Nakagawa T, Kitazume Y, Tateishi U. Deep-learning approach with convolutional neural network for classification of maximum intensity projections of dynamic contrast-enhanced breast magnetic resonance imaging. Magn Reson Imaging 2020; 75:1-8. [PMID: 33045323] [DOI: 10.1016/j.mri.2020.10.003]
Abstract
PURPOSE We aimed to evaluate a deep learning approach with convolutional neural networks (CNNs) to discriminate between benign and malignant lesions on maximum intensity projections of dynamic contrast-enhanced breast magnetic resonance imaging (MRI). METHODS We retrospectively gathered maximum intensity projections of dynamic contrast-enhanced breast MRI of 106 benign (including 22 normal) and 180 malignant cases as training and validation data. CNN models were constructed to calculate the probability of malignancy using six CNN architectures (DenseNet121, DenseNet169, InceptionResNetV2, InceptionV3, NasNetMobile, and Xception) trained for 500 epochs, and were then applied to test data of 25 benign (including 12 normal) and 47 malignant cases. Two human readers also interpreted these test data and scored the probability of malignancy for each case using the Breast Imaging Reporting and Data System. Sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC) were calculated. RESULTS The CNN models showed a mean AUC of 0.830 (range, 0.750-0.895); the best model was InceptionResNetV2. This model, Reader 1, and Reader 2 had sensitivities of 74.5%, 72.3%, and 78.7%; specificities of 96.0%, 88.0%, and 80.0%; and AUCs of 0.895, 0.823, and 0.849, respectively. No significant difference arose between the CNN models and the human readers (p > 0.125). CONCLUSION Our CNN models showed diagnostic performance comparable to that of human readers in differentiating between benign and malignant lesions on maximum intensity projections of dynamic contrast-enhanced breast MRI.
Affiliations: Tomoyuki Fujioka, Yuka Yashima, Jun Oyama, Mio Mori, Leona Katsuta, Koichiro Kimura, Emi Yamaga, Yoshio Kitazume, and Ukihide Tateishi (Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan); Kazunori Kubota (Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan, and Department of Radiology, Dokkyo Medical University, Tochigi, Japan); Goshi Oda and Tsuyoshi Nakagawa (Department of Surgery, Breast Surgery, Tokyo Medical and Dental University, Tokyo, Japan).
41
Abstract
Screening for breast cancer reduces breast cancer-related mortality, and earlier detection facilitates less aggressive treatment. Unfortunately, current screening modalities are imperfect, suffering from limited sensitivity and high false-positive rates. Novel techniques in the field of breast imaging may soon play a role in breast cancer screening: digital breast tomosynthesis, contrast material-enhanced spectral mammography, US (automated three-dimensional breast US, transmission tomography, elastography, optoacoustic imaging), MRI (abbreviated and ultrafast, diffusion-weighted imaging), and molecular breast imaging. Artificial intelligence and radiomics have the potential to further improve screening strategies. Furthermore, nonimaging-based screening tests such as liquid biopsy and breathing tests may transform the screening landscape. © RSNA, 2020. Online supplemental material is available for this article.
Affiliations: Ritse M Mann (Department of Radiology, Nuclear Medicine and Anatomy, Radboud University Medical Center, Nijmegen, the Netherlands, and Department of Radiology, the Netherlands Cancer Institute, Amsterdam, the Netherlands); Regina Hooley (Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Conn); Richard G Barr (Department of Radiology, Northeastern Ohio Medical University, Rootstown, Ohio, and Southwoods Imaging, Youngstown, Ohio); Linda Moy (Department of Radiology, New York University Langone School of Medicine, and Department of Radiology, New York University Grossman School of Medicine, Center for Advanced Imaging Innovation and Research, Laura and Isaac Perlmutter Cancer Center, New York, NY).
42
Meyer-Bäse A, Morra L, Meyer-Bäse U, Pinker K. Current Status and Future Perspectives of Artificial Intelligence in Magnetic Resonance Breast Imaging. Contrast Media Mol Imaging 2020; 2020:6805710. [PMID: 32934610] [PMCID: PMC7474774] [DOI: 10.1155/2020/6805710]
Abstract
Recent advances in artificial intelligence (AI) and deep learning (DL) have impacted many scientific fields, including biomedical imaging. Magnetic resonance imaging (MRI) is a well-established method in breast imaging, with several indications including screening, staging, and therapy monitoring. The rapid development and subsequent implementation of AI in clinical breast MRI has the potential to affect clinical decision-making, guide treatment selection, and improve patient outcomes. The goal of this review is to provide a comprehensive picture of the current status and future perspectives of AI in breast MRI. We review DL applications and compare them to standard data-driven techniques, emphasize the important aspect of developing quantitative imaging biomarkers for precision medicine and the potential of breast MRI and DL in this context, and finally discuss future challenges of DL applications for breast MRI and an AI-augmented clinical decision strategy.
Affiliations: Anke Meyer-Bäse (Department of Scientific Computing, Florida State University, Tallahassee, Florida, USA); Lia Morra (Dipartimento di Automatica e Informatica, Politecnico di Torino, Torino, Italy); Uwe Meyer-Bäse (Department of Electrical and Computer Engineering, Florida A&M University and Florida State University, Tallahassee, Florida, USA); Katja Pinker (Department of Biomedical Imaging and Image-Guided Therapy, Division of Molecular and Gender Imaging, Medical University of Vienna, Vienna, Austria, and Department of Radiology, Memorial Sloan-Kettering Cancer Center, New York, New York, USA).
43
Rella R, Bufi E, Belli P, Petta F, Serra T, Masiello V, Scrofani AR, Barone R, Orlandi A, Valentini V, Manfredi R. Association between background parenchymal enhancement and tumor response in patients with breast cancer receiving neoadjuvant chemotherapy. Diagn Interv Imaging 2020; 101:649-655. [PMID: 32654985] [DOI: 10.1016/j.diii.2020.05.010]
Abstract
PURPOSE To analyze the relationships between background parenchymal enhancement (BPE) of the contralateral healthy breast and tumor response after neoadjuvant chemotherapy (NAC) in women with breast cancer. MATERIALS AND METHODS A total of 228 women (mean age, 47.6 ± 10 [SD] years; range: 24-74 years) with invasive breast cancer who underwent NAC were included. All patients underwent breast magnetic resonance imaging (MRI) before and after NAC, and 127 patients underwent MRI before, during (after the 4th cycle of NAC) and after NAC. Quantitative semi-automated analysis of BPE of the contralateral healthy breast was performed. The enhancement level on baseline MRI (baseline BPE) and on MRI after chemotherapy (final BPE), the change in enhancement between baseline and final MRI (total BPE change), and the change between baseline and midline MRI (early BPE change) were recorded. Associations between BPE and tumor response, menopausal status, tumor phenotype, NAC type and tumor stage at diagnosis were searched for. Pathologic complete response (pCR) was defined as the absence of residual invasive cancer cells in the breast and ipsilateral lymph nodes. RESULTS No differences were found in baseline BPE, final BPE, or early and total BPE changes between the pCR and non-pCR groups. Early BPE change was higher in the non-pCR group among patients with stage 3 and 4 breast cancers (P = 0.019) and among human epidermal growth factor receptor 2 (HER2)-negative patients (P = 0.020). CONCLUSION Early reduction of BPE in the contralateral breast during NAC may be an early predictor of loss of tumor response, showing potential as an imaging biomarker of treatment response, especially in women with stage 3 or 4 breast cancers and in HER2-negative breast cancers.
Affiliations: R Rella and E Bufi (UOC di Diagnostica per Immagini ed Interventistica Generale, Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy); P Belli, F Petta, T Serra, A R Scrofani, and R Manfredi (UOC di Diagnostica per Immagini ed Interventistica Generale, Fondazione Policlinico Universitario A. Gemelli IRCCS, and Università Cattolica Sacro Cuore, Rome, Italy); V Masiello, R Barone, and V Valentini (UOC di Radioterapia Oncologica, Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy); A Orlandi (UOC Oncologia Medica, Dipartimento di Scienze Gastroenterologiche, Endocrino-Metaboliche e Nefro-Urologiche, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy).
44
Detection and Diagnosis of Breast Cancer Using Artificial Intelligence Based Assessment of Maximum Intensity Projection Dynamic Contrast-Enhanced Magnetic Resonance Images. Diagnostics (Basel) 2020; 10:330. [PMID: 32443922] [PMCID: PMC7277981] [DOI: 10.3390/diagnostics10050330]
Abstract
We aimed to evaluate an artificial intelligence (AI) system that can detect and diagnose lesions on maximum intensity projections (MIPs) of dynamic contrast-enhanced (DCE) breast magnetic resonance imaging (MRI). We retrospectively gathered MIPs of DCE breast MRI as training and validation data from 30 and 7 normal individuals, 49 and 20 benign cases, and 135 and 45 malignant cases, respectively. Breast lesions were indicated with a bounding box and labeled as benign or malignant by a radiologist, while the AI system was trained with RetinaNet to detect lesions and calculate the probability of malignancy. The AI system was analyzed using test sets of 13 normal, 20 benign, and 52 malignant cases. Four human readers also scored these test data, with and without the assistance of the AI system, for the probability of a malignancy in each breast. Using a cutoff value of 2%, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were 0.926, 0.828, and 0.925 for the AI system; 0.847, 0.841, and 0.884 for human readers without AI; and 0.889, 0.823, and 0.899 for human readers with AI, respectively. The AI system showed better diagnostic performance than the human readers (p = 0.002), and because the AI system increased their performance, the AUC of the human readers was significantly higher with than without the AI system (p = 0.039). Our AI system showed high performance in detecting and diagnosing lesions on MIPs of DCE breast MRI and increased the diagnostic performance of human readers.
45
Cuocolo R, Caruso M, Perillo T, Ugga L, Petretta M. Machine Learning in oncology: A clinical appraisal. Cancer Lett 2020; 481:55-62. [PMID: 32251707] [DOI: 10.1016/j.canlet.2020.03.032]
Abstract
Machine learning (ML) is a branch of artificial intelligence centered on algorithms that need no explicit prior programming but automatically learn from available data, creating decision models to complete tasks. ML-based tools have numerous promising applications in several fields of medicine. Their use has grown with the increased availability of patient data due to technological advances such as digital health records and high-volume information extraction from medical images. Multiple ML algorithms have been proposed for applications in oncology; for instance, they have been employed for oncological risk assessment, automated segmentation, lesion detection, characterization, grading and staging, and prediction of prognosis and therapy response. In the near future, ML could become an essential part of every step of oncological screening strategies and patient management, thus leading to precision medicine.
Affiliations: Renato Cuocolo, Martina Caruso, Teresa Perillo, and Lorenzo Ugga (Department of Advanced Biomedical Sciences, University of Naples "Federico II", Naples, Italy); Mario Petretta (Department of Translational Medical Sciences, University of Naples "Federico II", Naples, Italy).
46

47
Gao X, Wang X. Performance of deep learning for differentiating pancreatic diseases on contrast-enhanced magnetic resonance imaging: A preliminary study. Diagn Interv Imaging 2019; 101:91-100. [PMID: 31375430] [DOI: 10.1016/j.diii.2019.07.002]
Abstract
PURPOSE The purpose of this study was to evaluate the ability of deep learning to differentiate pancreatic diseases on contrast-enhanced magnetic resonance (MR) images with the aid of a generative adversarial network (GAN). MATERIALS AND METHODS A total of 504 patients who underwent T1-weighted contrast-enhanced MR examinations before any treatment were included in this retrospective study. First, the MRI examinations of 398 patients (215 men, 183 women; mean age, 59.14 ± 12.07 [SD] years; range: 16-85 years) from one hospital were used as the training set. The MRI examinations of 50 (26 men, 24 women; mean age, 58.58 ± 13.64 [SD] years; range: 24-85 years) and 56 (30 men, 26 women; mean age, 59.13 ± 11.35 [SD] years; range: 26-80 years) consecutive patients from two hospitals were then collected as the internal and external validation sets, respectively. An InceptionV4 network was trained on the training set augmented with synthetic images from GANs, and its classification performance was assessed at the patch and patient level on both validation sets. The prediction agreement between the convolutional neural network (CNN) and a radiologist was measured with Cohen's kappa coefficient. RESULTS The patch-level average accuracy and micro-averaged area under the receiver operating characteristic curve (AUC) of the InceptionV4 network were 71.56% and 0.9204 (95% confidence interval [CI]: 0.9165-0.9308) for the internal validation set, and 79.46% and 0.9451 (95% CI: 0.9320-0.9523) for the external validation set. The patient-level average accuracy and micro-averaged AUC were 70.00% and 0.8250 (95% CI: 0.8147-0.8326) for the internal validation set, and 76.79% and 0.8646 (95% CI: 0.8489-0.8772) for the external validation set. For the human reader, the average accuracy and micro-averaged AUC for the internal and external validation sets were 82.00% and 0.8950 (95% CI: 0.8817-0.9083), and 83.93% and 0.9063 (95% CI: 0.8968-0.9212), respectively. The Cohen's kappa coefficients between the InceptionV4 network and the human reader for the internal and external validation sets were 0.8339 (95% CI: 0.6991-0.9447) and 0.8862 (95% CI: 0.7759-0.9738), respectively. CONCLUSION Deep learning using a CNN and a GAN has the potential to differentiate pancreatic diseases on contrast-enhanced MR images.
Affiliation(s)
- X Gao
- Shanghai Institute of Medical Imaging, 200032 Shanghai, China; Department of Interventional Radiology, Fudan University Zhongshan Hospital, 200032 Shanghai, China
- X Wang
- Shanghai Institute of Medical Imaging, 200032 Shanghai, China; Department of Interventional Radiology, Fudan University Zhongshan Hospital, 200032 Shanghai, China.
|
48
|
Reig B, Heacock L, Geras KJ, Moy L. Machine learning in breast MRI. J Magn Reson Imaging 2019; 52:998-1018. [PMID: 31276247 DOI: 10.1002/jmri.26852] [Citation(s) in RCA: 85] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2019] [Revised: 06/18/2019] [Accepted: 06/19/2019] [Indexed: 12/13/2022] Open
Abstract
Machine-learning techniques have led to remarkable advances in data extraction and analysis of medical imaging. Applications of machine learning to breast MRI continue to expand rapidly as increasingly accurate 3D breast and lesion segmentation allows the combination of radiologist-level interpretation (eg, BI-RADS lexicon), data from advanced multiparametric imaging techniques, and patient-level data such as genetic risk markers. Advances in breast MRI feature extraction have led to rapid dataset analysis, which offers promise in large pooled multi-institutional data analysis. The objective of this review is to provide an overview of machine-learning and deep-learning techniques for breast MRI, including supervised and unsupervised methods, anatomic breast segmentation, and lesion segmentation. Finally, it explores the role of machine learning, current limitations, and future applications to texture analysis, radiomics, and radiogenomics. Level of Evidence: 3 Technical Efficacy Stage: 2 J. Magn. Reson. Imaging 2020;52:998-1018.
Affiliation(s)
- Beatriu Reig
- The Department of Radiology, New York University School of Medicine, New York, New York, USA
- Laura Heacock
- Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, New York, USA
- Krzysztof J Geras
- Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, New York, USA
- Linda Moy
- Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, New York, USA; Center for Advanced Imaging Innovation and Research (CAI2R), New York University School of Medicine, New York, New York, USA
|