1
Mathur A, Arya N, Pasupa K, Saha S, Roy Dey S, Saha S. Breast cancer prognosis through the use of multi-modal classifiers: current state of the art and the way forward. Brief Funct Genomics 2024:elae015. [PMID: 38688724 DOI: 10.1093/bfgp/elae015]
Abstract
We present a survey of the current state of the art in breast cancer detection and prognosis. We analyze the evolution of Artificial Intelligence-based approaches from uni-modal information to multi-modal detection, and how this paradigm shift improves the efficacy of detection in a manner consistent with clinical observations. We conclude that interpretable AI-based predictions and the ability to handle class imbalance should be treated as priorities.
Affiliation(s)
- Archana Mathur
- Department of Information Science and Engineering, Nitte Meenakshi Institute of Technology, Yelahanka, 560064, Karnataka, India
- Nikhilanand Arya
- School of Computer Engineering, Kalinga Institute of Industrial Technology, Deemed to be University, Bhubaneshwar, 751024, Odisha, India
- Kitsuchart Pasupa
- School of Information Technology, King Mongkut's Institute of Technology Ladkrabang, 1 Soi Chalongkrung 1, 10520, Bangkok, Thailand
- Sriparna Saha
- Computer Science and Engineering, Indian Institute of Technology Patna, Bihta, 801106, Bihar, India
- Sudeepa Roy Dey
- Department of Computer Science and Engineering, PES University, Hosur Road, 560100, Karnataka, India
- Snehanshu Saha
- CSIS and APPCAIR, BITS Pilani K.K Birla Goa Campus, Goa, 403726, Goa, India
- Div of AI Research, HappyMonk AI, Bangalore, 560078, Karnataka, India
2
Farooq S, Del-Valle M, Dos Santos SN, Bernardes ES, Zezell DM. Recognition of breast cancer subtypes using FTIR hyperspectral data. Spectrochim Acta A Mol Biomol Spectrosc 2024; 310:123941. [PMID: 38290283 DOI: 10.1016/j.saa.2024.123941]
Abstract
Fourier-transform infrared (FTIR) spectroscopy is a powerful, non-destructive, highly sensitive, and promising analytical technique that provides spectrochemical signatures of biological samples, in which markers such as carbohydrates, proteins, and the phosphate groups of DNA can be recognized within the biological micro-environment. However, measuring large cells requires excessive time to achieve high-quality images, which hinders clinical use owing to slow data acquisition and a lack of optimized computational procedures. To address these challenges, Machine Learning (ML) technologies can help deliver accurate prognostication of breast cancer (BC) subtypes with high performance. Here, we applied FTIR spectroscopy to identify breast cancer subtypes, differentiating between luminal (BT474) and non-luminal (SKBR3) molecular subtypes. To this end, we tested multivariate classification techniques that extract feature information with a three-dimensional (3D) discriminant-analysis approach, based on 3D principal component analysis-linear discriminant analysis (3D-PCA-LDA) and 3D principal component analysis-quadratic discriminant analysis (3D-PCA-QDA), which improved sensitivity (98%), specificity (94%), and accuracy (98%) compared with conventional unfolded methods. Our results show that 3D-PCA-LDA and 3D-PCA-QDA are potential tools for discriminant analysis of hyperspectral datasets, offering superior classification performance.
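The conventional (unfolded) PCA-LDA/PCA-QDA baselines that the 3D variants are compared against can be sketched in a few lines. The snippet below runs that pipeline on synthetic two-class "spectra"; the data, band location, and shift size are illustrative assumptions, not the study's FTIR measurements, and the paper's 3D extensions are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic stand-in for FTIR spectra: 200 samples x 400 wavenumbers,
# two classes ("luminal" vs "non-luminal") separated by one absorbance band.
X = rng.normal(size=(200, 400))
y = np.repeat([0, 1], 100)
X[y == 1, 50:80] += 1.5  # hypothetical class-specific band

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

accs = {}
for clf in (LinearDiscriminantAnalysis(), QuadraticDiscriminantAnalysis()):
    model = make_pipeline(PCA(n_components=10), clf)  # reduce, then discriminate
    model.fit(X_tr, y_tr)
    accs[type(clf).__name__] = model.score(X_te, y_te)
    print(type(clf).__name__, round(accs[type(clf).__name__], 2))
```

PCA first compresses the collinear spectral channels so that LDA/QDA, which estimate class covariances, remain well conditioned.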
Affiliation(s)
- Sajid Farooq
- Center for Lasers and Applications, Instituto de Pesquisas Energeticas e Nucleares, IPEN-CNEN, Address One, Sao Paulo, 05508-000, Sao Paulo, Brazil
- Matheus Del-Valle
- Center for Lasers and Applications, Instituto de Pesquisas Energeticas e Nucleares, IPEN-CNEN, Address One, Sao Paulo, 05508-000, Sao Paulo, Brazil
- Sofia Nascimento Dos Santos
- Center for Radiopharmaceutics, Instituto de Pesquisas Energeticas e Nucleares, IPEN-CNEN, Address One, Sao Paulo, 05508-000, Sao Paulo, Brazil
- Emerson Soares Bernardes
- Center for Radiopharmaceutics, Instituto de Pesquisas Energeticas e Nucleares, IPEN-CNEN, Address One, Sao Paulo, 05508-000, Sao Paulo, Brazil
- Denise Maria Zezell
- Center for Lasers and Applications, Instituto de Pesquisas Energeticas e Nucleares, IPEN-CNEN, Address One, Sao Paulo, 05508-000, Sao Paulo, Brazil
3
Du Y, Wang D, Liu M, Zhang X, Ren W, Sun J, Yin C, Yang S, Zhang L. Study on the differential diagnosis of benign and malignant breast lesions using a deep learning model based on multimodal images. J Cancer Res Ther 2024; 20:625-632. [PMID: 38687933 DOI: 10.4103/jcrt.jcrt_1796_23]
Abstract
OBJECTIVE To establish a multimodal model for distinguishing benign from malignant breast lesions. MATERIALS AND METHODS Clinical data, mammography, and MRI images (including T2WI, diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC), and DCE-MRI images) of 132 patients with benign or malignant breast lesions were analyzed retrospectively. The region of interest (ROI) in each image was marked and segmented using MATLAB software. Mammography, T2WI, DWI, ADC, and DCE-MRI models based on the ResNet34 network were trained. Using an ensemble learning method, the five models served as base models, and voting was used to construct a multimodal model. The dataset was divided into a training set and a prediction set. The accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of each model were calculated. Diagnostic efficacy was analyzed using receiver operating characteristic (ROC) curves and the area under the curve (AUC), and models were compared with the DeLong test, with statistical significance set at P < 0.05. RESULTS We evaluated the ability of the models to classify benign and malignant tumors on the test set. The AUC values of the multimodal, mammography, T2WI, DWI, ADC, and DCE-MRI models were 0.943, 0.645, 0.595, 0.905, 0.900, and 0.865, respectively. The diagnostic ability of the multimodal model was significantly higher than that of the mammography and T2WI models, but did not differ significantly from that of the DWI, ADC, and DCE-MRI models. CONCLUSION Our deep learning model based on multimodal image training has practical value for diagnosing benign and malignant breast lesions.
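The voting step that combines the five single-modality ResNet34 models can be illustrated with a minimal majority-vote sketch; the per-model predictions below are hypothetical placeholders, not the study's outputs.

```python
import numpy as np

def majority_vote(preds):
    """Combine per-model class predictions, shape (n_models, n_samples), by voting."""
    preds = np.asarray(preds)
    n_classes = preds.max() + 1
    # Count votes per class for each sample (column), then take the argmax.
    votes = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    return votes.argmax(axis=0)

# Hypothetical predictions (0 = benign, 1 = malignant) from the five
# single-modality base models: mammography, T2WI, DWI, ADC, DCE-MRI.
preds = [
    [1, 0, 1, 1, 0, 0],  # mammography
    [1, 0, 0, 1, 0, 1],  # T2WI
    [1, 1, 1, 1, 0, 0],  # DWI
    [0, 0, 1, 1, 0, 0],  # ADC
    [1, 0, 1, 0, 1, 0],  # DCE-MRI
]
print(majority_vote(preds))  # -> [1 0 1 1 0 0]
```

With an odd number of base models, binary votes can never tie, which is one practical reason to ensemble five classifiers rather than four.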
Affiliation(s)
- Yanan Du
- Department of Health Management, The First Affiliated Hospital of Shandong First Medical University and Qianfoshan Hospital, Jinan City, Shandong Province, China
- Dawei Wang
- Department of Health Management, Shandong University of Traditional Chinese Medicine, Jinan City, Shandong Province, China
- Menghan Liu
- Department of Health Management, The First Affiliated Hospital of Shandong First Medical University and Qianfoshan Hospital, Jinan City, Shandong Province, China
- Xiaodong Zhang
- Postgraduate Department, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan City, Shandong Province, China
- Wanqing Ren
- Postgraduate Department, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan City, Shandong Province, China
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University and Qianfoshan Hospital, Jinan City, Shandong Province, China
- Jingxiang Sun
- Postgraduate Department, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan City, Shandong Province, China
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University and Qianfoshan Hospital, Jinan City, Shandong Province, China
- Chao Yin
- Department of Radiology, Yantai Taocun Central Hospital, Yantai City, Shandong Province, China
- Shiwei Yang
- Department of Anorectal Surgery, The First Affiliated Hospital of Shandong First Medical University and Qianfoshan Hospital, Jinan City, Shandong Province, China
- Li Zhang
- Department of Pharmacology, Jinan Central Hospital Affiliated to Shandong First Medical University, Jinan City, Shandong Province, China
4
Zhou XX, Zhang L, Cui QX, Li H, Sang XQ, Zhang HX, Zhu YM, Kuai ZX. A Channel-Dimensional Feature-Reconstructed Deep Learning Model for Predicting Breast Cancer Molecular Subtypes on Overall b-Value Diffusion-Weighted MRI. J Magn Reson Imaging 2024; 59:1425-1435. [PMID: 37403945 DOI: 10.1002/jmri.28895]
Abstract
BACKGROUND Dynamic contrast-enhanced (DCE) MRI commonly outperforms diffusion-weighted (DW) MRI in breast cancer discrimination. However, the side effects of contrast agents limit the use of DCE-MRI, particularly in patients with chronic kidney disease. PURPOSE To develop a novel deep learning model that fully exploits the potential of overall b-value DW-MRI, without the need for a contrast agent, in predicting breast cancer molecular subtypes, and to evaluate its performance against DCE-MRI. STUDY TYPE Prospective. SUBJECTS 486 female breast cancer patients (training/validation/test: 64%/16%/20%). FIELD STRENGTH/SEQUENCE 3.0 T/DW-MRI (13 b-values) and DCE-MRI (one precontrast and five postcontrast phases). ASSESSMENT The breast cancers were divided into four subtypes: luminal A, luminal B, HER2+, and triple negative. A channel-dimensional feature-reconstructed (CDFR) deep neural network (DNN) was proposed to predict these subtypes, with pathological diagnosis as the reference standard. A non-CDFR DNN (NCDFR-DNN) was built for comparison. A mixture ensemble DNN (ME-DNN) integrating two CDFR-DNNs was constructed to identify subtypes on multiparametric MRI (MP-MRI) combining DW-MRI and DCE-MRI. STATISTICAL TESTS Model performance was evaluated using accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). Models were compared using one-way analysis of variance with the least significant difference post hoc test and the DeLong test. P < 0.05 was considered significant. RESULTS On DW-MRI, the CDFR-DNN (accuracies, 0.79-0.80; AUCs, 0.93-0.94) showed significantly better predictive performance than the NCDFR-DNN (accuracies, 0.76-0.78; AUCs, 0.92-0.93). Using the CDFR-DNN, DW-MRI attained predictive performance equal (P = 0.065-1.000) to that of DCE-MRI (accuracies, 0.79-0.80; AUCs, 0.93-0.95). The predictive performance of the ME-DNN on MP-MRI (accuracies, 0.85-0.87; AUCs, 0.96-0.97) was superior to those of both the CDFR-DNN and NCDFR-DNN on either DW-MRI or DCE-MRI. DATA CONCLUSION The CDFR-DNN enabled overall b-value DW-MRI to achieve predictive performance comparable to DCE-MRI. MP-MRI outperformed DW-MRI and DCE-MRI in subtype prediction. LEVEL OF EVIDENCE 2. TECHNICAL EFFICACY Stage 1.
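The abstract does not disclose the CDFR-DNN architecture itself, but the input it operates on — the full b-value DW series treated as the channel dimension of a network input — can be sketched as follows. The specific b-values and image size here are illustrative assumptions; only the count of 13 b-values comes from the abstract.

```python
import numpy as np

# Hypothetical acquisition: one DW image per b-value, 13 b-values in total,
# each a 128 x 128 slice (random placeholders standing in for real images).
b_values = [0, 10, 25, 50, 100, 200, 400, 600, 800, 1000, 1500, 2000, 2500]
rng = np.random.default_rng(0)
dw_images = [rng.random((128, 128)) for _ in b_values]

# Stack the full b-value series into the channel dimension, so a network
# sees every diffusion weighting of a voxel at once (channels-first layout).
x = np.stack(dw_images, axis=0)   # (13, 128, 128)
batch = x[np.newaxis]             # (1, 13, 128, 128), a typical CNN input
print(batch.shape)
```

Treating b-values as channels lets convolutional filters mix signal-decay information across weightings at each spatial location, which is the premise a channel-dimensional reconstruction builds on.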
Affiliation(s)
- Xin-Xiang Zhou
- Imaging Center, Harbin Medical University Cancer Hospital, Harbin, China
- Lan Zhang
- Imaging Center, Harbin Medical University Cancer Hospital, Harbin, China
- Quan-Xiang Cui
- Imaging Center, Harbin Medical University Cancer Hospital, Harbin, China
- Hui Li
- Imaging Center, Harbin Medical University Cancer Hospital, Harbin, China
- Xi-Qiao Sang
- Division of Respiratory Disease, Fourth Affiliated Hospital of Harbin Medical University, Harbin, China
- Hong-Xia Zhang
- Imaging Center, Harbin Medical University Cancer Hospital, Harbin, China
- Yue-Min Zhu
- CREATIS, CNRS UMR 5220-INSERM U1294-University Lyon 1-INSA Lyon-University Jean Monnet Saint-Etienne, Villeurbanne, France
- Zi-Xiang Kuai
- Imaging Center, Harbin Medical University Cancer Hospital, Harbin, China
5
Seth I, Lim B, Joseph K, Gracias D, Xie Y, Ross RJ, Rozen WM. Use of artificial intelligence in breast surgery: a narrative review. Gland Surg 2024; 13:395-411. [PMID: 38601286 PMCID: PMC11002485 DOI: 10.21037/gs-23-414]
Abstract
Background and Objective We have witnessed tremendous advances in artificial intelligence (AI) technologies, and breast surgery, a subspecialty of general surgery, has notably benefited from them. This review aims to evaluate how AI has been integrated into breast surgery practice, to assess its effectiveness in improving surgical outcomes and operational efficiency, and to identify potential areas for future research and application. Methods Two authors independently conducted a comprehensive search of the PubMed, Google Scholar, EMBASE, and Cochrane CENTRAL databases from January 1, 1950, to September 4, 2023, employing keywords pertinent to AI in conjunction with breast surgery or cancer. The search focused on English-language publications; relevance was determined through meticulous screening of titles, abstracts, and full texts, followed by a review of the references within these articles. The review covered studies illustrating applications of AI in breast surgery, from lesion diagnosis to postoperative follow-up; publications focusing specifically on breast reconstruction were excluded. Key Content and Findings AI models have preoperative, intraoperative, and postoperative applications in breast surgery. Using breast imaging scans and patient data, AI models have been designed to predict the risk of breast cancer and determine the need for breast cancer surgery. Using breast imaging scans and histopathological slides, models have been used to detect, classify, segment, grade, and stage breast tumors. Preoperative applications include patient education and the display of expected aesthetic outcomes. Models have also been designed to provide intraoperative assistance for precise tumor resection and margin-status assessment, and AI has been used to predict postoperative complications, survival, and cancer recurrence. Conclusions Further research is required to move AI models from the experimental stage to actual implementation in healthcare. With the rapid evolution of AI, further applications are expected in the coming years, including the direct performance of breast surgery. Breast surgeons should stay abreast of advances in AI applications in breast surgery to provide the best care for their patients.
Affiliation(s)
- Ishith Seth
- Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, Australia
- Central Clinical School at Monash University, The Alfred Centre, Melbourne, Victoria, Australia
- Bryan Lim
- Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, Australia
- Central Clinical School at Monash University, The Alfred Centre, Melbourne, Victoria, Australia
- Konrad Joseph
- Department of Surgery, Port Macquarie Base Hospital, New South Wales, Australia
- Dylan Gracias
- Department of Surgery, Townsville Hospital, Queensland, Australia
- Yi Xie
- Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, Australia
- Richard J. Ross
- Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, Australia
- Central Clinical School at Monash University, The Alfred Centre, Melbourne, Victoria, Australia
- Warren M. Rozen
- Department of Plastic Surgery, Peninsula Health, Melbourne, Victoria, Australia
- Central Clinical School at Monash University, The Alfred Centre, Melbourne, Victoria, Australia
6
Kumar V, Prabha C, Sharma P, Mittal N, Askar SS, Abouhawwash M. Unified deep learning models for enhanced lung cancer prediction with ResNet-50-101 and EfficientNet-B3 using DICOM images. BMC Med Imaging 2024; 24:63. [PMID: 38500083 PMCID: PMC10946139 DOI: 10.1186/s12880-024-01241-4]
Abstract
Significant advancements in machine learning algorithms have the potential to aid in the early detection and prevention of cancer, a devastating disease, yet traditional research methods face obstacles and the amount of cancer-related information is rapidly expanding. The authors developed a support system using three distinct deep-learning models, ResNet-50, EfficientNet-B3, and ResNet-101, along with transfer learning, to predict lung cancer, thereby contributing to better health outcomes and reducing the mortality associated with this condition. The study uses a dataset of 1,000 DICOM lung cancer images from the LIDC-IDRI repository, each classified into one of four categories. Although deep learning is still making progress in its ability to analyze and understand cancer data, this research marks a significant step forward in the fight against cancer. The Fusion Model, like all other models, achieved 100% precision in classifying squamous cells. The Fusion Model and ResNet-50 achieved an overall precision of 90%, closely followed by EfficientNet-B3 and ResNet-101 with slightly lower precision. To prevent overfitting and improve data coverage, the authors implemented a data-augmentation strategy. Training behavior was also examined in relation to the scores achieved, addressing the issue of imprecise accuracy and ultimately contributing to improved health outcomes and a potential reduction in lung cancer mortality.
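Per-class precision of the kind reported above (e.g., 100% for squamous cells) can be computed as in this sketch; the class names and label vectors are hypothetical stand-ins, not the paper's predictions.

```python
import numpy as np
from sklearn.metrics import precision_score

# Hypothetical four-category labels used purely for illustration.
classes = ["category_a", "category_b", "squamous_cell", "category_d"]
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 3, 3, 3])
y_pred = np.array([0, 1, 1, 1, 2, 2, 2, 3, 0, 3])

# average=None returns one precision value per class:
# precision_c = true positives for c / all predictions of c.
per_class = precision_score(y_true, y_pred, average=None, labels=[0, 1, 2, 3])
for name, p in zip(classes, per_class):
    print(f"{name}: {p:.2f}")
```

Here every sample predicted as class 2 ("squamous_cell") truly belongs to it, so its precision is 1.0 even though other classes are imperfect, mirroring how one class can reach 100% precision while overall precision sits near 90%.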
Affiliation(s)
- Vinod Kumar
- Department of Computer Science and Engineering, Chandigarh University, Mohali, Punjab, India
- Chander Prabha
- Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
- Preeti Sharma
- Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
- Nitin Mittal
- Skill Faculty of Engineering and Technology, Shri Vishwakarma Skill University, Palwal, Haryana, India
- S S Askar
- Department of Statistics and Operations Research, College of Science, King Saud University, P.O. Box 2455, 11451, Riyadh, Saudi Arabia
- Mohamed Abouhawwash
- Department of Mathematics, Faculty of Science, Mansoura University, Mansoura, 35516, Egypt
7
Ba ZC, Zhang HX, Liu AY, Zhou XX, Liu L, Wang XY, Nanding A, Sang XQ, Kuai ZX. Combination of DCE-MRI and NME-DWI via Deep Neural Network for Predicting Breast Cancer Molecular Subtypes. Clin Breast Cancer 2024:S1526-8209(24)00079-X. [PMID: 38555225 DOI: 10.1016/j.clbc.2024.03.006]
Abstract
BACKGROUND To explore whether combining dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) and non-mono-exponential (NME) model-based diffusion-weighted imaging (DWI) via a deep neural network (DNN) can improve the prediction of breast cancer molecular subtypes compared with either imaging technique used alone. PATIENTS AND METHODS This prospective study examined 480 breast cancers in 475 patients undergoing DCE-MRI and NME-DWI at 3.0 T. Breast cancers were classified as human epidermal growth factor receptor 2-enriched (HER2-enriched), luminal A, luminal B (HER2-), luminal B (HER2+), or triple-negative subtypes. A total of 20% of cases were withheld as an independent test dataset, and the remaining cases were used to train the DNN with an 80%-20% training-validation split and 5-fold cross-validation. The diagnostic accuracies of the DNN in 5-way subtype classification on the DCE-MRI, NME-DWI, and combined multiparametric-MRI datasets were compared using analysis of variance with the least significant difference post hoc test. Areas under the receiver operating characteristic curves were calculated to assess the performance of the DNN in binary subtype classification across the 3 datasets. RESULTS The 5-way classification accuracies of the DNN on both DCE-MRI (0.71) and NME-DWI (0.64) were significantly lower (P < .05) than on multiparametric-MRI (0.76), while accuracy on DCE-MRI was significantly higher (P < .05) than on NME-DWI. The comparative results of binary classification across the 3 datasets were consistent with the 5-way classification. CONCLUSION The combination of DCE-MRI and NME-DWI via a DNN achieved a significant improvement in breast cancer molecular subtype prediction compared with either imaging technique used alone. Additionally, DCE-MRI outperformed NME-DWI in differentiating subtypes.
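The paper's two evaluation modes, 5-way subtype accuracy and binary subtype AUCs, can be mimicked on synthetic data. The classifier and features below are placeholders for the DNN and MRI inputs, and treating the binary evaluation as one-vs-rest is an assumption, since the abstract does not specify the pairing.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for five molecular subtypes (HER2-enriched, luminal A,
# luminal B HER2-, luminal B HER2+, triple-negative).
X, y = make_classification(n_samples=500, n_features=20, n_informative=10,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)

# 5-way accuracy, plus a macro-averaged one-vs-rest AUC over the subtypes.
acc = clf.score(X_te, y_te)
auc = roc_auc_score(y_te, proba, multi_class="ovr")
print("accuracy:", round(acc, 2))
print("ovr AUC :", round(auc, 2))
```

`multi_class="ovr"` scores each subtype against the rest and averages, which is one standard way to report per-subtype discrimination alongside a single multiclass accuracy.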
Affiliation(s)
- Zhi-Chang Ba
- Imaging Center, Harbin Medical University Cancer Hospital, Harbin, China
- Hong-Xia Zhang
- Imaging Center, Harbin Medical University Cancer Hospital, Harbin, China
- Ao-Yu Liu
- Imaging Center, Harbin Medical University Cancer Hospital, Harbin, China
- Xin-Xiang Zhou
- Imaging Center, Harbin Medical University Cancer Hospital, Harbin, China
- Lu Liu
- Imaging Center, Harbin Medical University Cancer Hospital, Harbin, China
- Xin-Yi Wang
- Imaging Center, Harbin Medical University Cancer Hospital, Harbin, China
- Abiyasi Nanding
- Imaging Center, Harbin Medical University Cancer Hospital, Harbin, China
- Xi-Qiao Sang
- Division of Respiratory Disease, Fourth Affiliated Hospital of Harbin Medical University, Yiyuan street No.37, Nangang District, Harbin, China
- Zi-Xiang Kuai
- Imaging Center, Harbin Medical University Cancer Hospital, Harbin, China
8
Harrison P, Hasan R, Park K. State-of-the-Art of Breast Cancer Diagnosis in Medical Images via Convolutional Neural Networks (CNNs). J Healthc Inform Res 2023; 7:387-432. [PMID: 37927373 PMCID: PMC10620373 DOI: 10.1007/s41666-023-00144-3]
Abstract
Early detection of breast cancer is crucial for a better prognosis. Various studies have detected and localized tumor lesions on images. This narrative review covers studies across five image modalities: histopathological, mammogram, magnetic resonance imaging (MRI), ultrasound, and computed tomography (CT) images, which distinguishes it from other reviews that cover fewer modalities. The goal is to gather the necessary information, such as pre-processing techniques and CNN-based diagnosis techniques for the five modalities, in one place for future studies. Each modality has pros and cons: mammograms may give a high false-positive rate for radiographically dense breasts; ultrasound's low soft-tissue contrast can lead to false detections at an early stage; and MRI provides a three-dimensional volumetric image but is expensive and cannot be used as a routine test. Studies were manually reviewed using particular inclusion and exclusion criteria; as a result, 91 recent studies from 2017 to 2022 that classify and detect tumor lesions on breast cancer images in the five modalities were included. For histopathological images, the maximum accuracy achieved was around 99% and the maximum sensitivity 97.29%, using DenseNet, ResNet34, and ResNet50 architectures. For mammograms, the maximum accuracy was 96.52% using a customized CNN architecture. For MRI, the maximum accuracy was 98.33% using a customized CNN architecture. For ultrasound, the maximum accuracy was around 99% using DarkNet-53, ResNet-50, G-CNN, and VGG. For CT, the maximum sensitivity was 96% using the Xception architecture. Histopathological and ultrasound images achieved higher accuracy, around 99%, using ResNet34, ResNet50, DarkNet-53, G-CNN, and VGG compared with other modalities, for one or more of the following reasons: use of pre-trained or modified architectures with pre-processing techniques, use of two-stage CNNs, and the larger number of studies available for Artificial Intelligence (AI)/machine learning (ML) researchers to reference. One gap we found is that only a single image modality is used for CNN-based diagnosis; in the future, a multi-modality approach could be used to design CNN architectures with higher accuracy.
Affiliation(s)
- Pratibha Harrison
- Department of Computer and Information Science, University of Massachusetts Dartmouth, 285 Old Westport Rd, North Dartmouth, 02747 MA USA
- Rakib Hasan
- Department of Mechanical Engineering, Khulna University of Engineering & Technology, PhulBari Gate, Khulna, 9203 Bangladesh
- Kihan Park
- Department of Mechanical Engineering, University of Massachusetts Dartmouth, 285 Old Westport Rd, North Dartmouth, 02747 MA USA
9
Liu M, Zhang S, Du Y, Zhang X, Wang D, Ren W, Sun J, Yang S, Zhang G. Identification of Luminal A breast cancer by using deep learning analysis based on multi-modal images. Front Oncol 2023; 13:1243126. [PMID: 38044991 PMCID: PMC10691590 DOI: 10.3389/fonc.2023.1243126]
Abstract
Purpose To evaluate the diagnostic performance of a deep learning model based on multi-modal images in identifying the molecular subtype of breast cancer. Materials and methods A total of 158 breast cancer patients (170 lesions; median age, 50.8 ± 11.0 years), including 78 Luminal A and 92 non-Luminal A lesions, were retrospectively analyzed and divided into a training set (n = 100), test set (n = 45), and validation set (n = 25). Mammography (MG) and magnetic resonance imaging (MRI) images were used. Five single-modality models were selected: MG, T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC), and dynamic contrast-enhanced MRI (DCE-MRI). The deep learning network ResNet50 was used as the basic feature-extraction and classification network to construct the molecular subtype identification model. Receiver operating characteristic curves were used to evaluate the prediction efficiency of each model. Results The accuracy, sensitivity, and specificity of the multi-modal model in identifying the Luminal A subtype were 0.711, 0.889, and 0.593, respectively, with an area under the curve (AUC) of 0.802 (95% CI, 0.657-0.906); the accuracy, sensitivity, and AUC were higher than those of any single-modality model, but the specificity was slightly lower than that of the DCE-MRI model. The AUC values of the MG, T2WI, DWI, ADC, and DCE-MRI models were 0.593 (95% CI, 0.436-0.737), 0.700 (95% CI, 0.545-0.827), 0.564 (95% CI, 0.408-0.711), 0.679 (95% CI, 0.523-0.810), and 0.553 (95% CI, 0.398-0.702), respectively. Conclusion The combination of deep learning and multi-modal imaging is of great significance for diagnosing breast cancer subtypes and helping doctors select personalized treatment plans.
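AUCs with 95% confidence intervals, as reported above, are often obtained via a percentile bootstrap. The sketch below applies that recipe to synthetic Luminal A vs non-Luminal A scores; the resampling scheme is an assumption, since the abstract does not state how its CIs were computed, and the labels and scores are placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the AUC."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
        if len(np.unique(y_true[idx])) < 2:  # AUC needs both classes present
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y_true, y_score), (lo, hi)

# Hypothetical model scores for Luminal A (1) vs non-Luminal A (0) lesions.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 120)
score = y * 0.6 + rng.normal(0.0, 0.5, 120)  # informative but noisy score
auc, (lo, hi) = bootstrap_auc_ci(y, score)
print(f"AUC {auc:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The interval endpoints are simply the 2.5th and 97.5th percentiles of the resampled AUCs, which is why bootstrap CIs can be asymmetric around the point estimate.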
Affiliation(s)
- Menghan Liu
- Department of Health Management, The First Affiliated Hospital of Shandong First Medical University & Shandong Engineering Laboratory for Health Management, Shandong Medicine and Health Key Laboratory of Laboratory Medicine, Shandong Provincial Qianfoshan Hospital, Jinan, China
- Shuai Zhang
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- Postgraduate Department, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan, China
- Yanan Du
- Department of Health Management, The First Affiliated Hospital of Shandong First Medical University & Shandong Engineering Laboratory for Health Management, Shandong Medicine and Health Key Laboratory of Laboratory Medicine, Shandong Provincial Qianfoshan Hospital, Jinan, China
- Xiaodong Zhang
- Postgraduate Department, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan, China
- Dawei Wang
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University, Jinan, China
- Wanqing Ren
- Postgraduate Department, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan, China
- Jingxiang Sun
- Postgraduate Department, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan, China
- Shiwei Yang
- Department of Anorectal Surgery, The First Affiliated Hospital of Shandong First Medical University, Jinan, China
- Guang Zhang
- Department of Health Management, The First Affiliated Hospital of Shandong First Medical University & Shandong Engineering Laboratory for Health Management, Shandong Medicine and Health Key Laboratory of Laboratory Medicine, Shandong Provincial Qianfoshan Hospital, Jinan, China
10
You C, Shen Y, Sun S, Zhou J, Li J, Su G, Michalopoulou E, Peng W, Gu Y, Guo W, Cao H. Artificial intelligence in breast imaging: Current situation and clinical challenges. Exploration (Beijing) 2023; 3:20230007. [PMID: 37933287 PMCID: PMC10582610 DOI: 10.1002/exp.20230007]
Abstract
Breast cancer ranks among the most prevalent malignant tumours and is the primary contributor to cancer-related deaths in women. Breast imaging is essential for screening, diagnosis, and therapeutic surveillance. With the increasing demand for precision medicine, the heterogeneous nature of breast cancer makes it necessary to deeply mine and rationally utilize the tremendous amount of breast imaging information. With the rapid advancement of computer science, artificial intelligence (AI) has shown great advantages in processing and mining image information, and a growing number of scholars have begun to study its utility in breast imaging. Here, we provide an overview of breast imaging databases and recent advances in AI research, discuss the challenges and problems in this field, and offer constructive advice for ongoing scientific development from the perspective of the National Natural Science Foundation of China.
Collapse
Affiliation(s)
- Chao You
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
| | - Yiyuan Shen
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
| | - Shiyun Sun
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
| | - Jiayin Zhou
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
| | - Jiawei Li
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
| | - Guanhua Su
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Department of Breast Surgery, Key Laboratory of Breast Cancer in Shanghai, Fudan University Shanghai Cancer Center, Shanghai, China
| | | | - Weijun Peng
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
| | - Yajia Gu
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
| | - Weisheng Guo
- Department of Minimally Invasive Interventional Radiology, Key Laboratory of Molecular Target and Clinical Pharmacology, School of Pharmaceutical Sciences and The Second Affiliated Hospital, Guangzhou Medical University, Guangzhou, China
| | - Heqi Cao
- Department of Health Sciences, National Natural Science Foundation of China, Beijing, China
| |
Collapse
|
11
|
Yue WY, Zhang HT, Gao S, Li G, Sun ZY, Tang Z, Cai JM, Tian N, Zhou J, Dong JH, Liu Y, Bai X, Sheng FG. Predicting Breast Cancer Subtypes Using Magnetic Resonance Imaging Based Radiomics With Automatic Segmentation. J Comput Assist Tomogr 2023; 47:729-737. [PMID: 37707402 PMCID: PMC10510832 DOI: 10.1097/rct.0000000000001474] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Accepted: 02/02/2023] [Indexed: 05/21/2023]
Abstract
OBJECTIVE The aim of the study is to demonstrate whether radiomics based on an automatic segmentation method is feasible for predicting molecular subtypes. METHODS This retrospective study included 516 patients with confirmed breast cancer. An automatic segmentation method (a 3-dimensional UNet-based convolutional neural network trained on our in-house data set) was applied to segment the regions of interest. A set of 1316 radiomics features per region of interest was extracted. Eighteen cross-combination radiomics methods (6 feature selection methods paired with 3 classifiers) were used for model selection. Model classification performance was assessed using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity. RESULTS The average Dice similarity coefficient value of the automatic segmentation was 0.89. The radiomics models were predictive of the 4 molecular subtypes, with the best averages: AUC = 0.8623, accuracy = 0.6596, sensitivity = 0.6383, and specificity = 0.8775. For luminal versus nonluminal subtypes, AUC = 0.8788 (95% confidence interval [CI], 0.8505-0.9071), accuracy = 0.7756, sensitivity = 0.7973, and specificity = 0.7466. For human epidermal growth factor receptor 2 (HER2)-enriched versus non-HER2-enriched subtypes, AUC = 0.8676 (95% CI, 0.8370-0.8982), accuracy = 0.7737, sensitivity = 0.8859, and specificity = 0.7283. For triple-negative breast cancer versus non-triple-negative breast cancer subtypes, AUC = 0.9335 (95% CI, 0.9027-0.9643), accuracy = 0.9110, sensitivity = 0.4444, and specificity = 0.9865. CONCLUSIONS Radiomics based on automatic segmentation of magnetic resonance imaging can noninvasively predict the 4 molecular subtypes of breast cancer and is potentially applicable in large samples.
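For readers implementing such pipelines, the four metrics this abstract reports (AUC, accuracy, sensitivity, specificity) can be computed from first principles. A minimal, illustrative Python sketch (our own helper names, not the authors' code; AUC uses the rank-based Mann-Whitney identity):

```python
# Evaluation metrics for a binary classifier: accuracy, sensitivity and
# specificity from hard predictions, AUC from continuous scores.

def confusion_counts(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def accuracy(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    return (tp + tn) / (tp + tn + fp + fn)

def sensitivity(y_true, y_pred):
    tp, _, _, fn = confusion_counts(y_true, y_pred)
    return tp / (tp + fn)

def specificity(y_true, y_pred):
    _, tn, fp, _ = confusion_counts(y_true, y_pred)
    return tn / (tn + fp)

def auc(y_true, scores):
    # Probability that a random positive outscores a random negative
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

In practice one would use a library implementation (e.g. scikit-learn's `roc_auc_score`), but the definitions above are what those functions compute.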
Collapse
Affiliation(s)
- Wen-Yi Yue
- From the Fifth Medical Center of Chinese PLA General Hospital
- Chinese PLA General Medical School
| | - Hong-Tao Zhang
- From the Fifth Medical Center of Chinese PLA General Hospital
| | - Shen Gao
- From the Fifth Medical Center of Chinese PLA General Hospital
| | - Guang Li
- Keya Medical Technology Co, Ltd, Beijing, China
| | - Ze-Yu Sun
- Keya Medical Technology Co, Ltd, Beijing, China
| | - Zhe Tang
- Keya Medical Technology Co, Ltd, Beijing, China
| | - Jian-Ming Cai
- From the Fifth Medical Center of Chinese PLA General Hospital
| | - Ning Tian
- From the Fifth Medical Center of Chinese PLA General Hospital
| | - Juan Zhou
- From the Fifth Medical Center of Chinese PLA General Hospital
| | - Jing-Hui Dong
- From the Fifth Medical Center of Chinese PLA General Hospital
| | - Yuan Liu
- From the Fifth Medical Center of Chinese PLA General Hospital
| | - Xu Bai
- From the Fifth Medical Center of Chinese PLA General Hospital
| | - Fu-Geng Sheng
- From the Fifth Medical Center of Chinese PLA General Hospital
| |
Collapse
|
12
|
Jin N, Qiao B, Zhao M, Li L, Zhu L, Zang X, Gu B, Zhang H. Predicting cervical lymph node metastasis in OSCC based on computed tomography imaging genomics. Cancer Med 2023; 12:19260-19271. [PMID: 37635388 PMCID: PMC10557859 DOI: 10.1002/cam4.6474] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2023] [Revised: 08/01/2023] [Accepted: 08/15/2023] [Indexed: 08/29/2023] Open
Abstract
BACKGROUND To investigate the correlation between computed tomography (CT) radiomic characteristics and key genes for cervical lymph node metastasis (LNM) in oral squamous cell carcinoma (OSCC). METHODS The region of interest was annotated at the edge of the primary tumor on enhanced CT images from 140 patients with OSCC, and radiomic features were extracted. Ribonucleic acid (RNA) sequencing was performed on pathological sections from 20 patients. The DESeq software package was used to compare differential gene expression between groups. Weighted gene co-expression network analysis was used to construct co-expressed gene modules, and the KEGG and GO databases were used for pathway enrichment analysis of key gene modules. Finally, Pearson correlation coefficients were calculated between key genes of enriched pathways and radiomic features. RESULTS Four hundred and eighty radiomic features were extracted from the enhanced CT images of the 140 patients; seven of these correlated significantly with cervical LNM in OSCC (p < 0.01). A total of 3527 differentially expressed RNAs were screened from the RNA sequencing data of the 20 cases. The feature original_glrlm_RunVariance showed a significant positive correlation with most long noncoding RNAs. CONCLUSIONS OSCC cervical LNM is related to the salivary hair bump signaling pathway and biological process. original_glrlm_RunVariance correlated with LNM and with most differentially expressed long noncoding RNAs.
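The final analysis step described here, correlating radiomic features with gene expression, reduces to computing Pearson correlation coefficients between feature vectors. A minimal illustrative sketch (not the authors' code):

```python
import math

def pearson_r(x, y):
    # Pearson correlation coefficient between two equal-length sequences:
    # covariance divided by the product of standard deviations.
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Each radiomic feature (e.g. a run-variance value per patient) would be correlated against each key gene's expression vector across the same patients.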
Collapse
Affiliation(s)
- Nenghao Jin
- Medical School of Chinese PLA, Beijing, China
- Department of Stomatology, The First Medical Centre, Chinese PLA General Hospital, Beijing, China
| | - Bo Qiao
- Medical School of Chinese PLA, Beijing, China
- Department of Stomatology, The First Medical Centre, Chinese PLA General Hospital, Beijing, China
| | - Min Zhao
- Pharmaceutical Diagnostics, GE Healthcare, Beijing, China
- Research Center of Medical Big Data, Chinese PLA General Hospital, Beijing, China
| | - Liangbo Li
- Medical School of Chinese PLA, Beijing, China
- Department of Stomatology, The First Medical Centre, Chinese PLA General Hospital, Beijing, China
| | - Liang Zhu
- Medical School of Chinese PLA, Beijing, China
- Department of Stomatology, The First Medical Centre, Chinese PLA General Hospital, Beijing, China
| | - Xiaoyi Zang
- Medical School of Chinese PLA, Beijing, China
- Department of Stomatology, The First Medical Centre, Chinese PLA General Hospital, Beijing, China
| | - Bin Gu
- Department of Stomatology, The First Medical Centre, Chinese PLA General Hospital, Beijing, China
| | - Haizhong Zhang
- Department of Stomatology, The First Medical Centre, Chinese PLA General Hospital, Beijing, China
| |
Collapse
|
13
|
Liu Z, Duan T, Zhang Y, Weng S, Xu H, Ren Y, Zhang Z, Han X. Radiogenomics: a key component of precision cancer medicine. Br J Cancer 2023; 129:741-753. [PMID: 37414827 PMCID: PMC10449908 DOI: 10.1038/s41416-023-02317-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2022] [Revised: 05/02/2023] [Accepted: 06/12/2023] [Indexed: 07/08/2023] Open
Abstract
Radiogenomics, focusing on the relationship between genomics and imaging phenotypes, has been widely applied to address tumour heterogeneity and predict immune responsiveness and progression. It is an inevitable consequence of current trends in precision medicine, as radiogenomics costs less than traditional genetic sequencing and provides access to whole-tumour information rather than limited biopsy specimens. By providing voxel-by-voxel genetic information, radiogenomics can allow tailored therapy targeting a complete, heterogeneous tumour or set of tumours. In addition to quantifying lesion characteristics, radiogenomics can also be used to distinguish benign from malignant entities, as well as patient characteristics, to better stratify patients according to disease risk, thereby enabling more precise imaging and screening. Here, we characterise the application of radiogenomics in precision medicine using a multi-omic approach. We outline the main applications of radiogenomics in diagnosis, treatment planning and evaluation in the field of oncology, with the aim of developing quantitative and personalised medicine. Finally, we discuss the challenges in the field of radiogenomics and the scope and clinical applicability of these methods.
Collapse
Affiliation(s)
- Zaoqu Liu
- Department of Interventional Radiology, The First Affiliated Hospital of Zhengzhou University, 450052, Zhengzhou, Henan, China
- Interventional Institute of Zhengzhou University, 450052, Zhengzhou, Henan, China
- Interventional Treatment and Clinical Research Center of Henan Province, 450052, Zhengzhou, Henan, China
| | - Tian Duan
- Department of Interventional Radiology, The First Affiliated Hospital of Zhengzhou University, 450052, Zhengzhou, Henan, China
| | - Yuyuan Zhang
- Department of Interventional Radiology, The First Affiliated Hospital of Zhengzhou University, 450052, Zhengzhou, Henan, China
| | - Siyuan Weng
- Department of Interventional Radiology, The First Affiliated Hospital of Zhengzhou University, 450052, Zhengzhou, Henan, China
| | - Hui Xu
- Department of Interventional Radiology, The First Affiliated Hospital of Zhengzhou University, 450052, Zhengzhou, Henan, China
| | - Yuqing Ren
- Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Zhengzhou University, 450052, Zhengzhou, Henan, China
| | - Zhenyu Zhang
- Department of Neurosurgery, The First Affiliated Hospital of Zhengzhou University, 450052, Zhengzhou, Henan, China.
| | - Xinwei Han
- Department of Interventional Radiology, The First Affiliated Hospital of Zhengzhou University, 450052, Zhengzhou, Henan, China.
- Interventional Institute of Zhengzhou University, 450052, Zhengzhou, Henan, China.
- Interventional Treatment and Clinical Research Center of Henan Province, 450052, Zhengzhou, Henan, China.
| |
Collapse
|
14
|
Sun R, Wei C, Jiang Z, Huang G, Xie Y, Nie S. Weakly Supervised Breast Lesion Detection in Dynamic Contrast-Enhanced MRI. J Digit Imaging 2023; 36:1553-1564. [PMID: 37253896 PMCID: PMC10406986 DOI: 10.1007/s10278-023-00846-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2022] [Revised: 05/05/2023] [Accepted: 05/08/2023] [Indexed: 06/01/2023] Open
Abstract
Currently, obtaining accurate medical annotations requires substantial labor and time, which largely limits the development of supervised learning-based tumor detection tasks. In this work, we investigated a weakly supervised learning model for detecting breast lesions in dynamic contrast-enhanced MRI (DCE-MRI) with only image-level labels. Two hundred fifty-four normal and 398 abnormal cases with pathologically confirmed lesions were retrospectively enrolled into the breast dataset, which was divided into the training set (80%), validation set (10%), and testing set (10%) at the patient level. First, the second image series S2 after the injection of a contrast agent was acquired from the 3.0-T, T1-weighted dynamic enhanced MR imaging sequences. Second, a feature pyramid network (FPN) with a convolutional block attention module (CBAM) was proposed to extract multi-scale feature maps from the modified classification network VGG16. Then, initial location information was obtained from the heatmaps generated using the layer class activation mapping algorithm (Layer-CAM). Finally, the detection results for breast lesions were refined by a conditional random field (CRF). Accuracy, sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve (AUC) were utilized for evaluation of image-level classification. Average precision (AP) was estimated for breast lesion localization. DeLong's test was used to compare the AUCs of different models for significance. The proposed model was effective, with an accuracy of 95.2%, sensitivity of 91.6%, specificity of 99.2%, and AUC of 0.986. The AP for breast lesion detection was 84.1% using weakly supervised learning. Weakly supervised learning based on FPN combined with Layer-CAM facilitated automatic detection of breast lesions.
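The localization step in such weakly supervised pipelines, turning a class-activation heatmap into a candidate lesion region, can be sketched as thresholding the heatmap and taking the bounding box of the supra-threshold pixels. This is a simplified stand-in for the Layer-CAM + CRF refinement described above; the function name and threshold are our own illustrative choices:

```python
def heatmap_to_bbox(heatmap, rel_threshold=0.5):
    # Keep pixels whose activation is at least rel_threshold * the peak
    # activation, and return their bounding box as
    # (row_min, row_max, col_min, col_max), or None if nothing survives.
    peak = max(max(row) for row in heatmap)
    coords = [(r, c)
              for r, row in enumerate(heatmap)
              for c, v in enumerate(row)
              if v >= rel_threshold * peak]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return min(rows), max(rows), min(cols), max(cols)
```

A CRF, as used in the paper, would additionally smooth this region using image intensities rather than relying on a hard threshold alone.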
Collapse
Affiliation(s)
- Rong Sun
- School of Health Science and Engineering, University of Shanghai for Science and Technology, No. 516 Jun-Gong Road, Shanghai, 200093, China
| | - Chuanling Wei
- School of Health Science and Engineering, University of Shanghai for Science and Technology, No. 516 Jun-Gong Road, Shanghai, 200093, China
| | - Zhuoyun Jiang
- School of Health Science and Engineering, University of Shanghai for Science and Technology, No. 516 Jun-Gong Road, Shanghai, 200093, China
| | - Gang Huang
- Shanghai University of Medicine & Health Sciences, Shanghai, China
| | - Yuanzhong Xie
- Medical Imaging Center, Tai'an Central Hospital, No. 29 Long-Tan Road, Shandong, 271099, China.
| | - Shengdong Nie
- School of Health Science and Engineering, University of Shanghai for Science and Technology, No. 516 Jun-Gong Road, Shanghai, 200093, China.
| |
Collapse
|
15
|
Kovačević L, Štajduhar A, Stemberger K, Korša L, Marušić Z, Prutki M. Breast Cancer Surrogate Subtype Classification Using Pretreatment Multi-Phase Dynamic Contrast-Enhanced Magnetic Resonance Imaging Radiomics: A Retrospective Single-Center Study. J Pers Med 2023; 13:1150. [PMID: 37511763 PMCID: PMC10381456 DOI: 10.3390/jpm13071150] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2023] [Revised: 07/12/2023] [Accepted: 07/13/2023] [Indexed: 07/30/2023] Open
Abstract
This study aimed to explore the potential of multi-phase dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) radiomics for classifying breast cancer surrogate subtypes. This retrospective study analyzed 360 breast cancers from 319 patients who underwent pretreatment DCE-MRI between January 2015 and January 2019. The cohort consisted of 33 triple-negative, 26 human epidermal growth factor receptor 2 (HER2)-positive, 109 luminal A-like, 144 luminal B-like HER2-negative, and 48 luminal B-like HER2-positive lesions. A total of 1781 radiomic features were extracted from manually segmented breast cancers in each DCE-MRI sequence. The model was internally validated and selected using ten-times-repeated five-fold cross-validation on the primary cohort, with further evaluation using a validation cohort. The most successful models were logistic regression models applied to the third post-contrast subtraction images. These models exhibited the highest area under the curve (AUC) for discriminating between luminal A-like vs. others (AUC: 0.78), luminal B-like HER2-negative vs. others (AUC: 0.57), luminal B-like HER2-positive vs. others (AUC: 0.60), HER2-positive vs. others (AUC: 0.81), and triple-negative vs. others (AUC: 0.83). In conclusion, the radiomic features extracted from multi-phase DCE-MRI are promising for discriminating between breast cancer subtypes. The best-performing models relied on tissue changes observed during the mid-stage of the imaging process.
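The validation protocol used here, repeated five-fold cross-validation, reshuffles the cohort on each repeat and partitions it into disjoint test folds. A minimal sketch of the index generation (our own illustrative helper; a library class such as scikit-learn's `RepeatedKFold` would be used in practice):

```python
import random

def make_repeated_folds(n_samples, n_folds=5, n_repeats=10, seed=0):
    # Yield (repeat, fold, train_idx, test_idx): each repeat reshuffles
    # the sample indices and splits them into n_folds disjoint test sets.
    rng = random.Random(seed)
    for rep in range(n_repeats):
        idx = list(range(n_samples))
        rng.shuffle(idx)
        for fold in range(n_folds):
            test = idx[fold::n_folds]          # every n_folds-th index
            test_set = set(test)
            train = [i for i in idx if i not in test_set]
            yield rep, fold, train, test
```

Model selection then averages a metric such as AUC over all repeat/fold test sets before the chosen model is evaluated on the held-out validation cohort.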
Collapse
Affiliation(s)
- Lucija Kovačević
- Clinical Department of Diagnostic and Interventional Radiology, University Hospital Centre Zagreb, Kispaticeva 12, 10000 Zagreb, Croatia; (L.K.); (M.P.)
| | - Andrija Štajduhar
- Department for Medical Statistics, Epidemiology and Medical Informatics School of Medicine, University of Zagreb, Salata 12, 10000 Zagreb, Croatia
| | - Karlo Stemberger
- Clinical Department of Diagnostic and Interventional Radiology, University Hospital Centre Zagreb, Kispaticeva 12, 10000 Zagreb, Croatia; (L.K.); (M.P.)
| | - Lea Korša
- Clinical Department of Pathology and Cytology, University Hospital Centre Zagreb, Kispaticeva 12, 10000 Zagreb, Croatia
| | - Zlatko Marušić
- Clinical Department of Pathology and Cytology, University Hospital Centre Zagreb, Kispaticeva 12, 10000 Zagreb, Croatia
| | - Maja Prutki
- Clinical Department of Diagnostic and Interventional Radiology, University Hospital Centre Zagreb, Kispaticeva 12, 10000 Zagreb, Croatia; (L.K.); (M.P.)
- School of Medicine, University of Zagreb, Salata 3, 10000 Zagreb, Croatia
| |
Collapse
|
16
|
Zhao F, Nie J, Ma M, Chen X, He X, Wang B, Hou Y. Assessing the Role of Different Heterogeneous Regions in DCE-MRI for Predicting Molecular Subtypes of Breast Cancer based on Network Architecture Search and Vision Transformer. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2023; 2023:1-4. [PMID: 38083342 DOI: 10.1109/embc40787.2023.10340066] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
Breast cancer, the most common female malignancy, is highly heterogeneous, manifesting as different molecular subtypes. It is clinically important to distinguish between these molecular subtypes due to marked differences in prognosis, treatment and survival outcomes. In this study, we first performed convex analysis of mixtures (CAM) on both intratumoral and peritumoral regions in DCE-MRI to generate multiple heterogeneous regions. Then, we developed a vision transformer (ViT)-based DL model and performed network architecture search (NAS) to evaluate all combinations of the different heterogeneous regions for predicting molecular subtypes of breast cancer. Experimental results showed that the input plasma from both peritumoral and intratumoral regions, and the fast-flow kinetics from intratumoral regions, were critical for predicting the different molecular subtypes, achieving an area under the receiver operating characteristic curve (AUROC) value of 0.66-0.68. Clinical Relevance: This study reduces the redundancy in multiple heterogeneous subregions and supports the precise prediction of molecular subtypes, which is of potential importance for the medical care and treatment planning of patients with breast cancer.
Collapse
|
17
|
Zhang X, Dong X, Saripan MIB, Du D, Wu Y, Wang Z, Cao Z, Wen D, Liu Y, Marhaban MH. Deep learning PET/CT-based radiomics integrates clinical data: A feasibility study to distinguish between tuberculosis nodules and lung cancer. Thorac Cancer 2023. [PMID: 37183577 DOI: 10.1111/1759-7714.14924] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2023] [Revised: 04/21/2023] [Accepted: 04/22/2023] [Indexed: 05/16/2023] Open
Abstract
BACKGROUND Radiomic diagnosis models generally consider only a single dimension of information, leading to limitations in their diagnostic accuracy and reliability. The integration of multiple dimensions of information into a deep learning model has the potential to improve its diagnostic capabilities. The purpose of this study was to evaluate the performance of deep learning models in distinguishing tuberculosis (TB) nodules from lung cancer (LC) based on deep learning features, radiomic features, and clinical information. METHODS Positron emission tomography (PET) and computed tomography (CT) image data from 97 patients with LC and 77 patients with TB nodules were collected. One hundred radiomic features were extracted from both PET and CT imaging using the pyradiomics platform, and 2048 deep learning features were obtained through a residual neural network approach. Four models were compared: a traditional machine learning model with radiomic features as input (traditional radiomics), a deep learning model with image features as separate inputs (deep convolutional neural networks [DCNN]), a deep learning model with two inputs, radiomic features and deep learning features (radiomics-DCNN), and a deep learning model with radiomic features, deep learning features, and clinical information as inputs (integrated model). The models were evaluated using area under the curve (AUC), sensitivity, accuracy, specificity, and F1-score metrics. RESULTS The results of the classification of TB nodules and LC showed that the integrated model achieved an AUC of 0.84 (0.82-0.88), sensitivity of 0.85 (0.80-0.88), and specificity of 0.84 (0.83-0.87), performing better than the other models. CONCLUSION The integrated model was found to be the best classification model for the diagnosis of TB nodules and solid LC.
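The "integrated model" described above fuses three feature sets per patient; the simplest form of such fusion is concatenation into one flat input vector. A toy sketch using the dimensions reported in the abstract (100 radiomic and 2048 deep features; the helper name and clinical variables are our own illustrative assumptions):

```python
def fuse_features(radiomic, deep, clinical):
    # Late fusion by concatenation: one flat feature vector per patient,
    # to be fed into the downstream classifier.
    return list(radiomic) + list(deep) + list(clinical)

# Toy per-patient inputs with the dimensions from the abstract.
radiomic = [0.0] * 100     # pyradiomics features (PET + CT)
deep = [0.0] * 2048        # residual-network features
clinical = [63, 1]         # illustrative clinical variables, e.g. age, sex
fused = fuse_features(radiomic, deep, clinical)
```

Real multi-input deep models often fuse at an intermediate layer rather than at the input, but the dimensional bookkeeping is the same.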
Collapse
Affiliation(s)
- Xiaolei Zhang
- Faculty of Engineering, Universiti Putra Malaysia, Serdang, Malaysia
- Department of Biomedical Engineering, Chengde Medical University, Chengde, Hebei, China
| | - Xianling Dong
- Department of Biomedical Engineering, Chengde Medical University, Chengde, Hebei, China
- Hebei International Research Center of Medical Engineering and Hebei Provincial Key Laboratory of Nerve Injury and Repair, Chengde Medical University, Chengde, Hebei, China
| | | | - Dongyang Du
- School of Biomedical Engineering and Guangdong Province Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
| | - Yanjun Wu
- Department of Biomedical Engineering, Chengde Medical University, Chengde, Hebei, China
| | - Zhongxiao Wang
- Department of Biomedical Engineering, Chengde Medical University, Chengde, Hebei, China
| | - Zhendong Cao
- Department of Radiology, the Affiliated Hospital of Chengde Medical University, Chengde, China
| | - Dong Wen
- Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing, China
| | - Yanli Liu
- Department of Biomedical Engineering, Chengde Medical University, Chengde, Hebei, China
| | | |
Collapse
|
18
|
Zhang T, Tan T, Han L, Appelman L, Veltman J, Wessels R, Duvivier KM, Loo C, Gao Y, Wang X, Horlings HM, Beets-Tan RGH, Mann RM. Predicting breast cancer types on and beyond molecular level in a multi-modal fashion. NPJ Breast Cancer 2023; 9:16. [PMID: 36949047 PMCID: PMC10033710 DOI: 10.1038/s41523-023-00517-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2022] [Accepted: 02/21/2023] [Indexed: 03/24/2023] Open
Abstract
Accurately determining the molecular subtypes of breast cancer is important for the prognosis of breast cancer patients and can guide treatment selection. In this study, we develop a deep learning-based model for predicting the molecular subtypes of breast cancer directly from diagnostic mammography and ultrasound images. Multi-modal deep learning with intra- and inter-modality attention modules (MDL-IIA) is proposed to extract the important relations between mammography and ultrasound for this task. MDL-IIA achieves the best diagnostic performance among the compared models in predicting 4-category molecular subtypes, with a Matthews correlation coefficient (MCC) of 0.837 (95% confidence interval [CI]: 0.803, 0.870). The MDL-IIA model can also discriminate between luminal and non-luminal disease with an area under the receiver operating characteristic curve of 0.929 (95% CI: 0.903, 0.951). These results significantly outperform clinicians' predictions based on radiographic imaging. Beyond molecular-level testing, when evaluated against gene-level ground truth, our method can bypass the inherent uncertainty of immunohistochemistry testing. This work thus provides a noninvasive method to predict the molecular subtypes of breast cancer, potentially guiding treatment selection for breast cancer patients and providing decision support for clinicians.
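The headline metric here, the Matthews correlation coefficient, is computed from confusion-matrix counts. A minimal binary-case sketch (the paper reports the 4-category generalisation, which follows the same idea; this helper is illustrative, not the authors' code):

```python
import math

def mcc(y_true, y_pred):
    # Binary Matthews correlation coefficient: +1 is perfect prediction,
    # 0 is chance level, -1 is total disagreement.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

Unlike plain accuracy, MCC stays informative under the class imbalance typical of subtype cohorts, which is why studies like this one report it.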
Collapse
Affiliation(s)
- Tianyu Zhang
- Department of Radiology, Netherlands Cancer Institute (NKI), Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- GROW School for Oncology and Development Biology, Maastricht University, P. O. Box 616, 6200 MD, Maastricht, The Netherlands
- Department of Diagnostic Imaging, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands
| | - Tao Tan
- Department of Radiology, Netherlands Cancer Institute (NKI), Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands.
- Faculty of Applied Sciences, Macao Polytechnic University, 999078, Macao SAR, China.
| | - Luyi Han
- Department of Radiology, Netherlands Cancer Institute (NKI), Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Department of Diagnostic Imaging, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands
| | - Linda Appelman
- Department of Diagnostic Imaging, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands
| | - Jeroen Veltman
- Department of Radiology, Hospital Group Twente (ZGT), Almelo, The Netherlands
- Multi-Modality Medical Imaging Group, TechMed Centre, University of Twente, Enschede, The Netherlands
| | - Ronni Wessels
- Department of Radiology, Haga Teaching Hospital, The Hague, The Netherlands
| | - Katya M Duvivier
- Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
| | - Claudette Loo
- Department of Radiology, Netherlands Cancer Institute (NKI), Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
| | - Yuan Gao
- Department of Radiology, Netherlands Cancer Institute (NKI), Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- GROW School for Oncology and Development Biology, Maastricht University, P. O. Box 616, 6200 MD, Maastricht, The Netherlands
- Department of Diagnostic Imaging, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands
| | - Xin Wang
- Department of Radiology, Netherlands Cancer Institute (NKI), Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- GROW School for Oncology and Development Biology, Maastricht University, P. O. Box 616, 6200 MD, Maastricht, The Netherlands
- Department of Diagnostic Imaging, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands
| | - Hugo M Horlings
- Division of Pathology, Netherlands Cancer Institute (NKI), Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
| | - Regina G H Beets-Tan
- Department of Radiology, Netherlands Cancer Institute (NKI), Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- GROW School for Oncology and Development Biology, Maastricht University, P. O. Box 616, 6200 MD, Maastricht, The Netherlands
| | - Ritse M Mann
- Department of Radiology, Netherlands Cancer Institute (NKI), Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Department of Diagnostic Imaging, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands
| |
Collapse
|
19
|
Field EL, Tam W, Moore N, McEntee M. Efficacy of Artificial Intelligence in the Categorisation of Paediatric Pneumonia on Chest Radiographs: A Systematic Review. CHILDREN 2023; 10:children10030576. [PMID: 36980134 PMCID: PMC10047666 DOI: 10.3390/children10030576] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Revised: 03/04/2023] [Accepted: 03/15/2023] [Indexed: 03/19/2023]
Abstract
This study aimed to systematically review the literature to synthesise and summarise the evidence on the efficacy of artificial intelligence (AI) in classifying paediatric pneumonia on chest radiographs (CXRs). Following the initial search for studies matching the pre-set criteria, data were extracted using a data extraction tool, and the included studies were assessed with critical appraisal tools and for risk of bias. Results were aggregated, and the outcome measures analysed included sensitivity, specificity, accuracy, and area under the curve (AUC). Five studies met the inclusion criteria. The highest sensitivity (96.3%) was achieved by an ensemble AI algorithm. DenseNet201 obtained the highest specificity and accuracy (94%, 95%). The highest AUC (96.2%) was achieved by the VGG16 algorithm. Some of the AI models achieved close to 100% diagnostic accuracy. To assess the efficacy of AI in a clinical setting, these AI models should be compared with the performance of radiologists. The included and evaluated AI algorithms showed promising results. These algorithms could potentially ease and speed up diagnosis once the studies are replicated and their performance is assessed in clinical settings, potentially saving millions of lives.
Collapse
Affiliation(s)
- Erica Louise Field
- Discipline of Medical Imaging and Radiation Therapy, University College Cork, College Road, T12 K8AF Cork, Ireland
| | - Winnie Tam
- Department of Midwifery and Radiography, University of London, Northampton Square, London EC1V 0HB, UK
- Correspondence:
| | - Niamh Moore
- Discipline of Medical Imaging and Radiation Therapy, University College Cork, College Road, T12 K8AF Cork, Ireland
| | - Mark McEntee
- Discipline of Medical Imaging and Radiation Therapy, University College Cork, College Road, T12 K8AF Cork, Ireland
| |
Collapse
|
20
|
Nasser M, Yusof UK. Deep Learning Based Methods for Breast Cancer Diagnosis: A Systematic Review and Future Direction. Diagnostics (Basel) 2023; 13:diagnostics13010161. [PMID: 36611453 PMCID: PMC9818155 DOI: 10.3390/diagnostics13010161] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2022] [Revised: 12/19/2022] [Accepted: 12/19/2022] [Indexed: 01/06/2023] Open
Abstract
Breast cancer is one of the precarious conditions that affect women, and a substantive cure has not yet been discovered for it. With the advent of Artificial intelligence (AI), recently, deep learning techniques have been used effectively in breast cancer detection, facilitating early diagnosis and therefore increasing the chances of patients' survival. Compared to classical machine learning techniques, deep learning requires less human intervention for similar feature extraction. This study presents a systematic literature review on the deep learning-based methods for breast cancer detection that can guide practitioners and researchers in understanding the challenges and new trends in the field. Particularly, different deep learning-based methods for breast cancer detection are investigated, focusing on the genomics and histopathological imaging data. The study specifically adopts the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), which offer a detailed analysis and synthesis of the published articles. Several studies were searched and gathered, and after the eligibility screening and quality evaluation, 98 articles were identified. The results of the review indicated that the Convolutional Neural Network (CNN) is the most accurate and extensively used model for breast cancer detection, and the accuracy metrics are the most popular method used for performance evaluation. Moreover, datasets utilized for breast cancer detection and the evaluation metrics are also studied. Finally, the challenges and future research direction in breast cancer detection based on deep learning models are also investigated to help researchers and practitioners acquire in-depth knowledge of and insight into the area.
Collapse
|
21
|
Sun L, Tian H, Ge H, Tian J, Lin Y, Liang C, Liu T, Zhao Y. Cross-attention multi-branch CNN using DCE-MRI to classify breast cancer molecular subtypes. Front Oncol 2023; 13:1107850. [PMID: 36959806 PMCID: PMC10028183 DOI: 10.3389/fonc.2023.1107850] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2022] [Accepted: 02/20/2023] [Indexed: 03/09/2023] Open
Abstract
Purpose The aim of this study is to improve the accuracy of classifying luminal or non-luminal subtypes of breast cancer using computer algorithms based on DCE-MRI, and to validate the diagnostic efficacy of the model by considering the patient's age at menarche and nodule size. Methods DCE-MRI images of patients with non-specific invasive breast cancer admitted to the Second Affiliated Hospital of Dalian Medical University were collected. There were 160 cases in total: 84 cases of luminal type (luminal A and luminal B) and 76 cases of non-luminal type (HER2-overexpressing and triple negative). Patients were grouped according to thresholds of nodule size of 20 mm and age at menarche of 14 years. A cross-attention multi-branch network (CAMBNET) was proposed based on the dataset to predict the molecular subtypes of breast cancer. Diagnostic performance was assessed by accuracy, sensitivity, specificity, F1 and area under the ROC curve (AUC), and the model was visualized with Grad-CAM. Results Several classical deep learning models were included for comparison of diagnostic performance. Using 5-fold cross-validation on the test dataset, all results of CAMBNET were significantly higher than those of the compared deep learning models. The average prediction recall, accuracy, precision, and AUC for luminal and non-luminal types of the dataset were 89.11%, 88.44%, 88.52%, and 96.10%, respectively. For patients with tumor size <20 mm, CAMBNET had an AUC of 83.45% and an ACC of 90.29% for detecting triple-negative breast cancer. When classifying luminal from non-luminal subtypes for patients with age at menarche of 14 years, our CAMBNET model achieved an ACC of 92.37%, precision of 92.42%, recall of 93.33%, F1 of 92.33%, and AUC of 99.95%. Conclusions CAMBNET can be applied to molecular subtype classification of breast cancer. For patients with menarche at 14 years old, our model can yield more accurate results when classifying luminal and non-luminal subtypes. For patients with tumor sizes ≤20 mm, our model can yield more accurate results in detecting triple-negative breast cancer, improving patient prognosis and survival.
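To illustrate the cross-attention idea behind a multi-branch design such as CAMBNET (this is a minimal NumPy sketch; the function names, shapes, and residual fusion are our illustrative assumptions, not the paper's actual architecture), one branch's features can attend over another's before fusion:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(a, b, Wq, Wk, Wv):
    """Queries come from branch a; keys/values come from branch b.
    The attended features are fused back into a residually."""
    q, k, v = a @ Wq, b @ Wk, b @ Wv              # project both branches
    scores = softmax(q @ k.T / np.sqrt(q.shape[-1]))  # (len_a, len_b) weights
    return a + scores @ v                          # residual fusion
```

Each row of `scores` sums to one, so branch `a`'s features are augmented by a relevance-weighted mixture of branch `b`'s features.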
Collapse
Affiliation(s)
- Liang Sun
- The College of Computer Science and Technology, Dalian University of Technology, Dalian, Liaoning, China
| | - Haowen Tian
- The College of Computer Science and Technology, Dalian University of Technology, Dalian, Liaoning, China
| | - Hongwei Ge
- The College of Computer Science and Technology, Dalian University of Technology, Dalian, Liaoning, China
| | - Juan Tian
- Department of Radiology, The Second Affiliated Hospital of Dalian Medical University, Dalian, Liaoning, China
| | - Yuxin Lin
- Department of Radiology, The Second Affiliated Hospital of Dalian Medical University, Dalian, Liaoning, China
| | - Chang Liang
- Department of Radiology, The Second Affiliated Hospital of Dalian Medical University, Dalian, Liaoning, China
| | - Tang Liu
- Department of Radiology, The Second Affiliated Hospital of Dalian Medical University, Dalian, Liaoning, China
- *Correspondence: Tang Liu; Yiping Zhao
| | - Yiping Zhao
- Department of Radiology, The Second Affiliated Hospital of Dalian Medical University, Dalian, Liaoning, China
- *Correspondence: Tang Liu; Yiping Zhao
| |
Collapse
|
22
|
Li C, Zhang H, Chen J, Shao S, Li X, Yao M, Zheng Y, Wu R, Shi J. Deep learning radiomics of ultrasonography for differentiating sclerosing adenosis from breast cancer. Clin Hemorheol Microcirc 2022:CH221608. [PMID: 36373313 DOI: 10.3233/ch-221608] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
OBJECTIVES: The purpose of our study is to present a method combining radiomics with deep learning and clinical data for improved differential diagnosis of sclerosing adenosis (SA) and breast cancer (BC). METHODS: A total of 97 patients with SA and 100 patients with BC were included in this study. The best model for classification was selected from among four different convolutional neural network (CNN) models: VGG16, ResNet18, ResNet50, and DenseNet121. The intra-/inter-class correlation coefficient and the least absolute shrinkage and selection operator (LASSO) method were used for radiomics feature selection. The clinical features selected were patient age and nodule size. The overall accuracy, sensitivity, specificity, Youden index, positive predictive value, negative predictive value, and area under the curve (AUC) were calculated to compare diagnostic efficacy. RESULTS: All CNN models combined with radiomics and clinical data were significantly superior to the corresponding CNN-only models. The DenseNet121+radiomics+clinical data model showed the best classification performance, with an accuracy of 86.80%, sensitivity of 87.60%, specificity of 86.20% and AUC of 0.915, better than the CNN-only model, which had an accuracy of 85.23%, sensitivity of 85.48%, specificity of 85.02%, and AUC of 0.870. In comparison, the diagnostic accuracy, sensitivity, specificity, and AUC for breast radiologists were 72.08%, 100%, 43.30%, and 0.716, respectively. CONCLUSIONS: A combination of the CNN-radiomics model and clinical data could be a helpful auxiliary diagnostic tool for distinguishing between SA and BC.
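The diagnostic-efficacy measures reported above are all derived from the binary confusion matrix. A minimal sketch (the function name is ours, for illustration only):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard binary diagnostic measures from confusion-matrix counts."""
    sens = tp / (tp + fn)          # sensitivity (recall)
    spec = tn / (tn + fp)          # specificity
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": sens,
        "specificity": spec,
        "youden": sens + spec - 1,   # Youden index
        "ppv": tp / (tp + fp),       # positive predictive value
        "npv": tn / (tn + fn),       # negative predictive value
    }
```

For example, `diagnostic_metrics(90, 10, 80, 20)` yields an accuracy of 0.85 with a PPV of 0.90.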
Collapse
Affiliation(s)
- Chunxiao Li
- Department of Ultrasound, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Hongkou District, Shanghai, China
| | - Huili Zhang
- School of Communication and Information Engineering, Shanghai University, Baoshan District, Shanghai, China
| | - Jing Chen
- Department of Ultrasound, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Hongkou District, Shanghai, China
| | - Sihui Shao
- Department of Ultrasound, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Hongkou District, Shanghai, China
| | - Xin Li
- Department of Ultrasound, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Hongkou District, Shanghai, China
| | - Minghua Yao
- Department of Ultrasound, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Hongkou District, Shanghai, China
| | - Yi Zheng
- Department of Ultrasound, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Hongkou District, Shanghai, China
| | - Rong Wu
- Department of Ultrasound, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Hongkou District, Shanghai, China
| | - Jun Shi
- School of Communication and Information Engineering, Shanghai University, Baoshan District, Shanghai, China
| |
Collapse
|
23
|
Yin H, Bai L, Jia H, Lin G. Noninvasive assessment of breast cancer molecular subtypes on multiparametric MRI using convolutional neural network with transfer learning. Thorac Cancer 2022; 13:3183-3191. [PMID: 36203226 PMCID: PMC9663668 DOI: 10.1111/1759-7714.14673] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Revised: 09/12/2022] [Accepted: 09/13/2022] [Indexed: 01/07/2023] Open
Abstract
BACKGROUND To evaluate the performance of multiparametric MRI-based convolutional neural networks (CNNs) for the preoperative assessment of breast cancer molecular subtypes. METHODS A total of 136 patients with 136 pathologically confirmed invasive breast cancers were randomly divided into training, validation, and testing sets in this retrospective study. The CNN models were established based on contrast-enhanced T1-weighted imaging (T1C), apparent diffusion coefficient (ADC) maps, and T2-weighted imaging (T2W) using the training and validation sets. The performance of the CNN models was evaluated on the testing set. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy were calculated to assess performance. RESULTS For the separation of each subtype from the other subtypes on the testing set, the T1C-based models yielded AUCs from 0.762 to 0.920; the ADC-based models yielded AUCs from 0.686 to 0.851; and the T2W-based models achieved AUCs from 0.639 to 0.697. CONCLUSION T1C-based models performed better than ADC-based and T2W-based models in assessing breast cancer molecular subtypes. The discriminating performance of our CNN models for the triple-negative and human epidermal growth factor receptor 2-enriched subtypes was better than for the luminal A and luminal B subtypes.
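The AUC values being compared across sequences can be computed without explicitly integrating the ROC curve, via the rank-sum (Mann-Whitney U) formulation; a minimal sketch (illustrative, not the authors' code):

```python
import numpy as np

def auc_from_scores(scores, labels):
    """AUC as the fraction of (positive, negative) score pairs
    ranked correctly; ties count half (Mann-Whitney U / (n_pos * n_neg))."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]          # all pairwise score gaps
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size
```

A perfectly separating model scores 1.0; a model that assigns identical scores to every case scores 0.5.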
Collapse
Affiliation(s)
- Haolin Yin
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Shanghai, China
| | - Lutian Bai
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Shanghai, China
| | - Huihui Jia
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Shanghai, China
| | - Guangwu Lin
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Shanghai, China
| |
Collapse
|
24
|
Jena B, Saxena S, Nayak GK, Balestrieri A, Gupta N, Khanna NN, Laird JR, Kalra MK, Fouda MM, Saba L, Suri JS. Brain Tumor Characterization Using Radiogenomics in Artificial Intelligence Framework. Cancers (Basel) 2022; 14:4052. [PMID: 36011048 PMCID: PMC9406706 DOI: 10.3390/cancers14164052] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2022] [Revised: 08/17/2022] [Accepted: 08/19/2022] [Indexed: 11/16/2022] Open
Abstract
Brain tumor characterization (BTC) is the process of determining the underlying cause of brain tumors and their characteristics through approaches such as tumor segmentation, classification, detection, and risk analysis. Substantive brain tumor characterization includes identifying the molecular signatures of the genes whose alteration causes the brain tumor. The radiomics approach uses radiological images for disease characterization by extracting quantitative radiomics features in an artificial intelligence (AI) environment. However, when a higher level of disease characteristics such as genetic information and mutation status is considered, the combined study of radiomics and genomics falls under the umbrella of "radiogenomics". Furthermore, AI in a radiogenomics environment offers advantages such as personalized treatment and individualized medicine. This study summarizes brain tumor characterization in the emerging field of radiomics and radiogenomics in an AI environment, with the help of statistical observation and risk-of-bias (RoB) analysis. The PRISMA search approach was used to find 121 relevant studies for this review using IEEE, Google Scholar, PubMed, MDPI, and Scopus. Our findings indicate that both radiomics and radiogenomics have been applied successfully to several oncology applications, with numerous advantages. Under the AI paradigm, both conventional and deep radiomics features have contributed to the favorable outcomes of the radiogenomics approach to BTC. Finally, RoB analysis offers a better understanding of the architectures and of the benefits of AI by exposing the bias involved in them.
Collapse
Affiliation(s)
- Biswajit Jena
- Department of CSE, International Institute of Information Technology, Bhubaneswar 751003, India
| | - Sanjay Saxena
- Department of CSE, International Institute of Information Technology, Bhubaneswar 751003, India
| | - Gopal Krishna Nayak
- Department of CSE, International Institute of Information Technology, Bhubaneswar 751003, India
| | | | - Neha Gupta
- Department of IT, Bharati Vidyapeeth’s College of Engineering, New Delhi 110056, India
| | - Narinder N. Khanna
- Department of Cardiology, Indraprastha APOLLO Hospitals, New Delhi 110076, India
| | - John R. Laird
- Heart and Vascular Institute, Adventist Health St. Helena, St. Helena, CA 94574, USA
| | - Manudeep K. Kalra
- Department of Radiology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA 02114, USA
| | - Mostafa M. Fouda
- Department of Electrical and Computer Engineering, Idaho State University, Pocatello, ID 83209, USA
| | - Luca Saba
- Department of Radiology, AOU, University of Cagliari, 09124 Cagliari, Italy
| | - Jasjit S. Suri
- Stroke Diagnosis and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA
| |
Collapse
|
25
|
Yue W, Zhang H, Zhou J, Li G, Tang Z, Sun Z, Cai J, Tian N, Gao S, Dong J, Liu Y, Bai X, Sheng F. Deep learning-based automatic segmentation for size and volumetric measurement of breast cancer on magnetic resonance imaging. Front Oncol 2022; 12:984626. [PMID: 36033453 PMCID: PMC9404224 DOI: 10.3389/fonc.2022.984626] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2022] [Accepted: 07/19/2022] [Indexed: 11/30/2022] Open
Abstract
Purpose In clinical work, accurately measuring the volume and size of breast cancer is important for developing a treatment plan. However, manual measurement is time-consuming, and inter- and intra-observer variations among radiologists exist. The purpose of this study was to assess the performance of a Res-UNet convolutional neural network for automatic segmentation-based size and volumetric measurement of mass-enhancement breast cancer on magnetic resonance imaging (MRI). Materials and methods A total of 1,000 female breast cancer patients who underwent preoperative 1.5-T dynamic contrast-enhanced MRI prior to treatment were selected from January 2015 to October 2021 and randomly divided into a training cohort (n = 800) and a testing cohort (n = 200). Compared with the ground-truth masks delineated manually by radiologists, the model's segmentation performance was evaluated with the Dice similarity coefficient (DSC) and intraclass correlation coefficient (ICC). The performance of tumor (T) stage classification was evaluated with accuracy, sensitivity, and specificity. Results In the test cohort, the DSC of automatic segmentation reached 0.89. Excellent concordance (ICC > 0.95) for the maximal and minimal diameters and good concordance (ICC > 0.80) for volumetric measurement were shown between the model and the radiologists. The trained model took approximately 10-15 s to provide automatic segmentation and classified the T stage with an overall accuracy of 0.93, with sensitivities of 0.94, 0.94, and 0.75 and specificities of 0.95, 0.92, and 0.99 in T1, T2, and T3, respectively. Conclusions Our model demonstrated good performance and reliability in automatic segmentation for size and volumetric measurement of breast cancer, which can be time-saving and effective in clinical decision-making.
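The DSC used here to score segmentations against the radiologists' masks is straightforward to compute from binary arrays; a minimal sketch (illustrative, assuming binary masks of equal shape):

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|); eps guards against empty masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)
```

A DSC of 1.0 means perfect overlap; the 0.89 reported above indicates close but imperfect agreement with the manual masks.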
Collapse
Affiliation(s)
- Wenyi Yue
- Department of Radiology, The Fifth Medical Center of Chinese PLA General Hospital, Beijing, China
- Chinese PLA General Medical School, Beijing, China
| | - Hongtao Zhang
- Department of Radiology, The Fifth Medical Center of Chinese PLA General Hospital, Beijing, China
| | - Juan Zhou
- Department of Radiology, The Fifth Medical Center of Chinese PLA General Hospital, Beijing, China
| | - Guang Li
- Keya Medical Technology Co., Ltd., Beijing, China
| | - Zhe Tang
- Keya Medical Technology Co., Ltd., Beijing, China
| | - Zeyu Sun
- Keya Medical Technology Co., Ltd., Beijing, China
| | - Jianming Cai
- Department of Radiology, The Fifth Medical Center of Chinese PLA General Hospital, Beijing, China
| | - Ning Tian
- Department of Radiology, The Fifth Medical Center of Chinese PLA General Hospital, Beijing, China
| | - Shen Gao
- Department of Radiology, The Fifth Medical Center of Chinese PLA General Hospital, Beijing, China
| | - Jinghui Dong
- Department of Radiology, The Fifth Medical Center of Chinese PLA General Hospital, Beijing, China
| | - Yuan Liu
- Department of Radiology, The Fifth Medical Center of Chinese PLA General Hospital, Beijing, China
| | - Xu Bai
- Department of Radiology, The Fifth Medical Center of Chinese PLA General Hospital, Beijing, China
| | - Fugeng Sheng
- Department of Radiology, The Fifth Medical Center of Chinese PLA General Hospital, Beijing, China
- *Correspondence: Fugeng Sheng
| |
Collapse
|
26
|
Li C, Huang H, Chen Y, Shao S, Chen J, Wu R, Zhang Q. Preoperative Non-Invasive Prediction of Breast Cancer Molecular Subtypes With a Deep Convolutional Neural Network on Ultrasound Images. Front Oncol 2022; 12:848790. [PMID: 35924158 PMCID: PMC9339685 DOI: 10.3389/fonc.2022.848790] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Accepted: 06/22/2022] [Indexed: 11/27/2022] Open
Abstract
Purpose This study aimed to develop a deep convolutional neural network (DCNN) model to classify molecular subtypes of breast cancer from ultrasound (US) images together with clinical information. Methods A total of 1,012 breast cancer patients with 2,284 US images (center 1) were collected as the main cohort for training and internal testing. Another cohort of 117 breast cancer cases with 153 US images (center 2) was used as the external testing cohort. Patients were grouped according to thresholds of nodule size of 20 mm and age of 50 years. The DCNN models were constructed based on the US images and the clinical information to predict the molecular subtypes of breast cancer. A Breast Imaging-Reporting and Data System (BI-RADS) lexicon model was built on the same data, based on morphological and clinical description parameters, for diagnostic performance comparison. Diagnostic performance was assessed through accuracy, sensitivity, specificity, Youden's index (YI), and area under the receiver operating characteristic curve (AUC). Results Our DCNN model achieved better diagnostic performance than the BI-RADS lexicon model in differentiating molecular subtypes of breast cancer in both the main cohort and the external testing cohort (all p < 0.001). In the main cohort, when classifying luminal A from non-luminal A subtypes, our model obtained an AUC of 0.776 (95% CI, 0.649-0.885) for patients older than 50 years and 0.818 (95% CI, 0.726-0.902) for those with tumor sizes ≤20 mm. For young patients ≤50 years, the AUC of our model for detecting triple-negative breast cancer was 0.712 (95% CI, 0.538-0.874). In the external testing cohort, when classifying luminal A from non-luminal A subtypes for patients older than 50 years, our DCNN model achieved an AUC of 0.686 (95% CI, 0.567-0.806). Conclusions We employed a DCNN model to predict the molecular subtypes of breast cancer based on US images. Our model can be especially valuable for particular patient age groups and nodule sizes.
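One simple way to combine an image-derived embedding with the clinical thresholds this study uses (age 50 years, nodule size 20 mm) is to append binarized covariates to the embedding before a classifier head. This is a hypothetical sketch of such late fusion; the authors' actual fusion scheme is not described here and may differ:

```python
import numpy as np

def fuse_features(image_embedding, age_years, nodule_mm):
    """Append binarized clinical covariates (age > 50, nodule <= 20 mm)
    to an image embedding, yielding the input to a classifier head."""
    clinical = np.array([float(age_years > 50), float(nodule_mm <= 20)])
    return np.concatenate([np.asarray(image_embedding, float), clinical])
```

The classifier then sees both the learned image features and the same clinical stratification the cohort analysis uses.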
Collapse
Affiliation(s)
- Chunxiao Li
- Department of Ultrasound, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Haibo Huang
- The SMART (Smart Medicine and AI-Based Radiology Technology) Lab, School of Communication and Information Engineering, Shanghai University, Shanghai, China
| | - Ying Chen
- The SMART (Smart Medicine and AI-Based Radiology Technology) Lab, School of Communication and Information Engineering, Shanghai University, Shanghai, China
| | - Sihui Shao
- Department of Ultrasound, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Jing Chen
- Department of Ultrasound, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Rong Wu
- Department of Ultrasound, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- *Correspondence: Rong Wu; Qi Zhang
| | - Qi Zhang
- The SMART (Smart Medicine and AI-Based Radiology Technology) Lab, School of Communication and Information Engineering, Shanghai University, Shanghai, China
- *Correspondence: Rong Wu; Qi Zhang
| |
Collapse
|
27
|
Bhowmik A, Eskreis-Winkler S. Deep learning in breast imaging. BJR Open 2022; 4:20210060. [PMID: 36105427 PMCID: PMC9459862 DOI: 10.1259/bjro.20210060] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Revised: 04/04/2022] [Accepted: 04/21/2022] [Indexed: 11/22/2022] Open
Abstract
Millions of breast imaging exams are performed each year in an effort to reduce the morbidity and mortality of breast cancer. Breast imaging exams are performed for cancer screening, diagnostic work-up of suspicious findings, evaluating extent of disease in recently diagnosed breast cancer patients, and determining treatment response. Yet, the interpretation of breast imaging can be subjective, tedious, time-consuming, and prone to human error. Retrospective and small reader studies suggest that deep learning (DL) has great potential to perform medical imaging tasks at or above human-level performance, and may be used to automate aspects of the breast cancer screening process, improve cancer detection rates, decrease unnecessary callbacks and biopsies, optimize patient risk assessment, and open up new possibilities for disease prognostication. Prospective trials are urgently needed to validate these proposed tools, paving the way for real-world clinical use. New regulatory frameworks must also be developed to address the unique ethical, medicolegal, and quality control issues that DL algorithms present. In this article, we review the basics of DL, describe recent DL breast imaging applications including cancer detection and risk prediction, and discuss the challenges and future directions of artificial intelligence-based systems in the field of breast cancer.
Collapse
Affiliation(s)
- Arka Bhowmik
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
| | - Sarah Eskreis-Winkler
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
| |
Collapse
|
28
|
Sun S, Mutasa S, Liu MZ, Nemer J, Sun M, Siddique M, Desperito E, Jambawalikar S, Ha RS. Deep learning prediction of axillary lymph node status using ultrasound images. Comput Biol Med 2022; 143:105250. [PMID: 35114444 DOI: 10.1016/j.compbiomed.2022.105250] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Revised: 01/18/2022] [Accepted: 01/19/2022] [Indexed: 12/11/2022]
Abstract
OBJECTIVE To investigate the ability of our convolutional neural network (CNN) to predict axillary lymph node metastasis from primary breast cancer ultrasound (US) images. METHODS In this IRB-approved study, 338 US images (two orthogonal images each) from 169 patients treated from 1/2014 to 12/2016 were used. Suspicious lymph nodes were seen on US, and patients subsequently underwent core biopsy; 64 patients had metastatic lymph nodes. A custom CNN was trained on 248 US images from 124 patients and tested on 90 US images from 45 patients. The CNN was built entirely from 3 × 3 convolutional kernels and linear layers. The 9 convolutional blocks included 6 residual layers, totaling 12 convolutional layers. Feature maps were down-sampled using strided convolutions. Dropout with a 0.5 keep probability and L2 normalization were utilized. Training used the Adam optimizer, and a final softmax score threshold of 0.5, applied to the average of the raw logits from each pixel, was used for two-class classification (metastasis or not). RESULTS Our CNN achieved an AUC of 0.72 (SD ± 0.08) in predicting axillary lymph node metastasis from US images in the testing dataset. The model had an accuracy of 72.6% (SD ± 8.4), with a sensitivity and specificity of 65.5% (SD ± 28.6) and 78.9% (SD ± 15.1), respectively. Our algorithm is available for research use (https://github.com/stmutasa/MetUS). CONCLUSION It is feasible to predict axillary lymph node metastasis from US images using a deep learning technique. This can potentially aid nodal staging in patients with breast cancer.
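The decision rule described, averaging raw per-pixel logits and then applying a softmax threshold of 0.5, can be sketched in NumPy as follows (a stand-in for the network's output, not the repository's actual code; the function name is ours):

```python
import numpy as np

def predict_metastasis(logit_map, threshold=0.5):
    """logit_map: array of shape (2, H, W), raw per-pixel logits for the
    two classes. Average over pixels, softmax over classes, threshold
    the probability of class 1 (metastasis)."""
    avg = logit_map.mean(axis=(1, 2))            # (2,) mean logit per class
    e = np.exp(avg - avg.max())                  # numerically stable softmax
    probs = e / e.sum()
    return bool(probs[1] >= threshold), probs
```

Averaging logits before the softmax pools evidence across the whole image rather than letting a few extreme pixels dominate the class probability.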
Collapse
Affiliation(s)
- Shawn Sun
- Department of Radiology, Columbia University Medical Center, 622 West 168th Street, PB-1-301, New York, NY, 10032, USA
| | - Simukayi Mutasa
- Department of Radiology, Columbia University Medical Center, 622 West 168th Street, PB-1-301, New York, NY, 10032, USA
| | - Michael Z Liu
- Department of Radiology, Columbia University Medical Center, 622 West 168th Street, PB-1-301, New York, NY, 10032, USA
| | | | - Mary Sun
- Department of Radiology, Columbia University Medical Center, 622 West 168th Street, PB-1-301, New York, NY, 10032, USA
| | - Maham Siddique
- Department of Radiology, Columbia University Medical Center, 622 West 168th Street, PB-1-301, New York, NY, 10032, USA
| | - Elise Desperito
- Department of Radiology, Columbia University Medical Center, 622 West 168th Street, PB-1-301, New York, NY, 10032, USA
| | - Sachin Jambawalikar
- Department of Radiology, Columbia University Medical Center, 622 West 168th Street, PB-1-301, New York, NY, 10032, USA
| | - Richard S Ha
- Breast Imaging Section, Columbia University Medical Center, 622 West 168th Street, PB-1-301, New York, NY, 10032, USA.
| |
Collapse
|
29
|
Balkenende L, Teuwen J, Mann RM. Application of Deep Learning in Breast Cancer Imaging. Semin Nucl Med 2022; 52:584-596. [PMID: 35339259 DOI: 10.1053/j.semnuclmed.2022.02.003] [Citation(s) in RCA: 25] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Revised: 02/15/2022] [Accepted: 02/16/2022] [Indexed: 11/11/2022]
Abstract
This review gives an overview of the current state of deep learning research in breast cancer imaging. Breast imaging plays a major role in detecting breast cancer at an earlier stage, as well as in monitoring and evaluating breast cancer during treatment. The most commonly used modalities for breast imaging are digital mammography, digital breast tomosynthesis, ultrasound and magnetic resonance imaging. Nuclear medicine imaging techniques are used for detection and classification of axillary lymph nodes and for distant staging in breast cancer imaging. All of these techniques are currently digitized, enabling the implementation of deep learning (DL), a subset of artificial intelligence, in breast imaging. DL is nowadays embedded in a plethora of different tasks, such as lesion classification and segmentation, image reconstruction and generation, cancer risk prediction, and prediction and assessment of therapy response. Studies show similar and even better performance of DL algorithms compared with radiologists, although it is clear that large trials are needed, especially for ultrasound and magnetic resonance imaging, to determine exactly the added value of DL in breast cancer imaging. Studies on DL in nuclear medicine techniques are only sparsely available, and further research is mandatory. Legal and ethical issues need to be considered before the role of DL can expand to its full potential in clinical breast care practice.
Collapse
Affiliation(s)
- Luuk Balkenende
- Department of Radiology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands; Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Jonas Teuwen
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands; Department of Radiation Oncology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands
| | - Ritse M Mann
- Department of Radiology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands; Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands.
| |
Collapse
|
30
|
Nassif AB, Talib MA, Nasir Q, Afadar Y, Elgendy O. Breast cancer detection using artificial intelligence techniques: A systematic literature review. Artif Intell Med 2022; 127:102276. [DOI: 10.1016/j.artmed.2022.102276] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Revised: 10/18/2021] [Accepted: 03/04/2022] [Indexed: 02/07/2023]
|
31
|
Ueda D, Yamamoto A, Takashima T, Onoda N, Noda S, Kashiwagi S, Morisaki T, Honjo T, Shimazaki A, Miki Y. Training, Validation, and Test of Deep Learning Models for Classification of Receptor Expressions in Breast Cancers From Mammograms. JCO Precis Oncol 2022; 5:543-551. [PMID: 34994603 DOI: 10.1200/po.20.00176] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022] Open
Abstract
PURPOSE The molecular subtype of breast cancer is an important component of establishing the appropriate treatment strategy. In clinical practice, molecular subtypes are determined by receptor expressions. In this study, we developed a deep learning (DL)-based model to determine receptor expressions from mammograms. METHODS A development data set and a test data set were generated from mammograms of the affected side of patients who were pathologically diagnosed with breast cancer from January 2006 through December 2016 and from January 2017 through December 2017, respectively. The development data set was used to train and validate the DL-based model with five-fold cross-validation for classifying expression of the estrogen receptor (ER), progesterone receptor (PgR), and human epidermal growth factor receptor 2-neu (HER2). The area under the curve (AUC) for each receptor was evaluated with the independent test data set. RESULTS The development data set and the test data set included 1,448 images (997 ER-positive and 386 ER-negative, 641 PgR-positive and 695 PgR-negative, and 220 HER2-enriched and 1,109 non-HER2-enriched) and 225 images (176 ER-positive and 40 ER-negative, 101 PgR-positive and 117 PgR-negative, and 53 HER2-enriched and 165 non-HER2-enriched), respectively. The AUC for ER-positive versus ER-negative in the test data set was 0.67 (0.58-0.76), the AUC for PgR-positive versus PgR-negative was 0.61 (0.53-0.68), and the AUC for HER2-enriched versus non-HER2-enriched was 0.75 (0.68-0.82). CONCLUSION The DL-based model effectively classified receptor expressions from mammograms. Applying the DL-based model to predict breast cancer classification with a noninvasive approach would provide added value for patients.
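Five-fold cross-validation, as used for model development here, partitions the development set so that each sample is validated exactly once; a minimal index-splitting sketch (illustrative, standard library only):

```python
import random

def five_fold_indices(n, k=5, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds.
    Each fold serves once as the validation set while the remaining
    folds form the training set."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]
```

Iterating over the returned folds, training on the other four each time, yields five models whose validation scores can be averaged.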
Collapse
Affiliation(s)
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Osaka City University Graduate School of Medicine, Osaka, Japan
| | - Akira Yamamoto
- Department of Diagnostic and Interventional Radiology, Osaka City University Graduate School of Medicine, Osaka, Japan
| | - Tsutomu Takashima
- Department of Breast and Endocrine Surgery, Osaka City University Graduate School of Medicine, Osaka, Japan
| | - Naoyoshi Onoda
- Department of Breast and Endocrine Surgery, Osaka City University Graduate School of Medicine, Osaka, Japan
| | - Satoru Noda
- Department of Breast and Endocrine Surgery, Osaka City University Graduate School of Medicine, Osaka, Japan
| | - Shinichiro Kashiwagi
- Department of Breast and Endocrine Surgery, Osaka City University Graduate School of Medicine, Osaka, Japan
| | - Tamami Morisaki
- Department of Breast and Endocrine Surgery, Osaka City University Graduate School of Medicine, Osaka, Japan
| | - Takashi Honjo
- Department of Diagnostic and Interventional Radiology, Osaka City University Graduate School of Medicine, Osaka, Japan
| | - Akitoshi Shimazaki
- Department of Diagnostic and Interventional Radiology, Osaka City University Graduate School of Medicine, Osaka, Japan
| | - Yukio Miki
- Department of Diagnostic and Interventional Radiology, Osaka City University Graduate School of Medicine, Osaka, Japan
| |
Collapse
|
32
|
Liu Q, Hu P. Extendable and explainable deep learning for pan-cancer radiogenomics research. Curr Opin Chem Biol 2022; 66:102111. [PMID: 34999476 DOI: 10.1016/j.cbpa.2021.102111] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2021] [Revised: 12/06/2021] [Accepted: 12/13/2021] [Indexed: 12/12/2022]
Abstract
Radiogenomics is a field where medical images and genomic profiles are jointly analyzed to answer critical clinical questions. Specifically, the goal is to identify non-invasive imaging biomarkers that are associated with both genomic features and clinical outcomes. Deep learning is an advanced computer science technique that has been applied in many fields, including medical image and genomic data analysis. This review summarizes the current state of deep learning in pan-cancer radiogenomic research, discusses its limitations, and indicates potential future directions. Traditional machine learning in radiomics, genomics, and radiogenomics is also briefly discussed. We also summarize the main pan-cancer radiogenomic research resources. Two characteristics of deep learning are emphasized when discussing its application to pan-cancer radiogenomics: extendibility and explainability.
Collapse
Affiliation(s)
- Qian Liu
- Department of Biochemistry and Medical Genetics, University of Manitoba, Winnipeg, Manitoba, R3E 0W3, Canada; Department of Computer Science, University of Manitoba, Winnipeg, Manitoba, R3E 0W3, Canada; Department of Statistics, University of Manitoba, Winnipeg, Manitoba, R3E 0W3, Canada.
| | - Pingzhao Hu
- Department of Biochemistry and Medical Genetics, University of Manitoba, Winnipeg, Manitoba, R3E 0W3, Canada; Department of Computer Science, University of Manitoba, Winnipeg, Manitoba, R3E 0W3, Canada.
| |
Collapse
|
33
|
Monib S. Artificial Intelligence in Breast Disease Management: No Innovation Without Evaluation. Indian J Surg 2021. [DOI: 10.1007/s12262-020-02682-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022] Open
|
34
|
Prediction of HER2 expression in breast cancer by combining PET/CT radiomic analysis and machine learning. Ann Nucl Med 2021; 36:172-182. [PMID: 34716873 DOI: 10.1007/s12149-021-01688-3] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Accepted: 10/20/2021] [Indexed: 12/15/2022]
Abstract
BACKGROUND Human epidermal growth factor receptor 2 (HER2) expression status determination significantly contributes to HER2-targeted therapy in breast cancer (BC). The purpose of this study was to evaluate the role of radiomics and machine learning based on PET/CT images in HER2 status prediction, and to identify the most effective combination of machine learning model and radiomic features. METHODS A total of 217 BC patients who underwent PET/CT examination were involved in the study and randomly divided into a training set (n = 151) and a testing set (n = 66). For all four models, the model parameters were determined using a threefold cross-validation in the training set. Each model's performance was evaluated on the independent testing set using the receiver operating characteristic (ROC) curve, and AUC was calculated to get a quantified performance measurement of each model. RESULTS Among the four developed machine learning models, the XGBoost model outperformed other machine learning models in HER2 status prediction. Furthermore, compared to the XGBoost model based on PET alone or CT alone radiomic features, the predictive power for HER2 status by using XGBoost model based on PET/CTmean or PET/CTconcat radiomic fusion features was dramatically improved with an AUC of 0.76 (95% confidence interval [CI] 0.69-0.83) and 0.72 (0.65-0.80), respectively. CONCLUSIONS The established machine learning classifier based on PET/CT radiomic features is potentially predictive of HER2 status in BC.
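The PET/CT_mean and PET/CT_concat fusion features described above can be sketched as follows. This is a hedged illustration on synthetic data, with scikit-learn's `GradientBoostingClassifier` standing in for the XGBoost model used in the study:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 60
pet = rng.normal(size=(n, 8))                 # toy PET radiomic features
ct = rng.normal(size=(n, 8))                  # toy CT radiomic features
y = (pet[:, 0] + ct[:, 0] > 0).astype(int)    # synthetic HER2 status label

fusion_mean = (pet + ct) / 2.0                # PET/CT_mean: element-wise average
fusion_concat = np.hstack([pet, ct])          # PET/CT_concat: feature concatenation

clf = GradientBoostingClassifier(random_state=0)
auc_mean = cross_val_score(clf, fusion_mean, y, cv=3, scoring="roc_auc").mean()
auc_concat = cross_val_score(clf, fusion_concat, y, cv=3, scoring="roc_auc").mean()
```

Mean fusion keeps the feature dimensionality fixed, while concatenation doubles it but lets the classifier weight each modality's copy of a feature independently.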
Collapse
|
35
|
Davey MG, Davey MS, Boland MR, Ryan ÉJ, Lowery AJ, Kerin MJ. Radiomic differentiation of breast cancer molecular subtypes using pre-operative breast imaging - A systematic review and meta-analysis. Eur J Radiol 2021; 144:109996. [PMID: 34624649 DOI: 10.1016/j.ejrad.2021.109996] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2021] [Revised: 09/17/2021] [Accepted: 09/30/2021] [Indexed: 01/12/2023]
Abstract
INTRODUCTION Breast cancer has four distinct molecular subtypes which are discriminated using gene expression profiling following biopsy. Radiogenomics is an emerging field which utilises diagnostic imaging to reveal genomic properties of disease. We aimed to perform a systematic review of the current literature to evaluate the value of radiomics in differentiating breast cancers into their molecular subtypes using diagnostic imaging. METHODS A systematic review was performed as per PRISMA guidelines. Studies assessing radiomic tumour analysis in differentiating breast cancer molecular subtypes were included. Quality was assessed using the radiomics quality score (RQS). Diagnostic sensitivity and specificity of radiomic analyses were included for meta-analysis; study-specific sensitivity and specificity were retrieved, and summary ROC analyses were performed to compile pooled sensitivities and specificities. RESULTS Forty-one studies were included. Overall, there were 10,090 female patients (mean age 47.6 ± 11.7 years, range: 21-93), and molecular subtype was reported in 7,693 cases, with Luminal A (LABC), Luminal B (LBBC), Human Epidermal Growth Factor Receptor-2 overexpressing (HER2+), and Triple Negative (TNBC) breast cancers representing 51.3%, 19.9%, 12.3% and 16.3% of tumours, respectively. Seven studies provided radiomic analysis to determine molecular subtypes using mammography to differentiate TNBC vs. others (sensitivity: 0.82, specificity: 0.79). Thirty-five studies reported on radiomic analysis of magnetic resonance imaging (MRI): LABC versus others (sensitivity: 0.78, specificity: 0.83), HER2+ versus others (sensitivity: 0.87, specificity: 0.88), and LBBC versus TNBC (sensitivity: 0.79, specificity: 0.88), respectively. CONCLUSION Radiomic tumour assessment of contemporary breast imaging provides a novel option for determining breast cancer molecular subtypes. However, amelioration of such techniques is required, and genetic expression assessment will remain the gold standard.
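The review pools study-level sensitivity and specificity via summary ROC analysis; as a much simpler illustration of where pooled diagnostic metrics come from, here is a naive fixed-effect aggregation of hypothetical per-study 2x2 counts (the real meta-analytic models account for between-study variation, which this does not):

```python
# per-study 2x2 counts (TP, FN, TN, FP) — hypothetical numbers
studies = [
    (80, 20, 70, 30),
    (45, 10, 55, 15),
    (60, 15, 65, 20),
]
tp = sum(s[0] for s in studies)
fn = sum(s[1] for s in studies)
tn = sum(s[2] for s in studies)
fp = sum(s[3] for s in studies)
pooled_sensitivity = tp / (tp + fn)   # pooled true-positive rate
pooled_specificity = tn / (tn + fp)   # pooled true-negative rate
```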
Collapse
Affiliation(s)
- Matthew G Davey
- The Lambe Institute for Translational Research, National University of Ireland, Galway H91 YR91, Ireland.
| | - Martin S Davey
- The Lambe Institute for Translational Research, National University of Ireland, Galway H91 YR91, Ireland
| | - Michael R Boland
- The Lambe Institute for Translational Research, National University of Ireland, Galway H91 YR91, Ireland
| | - Éanna J Ryan
- The Lambe Institute for Translational Research, National University of Ireland, Galway H91 YR91, Ireland
| | - Aoife J Lowery
- The Lambe Institute for Translational Research, National University of Ireland, Galway H91 YR91, Ireland
| | - Michael J Kerin
- The Lambe Institute for Translational Research, National University of Ireland, Galway H91 YR91, Ireland
| |
Collapse
|
36
|
Huang Y, Wei L, Hu Y, Shao N, Lin Y, He S, Shi H, Zhang X, Lin Y. Multi-Parametric MRI-Based Radiomics Models for Predicting Molecular Subtype and Androgen Receptor Expression in Breast Cancer. Front Oncol 2021; 11:706733. [PMID: 34490107 PMCID: PMC8416497 DOI: 10.3389/fonc.2021.706733] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2021] [Accepted: 07/28/2021] [Indexed: 12/30/2022] Open
Abstract
Objective To investigate whether radiomics features extracted from multi-parametric MRI, combined with a machine learning approach, can predict the molecular subtype and androgen receptor (AR) expression of breast cancer in a non-invasive way. Materials and Methods Patients diagnosed with clinical T2–4 stage breast cancer from March 2016 to July 2020 were retrospectively enrolled. The molecular subtypes and AR expression in pre-treatment biopsy specimens were assessed. A total of 4,198 radiomics features were extracted from the pre-biopsy multi-parametric MRI (including dynamic contrast-enhanced T1-weighted images, fat-suppressed T2-weighted images, and the apparent diffusion coefficient map) of each patient. We applied several feature selection strategies, including the least absolute shrinkage and selection operator (LASSO), recursive feature elimination (RFE), the maximum relevance minimum redundancy (mRMR) criterion, Boruta, and Pearson correlation analysis, to select the most optimal features. We then built 120 diagnostic models using distinct classification algorithms and feature sets divided by MRI sequences and selection strategies to predict molecular subtype and AR expression of breast cancer in the testing dataset of leave-one-out cross-validation (LOOCV). The performances of binary classification models were assessed via the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). The performances of multiclass classification models were assessed via AUC, overall accuracy, precision, recall rate, and F1-score. Results A total of 162 patients (mean age, 46.91 ± 10.08 years) were enrolled in this study; 30 had low AR expression and 132 had high AR expression. HR+/HER2− cancers were diagnosed in 56 cases (34.6%), HER2+ cancers in 81 cases (50.0%), and TNBC in 25 patients (15.4%). There was no significant difference in clinicopathologic characteristics between the low-AR and high-AR groups (P > 0.05), except for menopausal status, ER, PR, HER2, and the Ki-67 index (P = 0.043, <0.001, <0.001, 0.015, and 0.006, respectively). No significant difference in clinicopathologic characteristics was observed among the three molecular subtypes except for AR status and Ki-67 (P < 0.001 and P = 0.012, respectively). The Multilayer Perceptron (MLP) showed the best performance in discriminating AR expression, with an AUC of 0.907 and an accuracy of 85.8% in the testing dataset. The highest performances were likewise obtained with the MLP for discriminating TNBC vs. non-TNBC (AUC: 0.965, accuracy: 92.6%), HER2+ vs. HER2− (AUC: 0.840, accuracy: 79.0%), and HR+/HER2− vs. others (AUC: 0.860, accuracy: 82.1%). The micro-AUC of the MLP multiclass classification model was 0.896, and the overall accuracy was 0.735. Conclusions Multi-parametric MRI-based radiomics combined with machine learning approaches provides a promising method to predict the molecular subtype and AR expression of breast cancer non-invasively.
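A minimal sketch of the selection-plus-LOOCV pipeline described above, assuming synthetic data and using scikit-learn's LASSO-based `SelectFromModel` and a small `MLPClassifier` as stand-ins for the study's much larger workflow:

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n_samples, n_features = 40, 20
X = rng.normal(size=(n_samples, n_features))   # stand-in radiomics matrix
y = (X[:, 0] - X[:, 1] > 0).astype(int)        # synthetic "AR-high vs AR-low" label

# LASSO-based selection: keep features whose coefficients survive the L1 penalty
selector = SelectFromModel(Lasso(alpha=0.01), threshold=1e-6).fit(X, y)
X_sel = selector.transform(X)

# leave-one-out CV: every sample is predicted by a model trained on the other 39
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
pred = cross_val_predict(clf, X_sel, y, cv=LeaveOneOut())
accuracy = float((pred == y).mean())
```

LOOCV, as used in the study, is attractive when the cohort is small (162 patients here), since every sample contributes to both training and testing.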
Collapse
Affiliation(s)
- Yuhong Huang
- Breast Disease Center, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Lihong Wei
- Department of Pathology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Yalan Hu
- Breast Disease Center, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Nan Shao
- Breast Disease Center, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Yingyu Lin
- Department of Radiology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Shaofu He
- Department of Radiology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Huijuan Shi
- Department of Pathology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Xiaoling Zhang
- Department of Radiology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Ying Lin
- Breast Disease Center, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| |
Collapse
|
37
|
Sun R, Meng Z, Hou X, Chen Y, Yang Y, Huang G, Nie S. Prediction of breast cancer molecular subtypes using DCE-MRI based on CNNs combined with ensemble learning. Phys Med Biol 2021; 66. [PMID: 34330117 DOI: 10.1088/1361-6560/ac195a] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2021] [Accepted: 07/30/2021] [Indexed: 12/15/2022]
Abstract
To design an ensemble-learning-based prediction model using different breast DCE-MR post-contrast sequence images to distinguish two breast cancer subtypes (luminal and non-luminal), we retrospectively studied preoperative dynamic contrast-enhanced magnetic resonance imaging and molecular information from 266 breast cancer cases of either luminal subtype (luminal A and luminal B) or non-luminal subtype (human epidermal growth factor receptor 2 and triple negative). Multiple bounding boxes covering tumor lesions, determined by radiologists, were acquired from three series of post-contrast DCE-MR sequence images. Three baseline convolutional neural networks (CNNs) with the same architecture were trained concurrently, each producing preliminary probability predictions on the testing database. Finally, breast subtypes were classified and evaluated by fusing the predictions of the three CNNs through ensemble learning based on weighted voting. Using five-fold cross-validation (CV), the average prediction specificity, accuracy, precision, and area under the ROC curve on the testing dataset for luminal versus non-luminal were 0.958, 0.852, 0.961, and 0.867, respectively, which empirically demonstrates that the proposed ensemble model is highly reliable and robust. DCE-MR post-contrast sequence image analysis using the ensemble CNN model could provide a valuable and extensible practical approach to breast molecular subtype identification.
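The weighted-voting fusion step can be sketched in a few lines. The probabilities and weights below are toy values (the paper does not publish its weights); the point is only the mechanics of soft voting across the three per-series CNNs:

```python
import numpy as np

# predicted "non-luminal" probabilities from three CNNs (toy values),
# one CNN per DCE-MR post-contrast series
p1 = np.array([0.9, 0.2, 0.6, 0.4])
p2 = np.array([0.8, 0.3, 0.7, 0.5])
p3 = np.array([0.7, 0.1, 0.4, 0.6])

# weights might be set in proportion to each CNN's validation performance
weights = np.array([0.5, 0.3, 0.2])

fused = weights[0] * p1 + weights[1] * p2 + weights[2] * p3
labels = (fused >= 0.5).astype(int)     # 1 = non-luminal, 0 = luminal
```

Soft voting of this kind lets a confident minority model outvote two uncertain ones, which is why it often beats hard (majority) voting when the base models are calibrated.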
Collapse
Affiliation(s)
- Rong Sun
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, People's Republic of China
| | - Zijun Meng
- School of Information Engineering, China Jiliang University, Hangzhou, People's Republic of China
| | - Xuewen Hou
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, People's Republic of China
| | - Yang Chen
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, People's Republic of China
| | - Yifeng Yang
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, People's Republic of China
| | - Gang Huang
- Shanghai University of Medicine and Health Sciences, Shanghai, People's Republic of China
| | - Shengdong Nie
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, People's Republic of China
| |
Collapse
|
38
|
Insights Into Systemic Sclerosis from Gene Expression Profiling. CURRENT TREATMENT OPTIONS IN RHEUMATOLOGY 2021. [DOI: 10.1007/s40674-021-00183-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
39
|
Galili B, Samohi S, Yakhini Z. On the stability of log-rank test under labeling errors. Bioinformatics 2021; 37:4451-4459. [PMID: 34255820 PMCID: PMC8652036 DOI: 10.1093/bioinformatics/btab495] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2021] [Revised: 06/25/2021] [Accepted: 07/02/2021] [Indexed: 11/13/2022] Open
Abstract
Motivation: The log-rank test is widely used to assess the statistical significance of observed differences in survival when comparing two or more groups. The test rests on several assumptions that support the validity of the calculations. In particular, it is implicitly assumed that no errors occur in the labeling of the samples, that is, that the mapping between samples and groups is perfectly correct. In this work, we investigate how test results may be affected when some errors in the original labeling are considered. Results: We introduce and define the uncertainty that arises from labeling errors in the log-rank test. To deal with this uncertainty, we develop a novel algorithm for efficiently calculating a stability interval around the original log-rank P-value and prove its correctness. We demonstrate our algorithm on several datasets. Availability and implementation: We provide a Python implementation, called LoRSI, for calculating the stability interval using our algorithm: https://github.com/YakhiniGroup/LoRSI. Supplementary information: Supplementary data are available at Bioinformatics online.
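The log-rank statistic, and the single-label-flip perturbation that a stability interval explores exhaustively, can be sketched as follows. This is an independent toy re-implementation, not the LoRSI code:

```python
import numpy as np
from scipy.stats import chi2

def logrank_p(time, event, group):
    """Two-group log-rank test; returns the chi-square P-value."""
    time, event, group = map(np.asarray, (time, event, group))
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):          # distinct event times
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()        # at risk in group 1
        d = ((time == t) & (event == 1)).sum()     # events at t, both groups
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        o_minus_e += d1 - d * n1 / n               # observed minus expected
        if n > 1:                                  # hypergeometric variance term
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return float(chi2.sf(o_minus_e ** 2 / var, df=1))

time = [2, 3, 4, 5, 6, 7, 8, 10, 12, 14]
event = [1] * 10                                   # all subjects had the event
group = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]             # group 0 fails early
p_original = logrank_p(time, event, group)

# flip a single label, as the stability interval does for every candidate error
group_flipped = list(group)
group_flipped[0] = 1
p_flipped = logrank_p(time, event, group_flipped)
```

A single mislabeled sample can already move the P-value noticeably, which is exactly the sensitivity the stability interval is designed to bound.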
Collapse
Affiliation(s)
- Ben Galili
- Computer Science Department, Technion-Israel Institute of Technology, Haifa, Israel
| | - Samohi Samohi
- Arazi School of Computer Science, Interdisciplinary Center, Herzliya, Israel
| | - Zohar Yakhini
- Computer Science Department, Technion-Israel Institute of Technology, Haifa, Israel.,Arazi School of Computer Science, Interdisciplinary Center, Herzliya, Israel
| |
Collapse
|
40
|
Franceschini G, Mason EJ, Orlandi A, D'Archi S, Sanchez AM, Masetti R. How will artificial intelligence impact breast cancer research efficiency? Expert Rev Anticancer Ther 2021; 21:1067-1070. [PMID: 34214007 DOI: 10.1080/14737140.2021.1951240] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Affiliation(s)
- Gianluca Franceschini
- Multidisciplinary Breast Center, Dipartimento Scienze della Salute della Donna e del Bambino e di Sanità Pubblica, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
| | - Elena Jane Mason
- Multidisciplinary Breast Center, Dipartimento Scienze della Salute della Donna e del Bambino e di Sanità Pubblica, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
| | - Armando Orlandi
- Division of Medical Oncology, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
| | - Sabatino D'Archi
- Multidisciplinary Breast Center, Dipartimento Scienze della Salute della Donna e del Bambino e di Sanità Pubblica, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
| | - Alejandro Martin Sanchez
- Multidisciplinary Breast Center, Dipartimento Scienze della Salute della Donna e del Bambino e di Sanità Pubblica, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
| | - Riccardo Masetti
- Multidisciplinary Breast Center, Dipartimento Scienze della Salute della Donna e del Bambino e di Sanità Pubblica, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, Rome, Italy
| |
Collapse
|
41
|
Understanding artificial intelligence based radiology studies: CNN architecture. Clin Imaging 2021; 80:72-76. [PMID: 34256218 DOI: 10.1016/j.clinimag.2021.06.033] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2020] [Revised: 05/19/2021] [Accepted: 06/21/2021] [Indexed: 12/22/2022]
Abstract
Artificial intelligence (AI) in radiology has gained wide interest due to the development of neural network architectures with high performance in computer vision related tasks. As AI based software programs become more integrated into the clinical workflow, radiologists can benefit from better understanding the principles of artificial intelligence. This series aims to explain basic concepts of AI and its applications in medical imaging. In this article, we will review the background of neural network architecture and its application in imaging analysis.
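The core operation of the CNN layers this series reviews can be illustrated without any deep-learning framework: a "valid" 2D cross-correlation of an image with a learned kernel, followed by a ReLU activation. The image and kernel values below are toy numbers:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D cross-correlation: the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            # sum of the element-wise product over the receptive field
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.array([[1, 2, 0, 1],
                  [0, 1, 3, 1],
                  [2, 1, 0, 0],
                  [1, 0, 1, 2]], dtype=float)
edge_kernel = np.array([[1, -1],
                        [1, -1]], dtype=float)     # crude vertical-edge detector
feature_map = np.maximum(conv2d(image, edge_kernel), 0)  # ReLU activation
```

Stacking many such kernels, interleaved with pooling, is what lets a CNN build up from edges to the higher-level imaging features that radiology applications rely on.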
Collapse
|
42
|
Hoshino I, Yokota H. Radiogenomics of gastroenterological cancer: The dawn of personalized medicine with artificial intelligence-based image analysis. Ann Gastroenterol Surg 2021; 5:427-435. [PMID: 34337291 PMCID: PMC8316732 DOI: 10.1002/ags3.12437] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/09/2020] [Revised: 11/29/2020] [Accepted: 01/08/2021] [Indexed: 12/14/2022] Open
Abstract
Radiogenomics is a new field of medical science that integrates two omics, radiomics and genomics, and may bring a major paradigm shift to traditional personalized-medicine strategies that require tumor tissue samples. In addition, data acquisition does not require special imaging equipment or special imaging conditions; image information from computed tomography, magnetic resonance imaging, and positron emission tomography-computed tomography obtained in clinical practice can be used, so radiogenomics is expected to be versatile and cost-effective. So far, radiogenomics has developed especially in brain tumors and breast cancer, but reports of radiogenomic research on gastroenterological cancer have recently been increasing. This review provides an overview of radiogenomic research methods and summarizes current radiogenomic research in gastroenterological cancer. In addition, the application of artificial intelligence is considered indispensable for the integrated analysis of the enormous omics information involved, and the future direction of this research, including the latest technologies, will be discussed.
Collapse
Affiliation(s)
- Isamu Hoshino
- Division of Gastroenterological Surgery, Chiba Cancer Center, Chiba, Japan
| | - Hajime Yokota
- Department of Diagnostic Radiology and Radiation Oncology, Graduate School of Medicine, Chiba University, Chiba, Japan
| |
Collapse
|
43
|
Smedley NF, Aberle DR, Hsu W. Using deep neural networks and interpretability methods to identify gene expression patterns that predict radiomic features and histology in non-small cell lung cancer. J Med Imaging (Bellingham) 2021; 8:031906. [PMID: 33977113 PMCID: PMC8105647 DOI: 10.1117/1.jmi.8.3.031906] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2020] [Accepted: 04/13/2021] [Indexed: 01/06/2023] Open
Abstract
Purpose: Integrative analysis combining diagnostic imaging and genomic information can uncover biological insights into lesions that are visible on radiologic images. We investigate techniques for interrogating a deep neural network trained to predict quantitative image (radiomic) features and histology from gene expression in non-small cell lung cancer (NSCLC). Approach: Using 262 training and 89 testing cases from two public datasets, deep feedforward neural networks were trained to predict the values of 101 computed tomography (CT) radiomic features and histology. A model interrogation method called gene masking was used to derive the learned associations between subsets of genes and a radiomic feature or histology class [adenocarcinoma (ADC), squamous cell, and other]. Results: Overall, neural networks outperformed other classifiers. In testing, neural networks classified histology with area under the receiver operating characteristic curves (AUCs) of 0.86 (ADC), 0.91 (squamous cell), and 0.71 (other). Classification performance of radiomics features ranged from 0.42 to 0.89 AUC. Gene masking analysis revealed new and previously reported associations. For example, hypoxia genes predicted histology (>0.90 AUC). Previously published gene signatures for classifying histology were also predictive in our model (>0.80 AUC). Gene sets related to the immune or cardiac systems and cell development processes were predictive (>0.70 AUC) of several different radiomic features. AKT signaling, tumor necrosis factor, and Rho gene sets were each predictive of tumor textures. Conclusions: This work demonstrates neural networks' ability to map gene expressions to radiomic features and histology types in NSCLC and to interpret the models to identify predictive genes associated with each feature or type.
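The gene-masking interrogation method can be sketched on toy data: zero out a subset of input genes, re-score the model, and read the AUC drop as that subset's importance. Here a simple linear scorer stands in for the trained network, and the "expression" matrix is synthetic:

```python
import numpy as np

def auc_score(y_true, y_score):
    """Rank-based AUC (probability a positive outscores a negative; ties half)."""
    pos, neg = y_score[y_true == 1], y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(3)
n_cases, n_genes = 200, 10
X = rng.normal(size=(n_cases, n_genes))        # toy "gene expression" matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # histology driven by genes 0 and 1

def model_score(X):
    return X[:, 0] + X[:, 1]                   # stand-in for the trained network

def masked_auc(mask_idx):
    Xm = X.copy()
    Xm[:, mask_idx] = 0.0                      # gene masking: zero out a subset
    return auc_score(y, model_score(Xm))

auc_full = auc_score(y, model_score(X))        # perfect: scorer defines the label
auc_mask_informative = masked_auc([0, 1])      # collapses to chance (all ties)
auc_mask_irrelevant = masked_auc([5, 6])       # unaffected
```

Masking the genes the model actually uses collapses performance toward 0.5, while masking irrelevant genes leaves it untouched; this contrast is what reveals the learned gene-to-feature associations.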
Collapse
Affiliation(s)
- Nova F Smedley
- University of California, Los Angeles, Department of Radiological Sciences, Los Angeles, California, United States.,University of California, Los Angeles, Department of Bioengineering, Los Angeles, California, United States
| | - Denise R Aberle
- University of California, Los Angeles, Department of Radiological Sciences, Los Angeles, California, United States.,University of California, Los Angeles, Department of Bioengineering, Los Angeles, California, United States
| | - William Hsu
- University of California, Los Angeles, Department of Radiological Sciences, Los Angeles, California, United States.,University of California, Los Angeles, Department of Bioengineering, Los Angeles, California, United States.,University of California, Los Angeles, Bioinformatics Interdepartmental Program, Los Angeles, California, United States
| |
Collapse
|
44
|
Eskreis-Winkler S, Onishi N, Pinker K, Reiner JS, Kaplan J, Morris EA, Sutton EJ. Using Deep Learning to Improve Nonsystematic Viewing of Breast Cancer on MRI. JOURNAL OF BREAST IMAGING 2021; 3:201-207. [PMID: 38424820 DOI: 10.1093/jbi/wbaa102] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2020] [Indexed: 03/02/2024]
Abstract
OBJECTIVE To investigate the feasibility of using deep learning to identify tumor-containing axial slices on breast MRI images. METHODS This IRB-approved retrospective study included consecutive patients with operable invasive breast cancer undergoing pretreatment breast MRI between January 1, 2014, and December 31, 2017. Axial tumor-containing slices from the first postcontrast phase were extracted. Each axial image was subdivided into two subimages: one of the ipsilateral cancer-containing breast and one of the contralateral healthy breast. Cases were randomly divided into training, validation, and testing sets. A convolutional neural network was trained to classify subimages into "cancer" and "no cancer" categories. Accuracy, sensitivity, and specificity of the classification system were determined using pathology as the reference standard. A two-reader study was performed to measure the time savings of the deep learning algorithm using descriptive statistics. RESULTS Two hundred and seventy-three patients with unilateral breast cancer met study criteria. On the held-out test set, accuracy of the deep learning system for tumor detection was 92.8% (648/706; 95% confidence interval: 89.7%-93.8%). Sensitivity and specificity were 89.5% and 94.3%, respectively. Readers spent 3 to 45 seconds to scroll to the tumor-containing slices without use of the deep learning algorithm. CONCLUSION In breast MR exams containing breast cancer, deep learning can be used to identify the tumor-containing slices. This technology may be integrated into the picture archiving and communication system to bypass scrolling when viewing stacked images, which can be helpful during nonsystematic image viewing, such as during interdisciplinary tumor board meetings.
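The accuracy, sensitivity, and specificity figures above come straight from confusion-matrix counts. A minimal sketch with hypothetical counts (not the study's actual breakdown):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # fraction of cancer-containing slices caught
    specificity = tn / (tn + fp)   # fraction of healthy slices correctly passed
    return accuracy, sensitivity, specificity

# hypothetical counts for a slice classifier on a held-out test set
acc, sens, spec = classification_metrics(tp=90, fp=6, tn=100, fn=10)
```

For slice triage, sensitivity is the safety-critical number: a missed tumor-containing slice is more costly than a false flag that a reader can dismiss in a second.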
Collapse
Affiliation(s)
| | - Natsuko Onishi
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
- University of California, Department of Radiology, San Francisco, CA
| | - Katja Pinker
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
| | - Jeffrey S Reiner
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
| | - Jennifer Kaplan
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
| | - Elizabeth A Morris
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
| | - Elizabeth J Sutton
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, NY
| |
Collapse
|
45
|
Shui L, Ren H, Yang X, Li J, Chen Z, Yi C, Zhu H, Shui P. The Era of Radiogenomics in Precision Medicine: An Emerging Approach to Support Diagnosis, Treatment Decisions, and Prognostication in Oncology. Front Oncol 2021; 10:570465. [PMID: 33575207 PMCID: PMC7870863 DOI: 10.3389/fonc.2020.570465] [Citation(s) in RCA: 52] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2020] [Accepted: 12/08/2020] [Indexed: 02/05/2023] Open
Abstract
With the rapid development of new technologies, including artificial intelligence and genome sequencing, radiogenomics has emerged as a state-of-the-art science in the field of individualized medicine. Radiogenomics combines a large volume of quantitative data extracted from medical images with individual genomic phenotypes and constructs a prediction model through deep learning to stratify patients, guide therapeutic strategies, and evaluate clinical outcomes. Recent studies of various types of tumors demonstrate the predictive value of radiogenomics, and some of the issues in radiogenomic analysis, along with solutions from prior works, are presented. Although workflow criteria and internationally agreed guidelines for statistical methods still need to be confirmed, radiogenomics represents a repeatable and cost-effective approach for the detection of continuous changes and is a promising surrogate for invasive interventions. Therefore, radiogenomics could facilitate computer-aided diagnosis, treatment, and prediction of prognosis in patients with tumors in the routine clinical setting. Here, we summarize the integrated process of radiogenomics and introduce the crucial strategies and statistical algorithms involved in current studies.
Collapse
Affiliation(s)
- Lin Shui
- Department of Medical Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
| | - Haoyu Ren
- Department of General, Visceral and Transplantation Surgery, University Hospital, LMU Munich, Munich, Germany
| | - Xi Yang
- Department of Medical Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
| | - Jian Li
- Department of Pharmacy, The Affiliated Traditional Chinese Medicine Hospital of Southwest Medical University, Luzhou, China
| | - Ziwei Chen
- Department of Nephrology, Chengdu Integrated TCM and Western Medicine Hospital, Chengdu, China
| | - Cheng Yi
- Department of Medical Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
| | - Hong Zhu
- Department of Medical Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
| | - Pixian Shui
- School of Pharmacy, Southwest Medical University, Luzhou, China
| |
Collapse
|
46
|
Breath biopsy of breast cancer using sensor array signals and machine learning analysis. Sci Rep 2021; 11:103. [PMID: 33420275 PMCID: PMC7794369 DOI: 10.1038/s41598-020-80570-0] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2020] [Accepted: 12/16/2020] [Indexed: 12/16/2022] Open
Abstract
Breast cancer causes metabolic alteration, and volatile metabolites in the breath of patients may be used to diagnose breast cancer. The objective of this study was to develop a new breath test for breast cancer by analyzing volatile metabolites in the exhaled breath. We collected alveolar air from breast cancer patients and non-cancer controls and analyzed the volatile metabolites with an electronic nose composed of 32 carbon nanotube sensors. We used machine learning techniques to build prediction models for breast cancer and its molecular phenotyping. Between July 2016 and June 2018, we enrolled a total of 899 subjects. Using the random forest model, the prediction accuracy of breast cancer in the test set was 91% (95% CI: 0.85–0.95), sensitivity was 86%, specificity was 97%, positive predictive value was 97%, negative predictive value was 97%, the area under the receiver operating characteristic curve was 0.99 (95% CI: 0.99–1.00), and the kappa value was 0.83. The leave-one-out cross-validated discrimination accuracy and reliability of molecular phenotyping of breast cancer were 88.5 ± 12.1% and 0.77 ± 0.23, respectively. Breath tests with electronic noses can be applied intraoperatively to discriminate breast cancer and molecular subtype and support medical staff in choosing the best therapeutic option.
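A minimal sketch of the classification setup described in this abstract: a random forest trained on electronic-nose sensor readings to separate cancer from control subjects. The sensor values, labels, cohort size, and all hyperparameters below are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
n_subjects, n_sensors = 400, 32  # 32 sensors as in the e-nose; cohort size is illustrative
X = rng.normal(size=(n_subjects, n_sensors))
# Synthetic labels: signal carried by the first few sensor channels
y = (X[:, :4].sum(axis=1) + rng.normal(scale=0.5, size=n_subjects) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

acc = accuracy_score(y_te, clf.predict(X_te))
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

The reported metrics (sensitivity, specificity, PPV, NPV, kappa) can be derived from the same held-out predictions via a confusion matrix.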
Collapse
|
47
|
Liu T, Huang J, Liao T, Pu R, Liu S, Peng Y. A Hybrid Deep Learning Model for Predicting Molecular Subtypes of Human Breast Cancer Using Multimodal Data. Ing Rech Biomed 2021. [DOI: 10.1016/j.irbm.2020.12.002] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
48
|
Morgan MB, Mates JL. Applications of Artificial Intelligence in Breast Imaging. Radiol Clin North Am 2021; 59:139-148. [DOI: 10.1016/j.rcl.2020.08.007] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
|
49
|
Smedley NF, El-Saden S, Hsu W. Discovering and interpreting transcriptomic drivers of imaging traits using neural networks. Bioinformatics 2020; 36:3537-3548. [PMID: 32101278 DOI: 10.1093/bioinformatics/btaa126] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2019] [Revised: 01/07/2020] [Accepted: 02/19/2020] [Indexed: 12/20/2022] Open
Abstract
MOTIVATION Cancer heterogeneity is observed at multiple biological levels. To improve our understanding of these differences and their relevance in medicine, approaches to link organ- and tissue-level information from diagnostic images and cellular-level information from genomics are needed. However, these 'radiogenomic' studies often use linear or shallow models, depend on feature selection, or consider one gene at a time to map images to genes. Moreover, no study has systematically attempted to understand the molecular basis of imaging traits based on the interpretation of what the neural network has learned. These studies are thus limited in their ability to understand the transcriptomic drivers of imaging traits, which could provide additional context for determining clinical outcomes. RESULTS We present a neural network-based approach that takes high-dimensional gene expression data as input and performs non-linear mapping to an imaging trait. To interpret the models, we propose gene masking and gene saliency to extract learned relationships from radiogenomic neural networks. In glioblastoma patients, our models outperformed comparable classifiers (>0.10 AUC) and our interpretation methods were validated using a similar model to identify known relationships between genes and molecular subtypes. We found that tumor imaging traits had specific transcription patterns, e.g. edema and genes related to cellular invasion, and 10 radiogenomic traits were significantly predictive of survival. We demonstrate that neural networks can model transcriptomic heterogeneity to reflect differences in imaging and can be used to derive radiogenomic traits with clinical value. AVAILABILITY AND IMPLEMENTATION https://github.com/novasmedley/deepRadiogenomics. CONTACT whsu@mednet.ucla.edu. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
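An illustrative sketch of the "gene masking" idea described in this abstract: after a network is trained to map expression profiles to an imaging trait, each gene is masked in the input and the resulting drop in AUC indicates how much the model relied on it. The data, network size, and masking-by-zeroing scheme below are assumptions for illustration; the authors' actual implementation is at the repository linked above.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_samples, n_genes = 300, 20
X = rng.normal(size=(n_samples, n_genes))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # trait driven by the first two genes

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1).fit(X, y)
base_auc = roc_auc_score(y, net.predict_proba(X)[:, 1])

# Gene masking: zero out one gene at a time and record the AUC drop
drops = []
for g in range(n_genes):
    X_masked = X.copy()
    X_masked[:, g] = 0.0
    drops.append(base_auc - roc_auc_score(y, net.predict_proba(X_masked)[:, 1]))

# The informative genes should show the largest drops
most_important = int(np.argmax(drops))
```

Gene saliency follows the same interpretive goal but uses input gradients rather than ablation.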
Collapse
Affiliation(s)
- Nova F Smedley
- Medical & Imaging Informatics, Department of Radiological Sciences, Department of Bioengineering
| | - Suzie El-Saden
- Medical & Imaging Informatics, Department of Radiological Sciences
| | - William Hsu
- Medical & Imaging Informatics, Department of Radiological Sciences, Department of Bioengineering, Bioinformatics IDP, University of California Los Angeles, Los Angeles, CA 90024, USA
| |
Collapse
|
50
|
Contrast-enhanced cone beam breast CT features of breast cancers: correlation with immunohistochemical receptors and molecular subtypes. Eur Radiol 2020; 31:2580-2589. [PMID: 33009590 DOI: 10.1007/s00330-020-07277-8] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2020] [Revised: 07/30/2020] [Accepted: 09/09/2020] [Indexed: 12/15/2022]
Abstract
OBJECTIVES To investigate the association of contrast-enhanced cone beam breast CT (CE-CBBCT) features, immunohistochemical (IHC) receptors, and molecular subtypes in breast cancer. METHODS In this retrospective study, patients who underwent preoperative CE-CBBCT and received complete IHC results were analyzed. CE-CBBCT features were evaluated by two radiologists. Observer reproducibility and feature reliability were assessed. The association between CE-CBBCT features, IHC receptors, and molecular subtypes was analyzed using the chi-square, Mann-Whitney, and Kruskal-Wallis tests. Multivariate logistic regression was performed to assess the ability of combined imaging features to discriminate molecular subtypes. An ROC curve was used to evaluate prediction performance. RESULTS A total of 240 invasive cancers identified in 211 women were enrolled. Molecular subtypes of breast cancer were significantly associated with focality (number of lesions), lesion type, tumor size, lesion density, internal enhancement pattern, degree of lesion enhancement (ΔHU), mass shape, spiculation, calcifications, calcification distribution, and increased peripheral vascularity of the lesion (all p < 0.005), some of which also helped to differentiate IHC receptor status. A multivariate logistic regression model showed that tumor size (odds ratio, OR = 1.244), mass shape (OR = 0.311), spiculation (OR = 0.159), and internal enhancement pattern (OR = 0.227) were associated with differentiation between luminal and non-luminal subtypes (AUC = 0.809). Combined CE-CBBCT features, including lesion type (OR = 0.118), calcifications (OR = 0.181), and ΔHU (OR = 0.962), could be significant indicators of triple-negative versus HER-2-enriched subtypes (AUC = 0.913). CONCLUSIONS CE-CBBCT features have the potential to help predict IHC receptor status and distinguish molecular subtypes of breast cancer, which could in turn help to develop individual treatment decisions and prognosis predictions.
KEY POINTS • A total of 11 CE-CBBCT features were associated with molecular subtypes, some of which also helped to differentiate IHC receptor status. • Tumor size, irregular mass shape, spiculation, and internal enhancement pattern could help identify the luminal subtype. • Lesion type, calcification, and ΔHU could be significant indicators of HER-2-enriched versus triple-negative breast cancers.
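A minimal sketch of the multivariate logistic regression analysis described above: imaging features predict subtype membership, and odds ratios are obtained by exponentiating the fitted coefficients. The feature values, codings, and effect sizes below are synthetic stand-ins, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 240  # the study analyzed 240 invasive cancers; values here are simulated
tumor_size = rng.normal(2.5, 1.0, n)   # cm
mass_shape = rng.integers(0, 2, n)     # binary coding is illustrative
spiculation = rng.integers(0, 2, n)

# Simulate subtype labels from a known logistic model
logit = 0.8 * tumor_size - 1.2 * mass_shape - 1.5 * spiculation - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([tumor_size, mass_shape, spiculation])
model = LogisticRegression().fit(X, y)

# OR > 1 means a higher feature value raises the odds of the positive class
odds_ratios = np.exp(model.coef_[0])
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
```

In the paper, ORs such as 1.244 for tumor size and 0.159 for spiculation are read the same way: values above 1 increase and below 1 decrease the odds of the modeled subtype.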
Collapse
|