1
Picchio V, Pontecorvi V, Dhori X, Bordin A, Floris E, Cozzolino C, Frati G, Pagano F, Chimenti I, De Falco E. The emerging role of artificial intelligence applied to exosome analysis: from cancer biology to other biomedical fields. Life Sci 2025;375:123752. PMID: 40409585. DOI: 10.1016/j.lfs.2025.123752.
Abstract
In recent years, the versatility of exosomes has prompted their study in the biomedical field for diagnostic, prognostic, and therapeutic applications. Exosomes are small extracellular vesicles (30-150 nm) enclosed by a lipid bilayer, secreted by various cell types, and containing proteins, lipids, and DNA/RNA. They mediate intercellular communication and can influence multiple human physiological and pathological processes. So far, exosome analysis has revealed their promise as diagnostic tools for human pathologies. Concurrently, artificial intelligence (AI) has revolutionised multiple sectors, including medicine, owing to its ability to analyse large datasets and identify complex patterns. Combining exosome analysis with AI processing has opened a novel diagnostic approach for cancer and other diseases. This review explores the current applications and prospects of the combined use of exosomes and AI in medicine. First, we provide a biological overview of exosomes and their relevance in cancer biology. We then describe exosome isolation techniques and Raman spectroscopy/SERS analysis. Finally, we present a summarised essential guide to AI methods for non-experts, emphasising the advances made in AI applications for exosome characterisation and profiling in oncology research, as well as in other human diseases.
Affiliation(s)
- Vittorio Picchio
- Department of Angio Cardio Neurology, IRCCS Neuromed, 86077 Pozzilli, Italy
- Virginia Pontecorvi
- Department of Medical Surgical Sciences and Biotechnologies, Sapienza University, 04100 Latina, Italy
- Xhulio Dhori
- CINECA, Super Computing Applications and Innovation Department, 00185 Roma, Italy
- Antonella Bordin
- Department of Medical Surgical Sciences and Biotechnologies, Sapienza University, 04100 Latina, Italy
- Erica Floris
- Department of Medical Surgical Sciences and Biotechnologies, Sapienza University, 04100 Latina, Italy
- Claudia Cozzolino
- Department of Medical Surgical Sciences and Biotechnologies, Sapienza University, 04100 Latina, Italy
- Giacomo Frati
- Department of Angio Cardio Neurology, IRCCS Neuromed, 86077 Pozzilli, Italy; Department of Medical Surgical Sciences and Biotechnologies, Sapienza University, 04100 Latina, Italy
- Francesca Pagano
- Institute of Biochemistry and Cell Biology, National Council of Research (IBBC-CNR), 00015 Monterotondo, Italy
- Isotta Chimenti
- Department of Medical Surgical Sciences and Biotechnologies, Sapienza University, 04100 Latina, Italy; Maria Cecilia Hospital, GVM Care & Research, 48033 Cotignola, Italy
- Elena De Falco
- Department of Medical Surgical Sciences and Biotechnologies, Sapienza University, 04100 Latina, Italy; Maria Cecilia Hospital, GVM Care & Research, 48033 Cotignola, Italy
2
Huang Y, Leotta NJ, Hirsch L, Gullo RL, Hughes M, Reiner J, Saphier NB, Myers KS, Panigrahi B, Ambinder E, Di Carlo P, Grimm LJ, Lowell D, Yoon S, Ghate SV, Parra LC, Sutton EJ. Cross-site Validation of AI Segmentation and Harmonization in Breast MRI. Journal of Imaging Informatics in Medicine 2025;38:1642-1652. PMID: 39320547. DOI: 10.1007/s10278-024-01266-9.
Abstract
This work aims to perform a cross-site validation of automated segmentation for breast cancers in MRI and to compare the performance to radiologists. A three-dimensional (3D) U-Net was trained to segment cancers in dynamic contrast-enhanced axial MRIs using a large dataset from Site 1 (n = 15,266; 449 malignant and 14,817 benign). Performance was validated on site-specific test data from this and two additional sites, and common publicly available testing data. Four radiologists from each of the three clinical sites provided two-dimensional (2D) segmentations as ground truth. Segmentation performance did not differ between the network and radiologists on the test data from Sites 1 and 2 or the common public data (median Dice score Site 1, network 0.86 vs. radiologist 0.85, n = 114; Site 2, 0.91 vs. 0.91, n = 50; common: 0.93 vs. 0.90). For Site 3, an affine input layer was fine-tuned using segmentation labels, resulting in comparable performance between the network and radiologist (0.88 vs. 0.89, n = 42). Radiologist performance differed on the common test data, and the network numerically outperformed 11 of the 12 radiologists (median Dice: 0.85-0.94, n = 20). In conclusion, a deep network with a novel supervised harmonization technique matches radiologists' performance in MRI tumor segmentation across clinical sites. We make code and weights publicly available to promote reproducible AI in radiology.
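The comparison above hinges on the Dice score between predicted and reference tumor masks. Below is a minimal sketch of that overlap metric, assuming binary NumPy masks; the array sizes and values are illustrative, not the study's data.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient between two binary masks (1 = tumor, 0 = background)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy example: two overlapping 2D masks
pred = np.zeros((64, 64), dtype=np.uint8)
truth = np.zeros((64, 64), dtype=np.uint8)
pred[10:40, 10:40] = 1
truth[15:45, 15:45] = 1
print(f"Dice: {dice_score(pred, truth):.3f}")
```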
Affiliation(s)
- Yu Huang
- Department of Biomedical Engineering, The City College of the City University of New York, 160 Convent Ave, New York, NY, 10031, USA
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- Nicholas J Leotta
- Department of Biomedical Engineering, The City College of the City University of New York, 160 Convent Ave, New York, NY, 10031, USA
- Lukas Hirsch
- Department of Biomedical Engineering, The City College of the City University of New York, 160 Convent Ave, New York, NY, 10031, USA
- Roberto Lo Gullo
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- Mary Hughes
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- Jeffrey Reiner
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- Nicole B Saphier
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- Kelly S Myers
- Department of Radiology and Radiological Science, Johns Hopkins Medicine, Baltimore, MD, 21224, USA
- Babita Panigrahi
- Department of Radiology and Radiological Science, Johns Hopkins Medicine, Baltimore, MD, 21224, USA
- Emily Ambinder
- Department of Radiology and Radiological Science, Johns Hopkins Medicine, Baltimore, MD, 21224, USA
- Philip Di Carlo
- Department of Radiology and Radiological Science, Johns Hopkins Medicine, Baltimore, MD, 21224, USA
- Lars J Grimm
- Department of Radiology, Duke University School of Medicine, Durham, NC, 27710, USA
- Dorothy Lowell
- Department of Radiology, Duke University School of Medicine, Durham, NC, 27710, USA
- Sora Yoon
- Department of Radiology, Duke University School of Medicine, Durham, NC, 27710, USA
- Sujata V Ghate
- Department of Radiology, Duke University School of Medicine, Durham, NC, 27710, USA
- Lucas C Parra
- Department of Biomedical Engineering, The City College of the City University of New York, 160 Convent Ave, New York, NY, 10031, USA
- Elizabeth J Sutton
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
3
Zhao J, Li L, Wang Y, Huo J, Wang J, Xue H, Cai Y. Identification of gene signatures associated with lactation for predicting prognosis and treatment response in breast cancer patients through machine learning. Sci Rep 2025;15:13575. PMID: 40253524. PMCID: PMC12009422. DOI: 10.1038/s41598-025-98255-x.
Abstract
Lactylation, a newly discovered histone modification, has been found to be present in various cancers and to contribute to their development. The aim of this study was to investigate the potential relationship between lactylation and the prognosis of breast cancer (BC) patients. Lactylation-associated subtypes were obtained by unsupervised consensus clustering analysis. A lactylation-related gene signature (LRS) was constructed with 15 machine learning algorithms, and the relationship between the LRS and the tumor microenvironment (TME), as well as drug sensitivity, was analyzed. In addition, the expression of the LRS genes in different cell types was explored by single-cell analysis and spatial transcriptomics. The expression levels of the LRS genes in clinical tissues were verified by RT-PCR. Finally, potential small-molecule compounds were identified with CMap, and molecular docking models of the proteins and small-molecule compounds were constructed. The LRS comprised 6 key genes (SHCBP1, SIM2, VGF, GABRQ, SUSD3, and CLIC6). BC patients in the high-LRS group had a poorer prognosis and a TME that promoted tumor progression. Single-cell analysis and spatial transcriptomics revealed differential expression of the key genes across cell types. PCR showed that SHCBP1, SIM2, VGF, GABRQ, and SUSD3 were up-regulated in cancer tissues, whereas CLIC6 was down-regulated. Arachidonyltrifluoromethane, AH-6809, W-13, and clofibrate can be used as potential target drugs for SHCBP1, VGF, GABRQ, and SUSD3, respectively. The gene signature we constructed predicts the prognosis as well as the treatment response of BC patients well. In addition, our predicted small-molecule complexes provide an important reference for personalized treatment of breast cancer patients.
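The abstract does not spell out how the LRS is scored, but signature-based risk scores of this kind are commonly computed as a weighted sum of the signature genes' expression, with a median split defining high- and low-risk groups. The following is a minimal sketch under that assumption; the coefficients and expression matrix are illustrative placeholders, not the study's fitted values.

```python
import numpy as np

# Six LRS genes named in the abstract; the coefficients below are purely
# illustrative placeholders, not the values fitted in the study.
genes = ["SHCBP1", "SIM2", "VGF", "GABRQ", "SUSD3", "CLIC6"]
coefs = np.array([0.8, 0.5, 0.3, -0.2, -0.6, -0.4])

# expr: samples x genes matrix of (log-normalised) expression values
rng = np.random.default_rng(0)
expr = rng.normal(size=(100, len(genes)))

risk_score = expr @ coefs                       # weighted sum per patient
high_risk = risk_score > np.median(risk_score)  # median split into groups
print(f"High-LRS patients: {high_risk.sum()} / {len(high_risk)}")
```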
Affiliation(s)
- Jinfeng Zhao
- College of Physical Education, Shanxi University, Taiyuan, Shanxi, China
- Longpeng Li
- College of Physical Education, Shanxi University, Taiyuan, Shanxi, China
- Yaxin Wang
- College of Physical Education, Shanxi University, Taiyuan, Shanxi, China
- Jiayu Huo
- College of Physical Education, Shanxi University, Taiyuan, Shanxi, China
- Jirui Wang
- College of Physical Education, Shanxi University, Taiyuan, Shanxi, China
- Huiwen Xue
- College of Physical Education, Shanxi University, Taiyuan, Shanxi, China
- Yue Cai
- Department of Anesthesiology, Shanxi Province Cancer Hospital/Shanxi Hospital Affiliated to Cancer Hospital, Chinese Academy of Medical Sciences/Cancer Hospital Affiliated to Shanxi Medical, Taiyuan, Shanxi, China
4
Groheux D, Ferrer L, Vargas J, Martineau A, Borgel A, Teixeira L, Menu P, Bertheau P, Gallinato O, Colin T, Lehmann-Che J. FDG-PET/CT and Multimodal Machine Learning Model Prediction of Pathological Complete Response to Neoadjuvant Chemotherapy in Triple-Negative Breast Cancer. Cancers (Basel) 2025;17:1249. PMID: 40227836. PMCID: PMC11987901. DOI: 10.3390/cancers17071249.
Abstract
Purpose: Triple-negative breast cancer (TNBC) is a biologically and clinically heterogeneous disease, associated with poorer outcomes when compared with other subtypes of breast cancer. Neoadjuvant chemotherapy (NAC) is often given before surgery, and achieving a pathological complete response (pCR) has been associated with patient outcomes. There is thus strong clinical interest in the ability to accurately predict pCR status using baseline data. Materials and Methods: A cohort of 57 TNBC patients who underwent FDG-PET/CT before NAC was analyzed to develop a machine learning (ML) algorithm predictive of pCR. A total of 241 predictors were collected for each patient: 11 clinical features, 11 histopathological features, 13 genomic features, and 206 PET features, including 195 radiomic features. The optimization criterion was the area under the ROC curve (AUC). Event-free survival (EFS) was estimated using the Kaplan-Meier method. Results: The best ML algorithm reached an AUC of 0.82. The features with the highest weight in the algorithm were a mix of PET (including radiomics), histopathological, genomic, and clinical features, highlighting the importance of truly multimodal analysis. Patients with predicted pCR tended to have a longer EFS than patients with predicted non-pCR, even though this difference was not significant, probably due to the small sample size and few events observed (p = 0.09). Conclusions: This study suggests that ML applied to baseline multimodal data can help predict pCR status after NAC for TNBC patients and may identify correlations with long-term outcomes. Patients predicted as non-pCR may benefit from concomitant treatment with immunotherapy or dose intensification.
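As a rough illustration of the multimodal setup described above, the sketch below concatenates clinical, histopathological, genomic, and PET feature blocks with the dimensions given in the abstract and evaluates a classifier by cross-validated AUC. The data are synthetic and a logistic regression stands in for the study's best-performing algorithm, which is not specified here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 57  # cohort size from the abstract

# Hypothetical feature blocks with the dimensions given in the abstract
clinical = rng.normal(size=(n, 11))
histo = rng.normal(size=(n, 11))
genomic = rng.normal(size=(n, 13))
pet = rng.normal(size=(n, 206))
X = np.hstack([clinical, histo, genomic, pet])   # 241 predictors in total
y = rng.integers(0, 2, size=n)                   # pCR status (toy labels)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"Cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```

Cross-validation is the usual safeguard against optimistic AUC estimates when, as here, the number of predictors greatly exceeds the number of patients.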
Affiliation(s)
- David Groheux
- Department of Nuclear Medicine, AP-HP, Saint-Louis Hospital, F-75010 Paris, France
- Université Paris Cité, Inserm, Institut de Recherche Saint Louis (IRSL), F-75010 Paris, France
- Loïc Ferrer
- SOPHiA GENETICS, F-33600 Pessac, France
- Jennifer Vargas
- SOPHiA GENETICS, F-33600 Pessac, France
- Antoine Martineau
- Department of Nuclear Medicine, AP-HP, Saint-Louis Hospital, F-75010 Paris, France
- Adrien Borgel
- Université Paris Cité, Inserm, Institut de Recherche Saint Louis (IRSL), F-75010 Paris, France
- Molecular Oncology Unit, AP-HP, Saint Louis Hospital, F-75010 Paris, France
- Luis Teixeira
- Université Paris Cité, Inserm, Institut de Recherche Saint Louis (IRSL), F-75010 Paris, France
- Breast Diseases Unit, AP-HP, Saint Louis Hospital, F-75010 Paris, France
- Philippe Bertheau
- Department of Pathology, AP-HP, Saint Louis Hospital, F-75010 Paris, France
- Olivier Gallinato
- SOPHiA GENETICS, F-33600 Pessac, France
- Thierry Colin
- SOPHiA GENETICS, F-33600 Pessac, France
- Jacqueline Lehmann-Che
- Université Paris Cité, Inserm, Institut de Recherche Saint Louis (IRSL), F-75010 Paris, France
- Molecular Oncology Unit, AP-HP, Saint Louis Hospital, F-75010 Paris, France
5
Sugawara K, Takaya E, Inamori R, Konaka Y, Sato J, Shiratori Y, Hario F, Kobayashi T, Ueda T, Okamoto Y. Breast cancer classification based on breast tissue structures using the Jigsaw puzzle task in self-supervised learning. Radiol Phys Technol 2025;18:209-218. PMID: 39760975. PMCID: PMC11876229. DOI: 10.1007/s12194-024-00874-y.
Abstract
Self-supervised learning (SSL) has gained attention in the medical field as a deep learning approach utilizing unlabeled data. The Jigsaw puzzle task in SSL enables models to learn both the features of images and the positional relationships within images. In breast cancer diagnosis, radiologists evaluate not only lesion-specific features but also the surrounding breast structures. However, deep learning models that adopt a diagnostic approach similar to that of human radiologists are still limited. This study aims to evaluate the effectiveness of the Jigsaw puzzle task in characterizing breast tissue structures for breast cancer classification on mammographic images. Using the Chinese Mammography Database (CMMD), we compared four pre-training pipelines: (1) IN-Jig, pre-trained with both the ImageNet classification task and the Jigsaw puzzle task; (2) Scratch-Jig, pre-trained only with the Jigsaw puzzle task; (3) IN, pre-trained only with the ImageNet classification task; and (4) Scratch, trained from random initialization without any pre-training tasks. All pipelines were fine-tuned for binary classification to distinguish between the presence or absence of breast cancer. Performance was evaluated based on the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. Additionally, performance was analyzed in detail across different radiological findings and breast densities, and regions of interest were visualized using gradient-weighted class activation mapping (Grad-CAM). The AUCs for the four pipelines were 0.925, 0.921, 0.918, and 0.909, respectively. Our results suggest that the Jigsaw puzzle task is an effective pre-training method for breast cancer classification, with the potential to enhance diagnostic accuracy with limited data.
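A minimal sketch of the Jigsaw puzzle pretext task described above: an image is cut into a grid of tiles, the tiles are shuffled according to one of a fixed set of permutations, and the model is trained to predict which permutation was applied. The grid size, permutation set, and stand-in image below are illustrative; the network and training loop are omitted.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)

# Fixed set of candidate permutations of a 2x2 grid (real pipelines often use
# a 3x3 grid with a subset of maximally distant permutations).
perm_set = list(permutations(range(4)))

def jigsaw_sample(image: np.ndarray, grid: int = 2):
    """Return shuffled tiles and the index of the permutation applied."""
    h, w = image.shape
    th, tw = h // grid, w // grid
    tiles = [image[r*th:(r+1)*th, c*tw:(c+1)*tw]
             for r in range(grid) for c in range(grid)]
    label = rng.integers(len(perm_set))
    shuffled = [tiles[i] for i in perm_set[label]]
    return np.stack(shuffled), label   # the model learns to predict `label`

mammogram = rng.random((256, 256))     # stand-in for a mammographic image
tiles, label = jigsaw_sample(mammogram)
print(tiles.shape, "permutation id:", label)
```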
Affiliation(s)
- Keisuke Sugawara
- Department of Diagnostic Radiology, Tohoku University Graduate School of Medicine, 2-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8575, Japan
- Eichi Takaya
- Department of Diagnostic Imaging, Tohoku University Graduate School of Medicine, 2-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8575, Japan
- AI Lab, Tohoku University Hospital, 1-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8575, Japan
- Ryusei Inamori
- Department of Radiological Imaging and Informatics, Tohoku University Graduate School of Medicine, 2-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8575, Japan
- Yuma Konaka
- Department of Diagnostic Radiology, Tohoku University Graduate School of Medicine, 2-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8575, Japan
- Jumpei Sato
- Department of Diagnostic Radiology, Tohoku University Graduate School of Medicine, 2-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8575, Japan
- Yuta Shiratori
- Department of Diagnostic Imaging, Tohoku University Graduate School of Medicine, 2-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8575, Japan
- Fumihito Hario
- Department of Diagnostic Imaging, Tohoku University Graduate School of Medicine, 2-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8575, Japan
- Tomoya Kobayashi
- Department of Diagnostic Imaging, Tohoku University Graduate School of Medicine, 2-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8575, Japan
- AI Lab, Tohoku University Hospital, 1-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8575, Japan
- Takuya Ueda
- Department of Diagnostic Radiology, Tohoku University Graduate School of Medicine, 2-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8575, Japan
- Yoshikazu Okamoto
- Department of Diagnostic Imaging, Tohoku University Graduate School of Medicine, 2-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8575, Japan
- AI Lab, Tohoku University Hospital, 1-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi, 980-8575, Japan
6
Rugină AI, Ungureanu A, Giuglea C, Marinescu SA. Artificial Intelligence in Breast Reconstruction: A Narrative Review. Medicina (Kaunas, Lithuania) 2025;61:440. PMID: 40142251. PMCID: PMC11944005. DOI: 10.3390/medicina61030440.
Abstract
Breast reconstruction following mastectomy or sectorectomy significantly impacts the quality of life and psychological well-being of breast cancer patients. Since its inception in the 1950s, artificial intelligence (AI) has gradually entered the medical field, promising to transform surgical planning, intraoperative guidance, postoperative care, and medical research. This article examines AI applications in breast reconstruction, supported by recent studies. AI shows promise in enhancing imaging for tumor detection and surgical planning, improving microsurgical precision, predicting complications such as flap failure, and optimizing postoperative monitoring. However, challenges remain, including data quality, safety, algorithm transparency, and clinical integration. Despite these shortcomings, AI has the potential to revolutionize breast reconstruction by improving preoperative planning, surgical precision, operative efficiency, and patient outcomes. This review provides a foundation for further research as AI continues to evolve and clinical trials expand its applications, offering greater benefits to patients and healthcare providers.
Affiliation(s)
- Andrei Iulian Rugină
- Department of Plastic and Reconstructive Surgery, “Bagdasar-Arseni” Emergency Hospital, University of Medicine and Pharmacy “Carol Davila”, Blvd. Eroii Sanitari Nr. 8, Sector 5, 050474 Bucharest, Romania
- Andreea Ungureanu
- Department of Plastic and Reconstructive Surgery, “Bagdasar-Arseni” Emergency Hospital, University of Medicine and Pharmacy “Carol Davila”, Blvd. Eroii Sanitari Nr. 8, Sector 5, 050474 Bucharest, Romania
- Carmen Giuglea
- Department of Plastic and Reconstructive Surgery, University of Medicine and Pharmacy “Carol Davila”, Blvd. Eroii Sanitari Nr. 8, Sector 5, 050474 Bucharest, Romania
- Silviu Adrian Marinescu
- Department of Plastic and Reconstructive Surgery, “Bagdasar-Arseni” Emergency Hospital, University of Medicine and Pharmacy “Carol Davila”, Blvd. Eroii Sanitari Nr. 8, Sector 5, 050474 Bucharest, Romania
- Department of Plastic and Reconstructive Surgery, University of Medicine and Pharmacy “Carol Davila”, Blvd. Eroii Sanitari Nr. 8, Sector 5, 050474 Bucharest, Romania
7
García-Barberán V, Gómez Del Pulgar ME, Guamán HM, Benito-Martin A. The times they are AI-changing: AI-powered advances in the application of extracellular vesicles to liquid biopsy in breast cancer. Extracellular Vesicles and Circulating Nucleic Acids 2025;6:128-140. PMID: 40206803. PMCID: PMC11977355. DOI: 10.20517/evcna.2024.51.
Abstract
Artificial intelligence (AI) is revolutionizing scientific research by facilitating a paradigm shift in data analysis and discovery. This transformation is characterized by a fundamental change in scientific methods and concepts due to AI's ability to process vast datasets with unprecedented speed and accuracy. In breast cancer research, AI aids in early detection, prognosis, and personalized treatment strategies. Liquid biopsy, a noninvasive tool for detecting circulating tumor traits, could ideally benefit from AI's analytical capabilities, enhancing the detection of minimal residual disease and improving treatment monitoring. Extracellular vesicles (EVs), which are key elements in cell communication and cancer progression, could be analyzed with AI to identify disease-specific biomarkers. AI combined with EV analysis promises an enhancement in diagnosis precision, aiding in early detection and treatment monitoring. Studies show that AI can differentiate cancer types and predict drug efficacy, exemplifying its potential in personalized medicine. Overall, the integration of AI in biomedical research and clinical practice promises significant changes and advancements in diagnostics, personalized medicine-based approaches, and our understanding of complex diseases like cancer.
Affiliation(s)
- Vanesa García-Barberán
- Molecular Oncology Laboratory, Medical Oncology Department, Hospital Clínico Universitario San Carlos, Instituto de Investigación Sanitaria San Carlos (IdISSC), Madrid 28040, Spain
- María Elena Gómez Del Pulgar
- Molecular Oncology Laboratory, Medical Oncology Department, Hospital Clínico Universitario San Carlos, Instituto de Investigación Sanitaria San Carlos (IdISSC), Madrid 28040, Spain
- Heidy M. Guamán
- Molecular Oncology Laboratory, Medical Oncology Department, Hospital Clínico Universitario San Carlos, Instituto de Investigación Sanitaria San Carlos (IdISSC), Madrid 28040, Spain
- Alberto Benito-Martin
- Molecular Oncology Laboratory, Medical Oncology Department, Hospital Clínico Universitario San Carlos, Instituto de Investigación Sanitaria San Carlos (IdISSC), Madrid 28040, Spain
- Facultad de Medicina, Universidad Alfonso X el Sabio, Madrid 28691, Spain
8
Dai M, Yan Y, Li Z, Xiao J. Machine-learning models for differentiating benign and malignant breast masses: Integrating automated breast volume scanning intra-tumoral, peri-tumoral features, and clinical information. Digit Health 2025;11:20552076251332738. PMID: 40177119. PMCID: PMC11963789. DOI: 10.1177/20552076251332738.
Abstract
Background: Differentiating between benign and malignant breast masses is critical for clinical decision-making. Automated breast volume scanning (ABVS) provides high-resolution three-dimensional imaging, addressing the limitations of conventional ultrasound. However, the impact of peritumoral region size on predictive performance has not been systematically studied. This study aims to optimize diagnostic performance by integrating radiomics features and clinical data using multiple machine-learning models. Methods: This retrospective study included ABVS images and clinical data from 250 patients with breast masses. Radiomics features were extracted from both intratumoral and peritumoral regions (5, 10, and 20 mm). These features, combined with clinical data, were used to develop models based on four algorithms: support vector machine, random forest, extreme gradient boosting, and light gradient boosting machine (LGBM). Model performance was evaluated using the area under the receiver operating characteristic curve (AUC), calibration curves, and decision curves, with SHapley Additive exPlanations (SHAP) analysis employed for interpretability. Results: The inclusion of peritumoral features improved diagnostic performance to varying degrees, with the model incorporating a 10 mm peritumoral region achieving the highest overall accuracy. Combining radiomics with clinical features further enhanced predictive performance. The LGBM model outperformed the other algorithms across subgroups, achieving a maximum AUC of 0.909, an accuracy of 0.878, and an F1-score of 0.971. SHAP analysis revealed the contribution of key features, improving model interpretability. Conclusion: This study demonstrates the value of integrating radiomics and clinical features for breast mass diagnosis, with optimized peritumoral regions enhancing model performance. The LGBM model emerged as the preferred algorithm due to its superior performance. These findings provide strong support for the clinical application of ABVS imaging and future multicenter studies, highlighting the importance of microenvironmental features in diagnosis.
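A compact sketch of the final modeling step described above: an LGBM classifier trained on combined radiomic and clinical features, with SHAP used to quantify feature contributions. It assumes the lightgbm and shap packages are installed; the feature matrix is synthetic and the hyperparameters are illustrative, not those tuned in the study.

```python
import numpy as np
from lightgbm import LGBMClassifier  # assumes the lightgbm package is installed
import shap                          # assumes the shap package is installed

rng = np.random.default_rng(0)
n, n_radiomic, n_clinical = 250, 40, 5          # 250 patients as in the abstract
X = rng.normal(size=(n, n_radiomic + n_clinical))
y = rng.integers(0, 2, size=n)                  # benign (0) vs malignant (1)

model = LGBMClassifier(n_estimators=200, learning_rate=0.05, random_state=0)
model.fit(X, y)

# SHAP values quantify each feature's contribution to individual predictions
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(np.shape(shap_values))
```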
Affiliation(s)
- Meixue Dai
- Department of Ultrasound, The Third Xiangya Hospital, Central South University, Changsha, China
- Yueqiong Yan
- Department of Ultrasound, The Third Xiangya Hospital, Central South University, Changsha, China
- Zhong Li
- Department of Orthodontics, Hunan Xiangya Stomatological Hospital, Central South University, Changsha, China
- Jidong Xiao
- Department of Ultrasound, The Third Xiangya Hospital, Central South University, Changsha, China
9
Wang YM, Wang CY, Liu KY, Huang YH, Chen TB, Chiu KN, Liang CY, Lu NH. CNN-Based Cross-Modality Fusion for Enhanced Breast Cancer Detection Using Mammography and Ultrasound. Tomography 2024;10:2038-2057. PMID: 39728907. DOI: 10.3390/tomography10120145.
Abstract
Background/Objectives: Breast cancer is a leading cause of mortality among women in Taiwan and globally. Non-invasive imaging methods, such as mammography and ultrasound, are critical for early detection, yet standalone modalities have limitations in regard to their diagnostic accuracy. This study aims to enhance breast cancer detection through a cross-modality fusion approach combining mammography and ultrasound imaging, using advanced convolutional neural network (CNN) architectures. Materials and Methods: Breast images were sourced from public datasets, including the RSNA, the PAS, and Kaggle, and categorized into malignant and benign groups. Data augmentation techniques were used to address imbalances in the ultrasound dataset. Three models were developed: (1) pre-trained CNNs integrated with machine learning classifiers, (2) transfer learning-based CNNs, and (3) a custom-designed 17-layer CNN for direct classification. The performance of the models was evaluated using metrics such as accuracy and the Kappa score. Results: The custom 17-layer CNN outperformed the other models, achieving an accuracy of 0.964 and a Kappa score of 0.927. The transfer learning model achieved moderate performance (accuracy 0.846, Kappa 0.694), while the pre-trained CNNs with machine learning classifiers yielded the lowest results (accuracy 0.780, Kappa 0.559). Cross-modality fusion proved effective in leveraging the complementary strengths of mammography and ultrasound imaging. Conclusions: This study demonstrates the potential of cross-modality imaging and tailored CNN architectures to significantly improve diagnostic accuracy and reliability in breast cancer detection. The custom-designed model offers a practical solution for early detection, potentially reducing false positives and false negatives, and improving patient outcomes through timely and accurate diagnosis.
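A minimal sketch of model type (1) above, in which a pre-trained CNN acts as a frozen feature extractor and a classical machine learning classifier makes the final decision. It assumes TensorFlow/Keras and scikit-learn; the images and labels are synthetic stand-ins for the curated mammography/ultrasound data.

```python
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from sklearn.svm import SVC

# Pre-trained CNN used purely as a frozen feature extractor
backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")

rng = np.random.default_rng(0)
images = rng.uniform(0, 255, size=(32, 224, 224, 3)).astype("float32")  # toy batch
labels = rng.integers(0, 2, size=32)                                    # benign/malignant

features = backbone.predict(preprocess_input(images), verbose=0)  # shape (32, 2048)
clf = SVC(kernel="rbf").fit(features, labels)
print("Training accuracy:", clf.score(features, labels))
```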
Affiliation(s)
- Yi-Ming Wang
- Department of Critical Care Medicine, E-DA Hospital, I-Shou University, Kaohsiung City 824005, Taiwan
- Chi-Yuan Wang
- Department of Medical Imaging and Radiological Science, I-Shou University, Kaohsiung City 824005, Taiwan
- Kuo-Ying Liu
- Department of Radiology, E-DA Cancer Hospital, I-Shou University, Kaohsiung City 824005, Taiwan
- Yung-Hui Huang
- Department of Medical Imaging and Radiological Science, I-Shou University, Kaohsiung City 824005, Taiwan
- Tai-Been Chen
- Department of Radiological Technology, Faculty of Medical Technology, Teikyo University, Tokyo 173-8605, Japan
- Kon-Ning Chiu
- Department of Business Management, National Sun Yat-sen University, Kaohsiung City 804201, Taiwan
- Institute of Medical Science and Technology, National Sun Yat-sen University, Kaohsiung City 804201, Taiwan
- Chih-Yu Liang
- Department of Emergency Medicine, E-DA Hospital, I-Shou University, Kaohsiung City 824005, Taiwan
- Nan-Han Lu
- Department of Medical Imaging and Radiological Science, I-Shou University, Kaohsiung City 824005, Taiwan
- Department of Radiology, E-DA Cancer Hospital, I-Shou University, Kaohsiung City 824005, Taiwan
- School of Medicine, College of Medicine, I-Shou University, Kaohsiung City 824005, Taiwan
10
Bahl M, Chang JM, Mullen LA, Berg WA. Artificial Intelligence for Breast Ultrasound: AJR Expert Panel Narrative Review. AJR Am J Roentgenol 2024;223:e2330645. PMID: 38353449. DOI: 10.2214/ajr.23.30645.
Abstract
Breast ultrasound is used in a wide variety of clinical scenarios, including both diagnostic and screening applications. Limitations of ultrasound, however, include its low specificity and, for automated breast ultrasound screening, the time necessary to review whole-breast ultrasound images. As of this writing, four AI tools that are approved or cleared by the FDA address these limitations. Current tools, which are intended to provide decision support for lesion classification and/or detection, have been shown to increase specificity among nonspecialists and to decrease interpretation times. Potential future applications include triage of patients with palpable masses in low-resource settings, preoperative prediction of axillary lymph node metastasis, and preoperative prediction of neoadjuvant chemotherapy response. Challenges in the development and clinical deployment of AI for ultrasound include the limited availability of curated training datasets compared with mammography, the high variability in ultrasound image acquisition due to equipment- and operator-related factors (which may limit algorithm generalizability), and the lack of postimplementation evaluation studies. Furthermore, current AI tools for lesion classification were developed based on 2D data, but diagnostic accuracy could potentially be improved if multimodal ultrasound data were used, such as color Doppler, elastography, cine clips, and 3D imaging.
Affiliation(s)
- Manisha Bahl
- Department of Radiology, Massachusetts General Hospital, 55 Fruit St, WAC 240, Boston, MA 02114
- Jung Min Chang
- Department of Radiology, Seoul National University Hospital, Seoul, Korea
- Lisa A Mullen
- Department of Radiology and Radiological Science, Johns Hopkins Medicine, Baltimore, MD
- Wendie A Berg
- Department of Radiology, University of Pittsburgh School of Medicine, Pittsburgh, PA
11
Gullo RL, Brunekreef J, Marcus E, Han LK, Eskreis-Winkler S, Thakur SB, Mann R, Lipman KG, Teuwen J, Pinker K. AI Applications to Breast MRI: Today and Tomorrow. J Magn Reson Imaging 2024;60:2290-2308. PMID: 38581127. PMCID: PMC11452568. DOI: 10.1002/jmri.29358.
Abstract
In breast imaging, there is an unrelenting increase in the demand for breast imaging services, partly explained by continuously expanding imaging indications in breast diagnosis and treatment. As the human workforce providing these services is not growing at the same rate, the implementation of artificial intelligence (AI) in breast imaging has gained significant momentum to maximize workflow efficiency and increase productivity while concurrently improving diagnostic accuracy and patient outcomes. Thus far, the implementation of AI in breast imaging is at the most advanced stage with mammography and digital breast tomosynthesis techniques, followed by ultrasound, whereas the implementation of AI in breast magnetic resonance imaging (MRI) is not moving along as rapidly due to the complexity of MRI examinations and fewer available datasets. Nevertheless, there is persisting interest in AI-enhanced breast MRI applications, even as the use of and indications for breast MRI continue to expand. This review presents an overview of the basic concepts of AI imaging analysis and subsequently reviews the use cases for AI-enhanced MRI interpretation, that is, breast MRI triaging and lesion detection, lesion classification, prediction of treatment response, risk assessment, and image quality. Finally, it provides an outlook on the barriers and facilitators for the adoption of AI in breast MRI. LEVEL OF EVIDENCE: 5. TECHNICAL EFFICACY: Stage 6.
Affiliation(s)
- Roberto Lo Gullo
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Joren Brunekreef
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Eric Marcus
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Lynn K Han
- Weill Cornell Medical College, New York-Presbyterian Hospital, New York, NY, USA
- Sarah Eskreis-Winkler
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Sunitha B Thakur
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Ritse Mann
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
- Kevin Groot Lipman
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
- Jonas Teuwen
- AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
- Katja Pinker
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
12
Islam N, Hasib KM, Mridha MF, Alfarhood S, Safran M, Bhuyan MK. Fusing global context with multiscale context for enhanced breast cancer classification. Sci Rep 2024;14:27358. PMID: 39521803. PMCID: PMC11550815. DOI: 10.1038/s41598-024-78363-w.
Abstract
Breast cancer is the second most common type of cancer among women. Early detection of breast cancer can prevent its progression to more advanced stages, thereby increasing the probability of favorable treatment outcomes. Histopathological images are commonly used for breast cancer classification because of their detailed cellular information. Existing diagnostic approaches rely on convolutional neural networks (CNNs), which are limited to local context, resulting in lower classification accuracy. We therefore present a fusion model composed of a Vision Transformer (ViT) and a custom Atrous Spatial Pyramid Pooling (ASPP) network with an attention mechanism for effectively classifying breast cancer from histopathological images. The ViT enables the model to capture global features, while the ASPP network accommodates multiscale features. Fusing the features derived from the two branches resulted in a robust breast cancer classifier. With the help of a five-stage image preprocessing technique, the proposed model achieved 100% accuracy in classifying breast cancer on the BreakHis dataset at the 100X and 400X magnification factors. At 40X and 200X magnifications, the model achieved 99.25% and 98.26% classification accuracy, respectively. Given its classification performance on histopathological images, the model can be considered a dependable option for breast cancer classification.
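A lightweight sketch of the fusion idea described above: multiscale features from parallel atrous (dilated) convolutions are concatenated with a global feature vector and fed to a classification head. It assumes Keras; the ViT branch is replaced here by a plain pooled convolution purely to keep the example short, and all layer sizes are illustrative rather than those of the published model.

```python
from tensorflow.keras import layers, Model

def aspp_block(x, filters=64, rates=(1, 6, 12)):
    """Parallel atrous (dilated) convolutions capture multiscale context."""
    branches = [layers.Conv2D(filters, 3, padding="same", dilation_rate=r,
                              activation="relu")(x) for r in rates]
    return layers.Concatenate()(branches)

inp = layers.Input(shape=(224, 224, 3))
multiscale = layers.GlobalAveragePooling2D()(aspp_block(inp))

# Stand-in for the ViT branch: in the paper this is a Vision Transformer's
# global embedding; a pooled convolution keeps the sketch lightweight.
global_feat = layers.GlobalAveragePooling2D()(
    layers.Conv2D(64, 3, padding="same", activation="relu")(inp))

fused = layers.Concatenate()([global_feat, multiscale])
out = layers.Dense(1, activation="sigmoid")(fused)   # benign vs malignant
model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy")
print(model.output_shape)
```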
Affiliation(s)
- Niful Islam
- Department of Computer Science and Engineering, United International University, Dhaka, 1212, Bangladesh
- Khan Md Hasib
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Mirpur, Dhaka, 1216, Bangladesh
- M F Mridha
- Department of Computer Science, American International University - Bangladesh, Dhaka, 1229, Bangladesh
- Sultan Alfarhood
- Department of Computer Science, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh, 11543, Saudi Arabia
- Mejdl Safran
- Department of Computer Science, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh, 11543, Saudi Arabia
- M K Bhuyan
- Department of Electronics and Electrical Engineering, Indian Institute of Technology Guwahati, Assam, 781039, India
13
Chen Z, Kim E, Davidsen T, Barnholtz-Sloan JS. Usage of the National Cancer Institute Cancer Research Data Commons by Researchers: A Scoping Review of the Literature. JCO Clin Cancer Inform 2024;8:e2400116. PMID: 39536277. PMCID: PMC11575903. DOI: 10.1200/cci.24.00116.
Abstract
PURPOSE: Over the past decade, cancer data of all types have surged. To promote the sharing and use of these rich data, the National Cancer Institute's Cancer Research Data Commons (CRDC) was developed as a cloud-based infrastructure that provides a large, comprehensive, and expanding collection of cancer data together with tools for analysis. We conducted this scoping review of articles to provide an overview of how CRDC resources are being used by cancer researchers. METHODS: A thorough literature search was conducted to identify all relevant publications. We included publications that directly cited CRDC resources in order to examine the impact and contributions of the CRDC itself. We summarized the distributions and trends of how CRDC components were used by the research community and discussed current research gaps and future opportunities. RESULTS: Encouraging trends were observed in how CRDC resources are used by the research community, suggesting that the CRDC has become an important building block for a wide range of cancer research. We also noted a few areas where current applications are lacking and provided insights on how improvements can be made by the CRDC and the research community. CONCLUSION: The CRDC, as the foundation of a National Cancer Data Ecosystem, will continue empowering the research community to effectively leverage cancer-related data, uncover novel strategies, and address the needs of patients with cancer, ultimately combating this disease more effectively.
Affiliation(s)
- Zhaoyi Chen
- Informatics and Data Science Program, Center for Biomedical Informatics and Information Technology, National Cancer Institute, Rockville, MD
- Office of Data Science and Strategy, National Institutes of Health, Bethesda, MD
- Erika Kim
- Informatics and Data Science Program, Center for Biomedical Informatics and Information Technology, National Cancer Institute, Rockville, MD
- Tanja Davidsen
- Informatics and Data Science Program, Center for Biomedical Informatics and Information Technology, National Cancer Institute, Rockville, MD
- Jill S. Barnholtz-Sloan
- Informatics and Data Science Program, Center for Biomedical Informatics and Information Technology, National Cancer Institute, Rockville, MD
- Division of Cancer Epidemiology and Genetics, National Cancer Institute, Rockville, MD
14
Arora D, Garg R, Asif F. BCED-Net: Breast Cancer Ensemble Diagnosis Network using transfer learning and the XGBoost classifier with mammography images. Osong Public Health Res Perspect 2024;15:409-419. PMID: 39511962. PMCID: PMC11563722. DOI: 10.24171/j.phrp.2023.0361.
Abstract
BACKGROUND: Breast cancer poses a significant global health challenge, characterized by complex origins and the potential for life-threatening metastasis. The critical need for early and accurate detection is underscored by the 685,000 lives claimed by the disease worldwide in 2020. Deep learning has made strides in advancing the prompt diagnosis of breast cancer. However, obstacles persist, such as dealing with high-dimensional data and the risk of overfitting, necessitating fresh approaches to improve accuracy and real-world applicability. METHODS: In response to these challenges, we propose BCED-Net, which stands for Breast Cancer Ensemble Diagnosis Network. This framework leverages transfer learning and the extreme gradient boosting (XGBoost) classifier on the Breast Cancer RSNA dataset. Our methodology involved feature extraction using pre-trained models (ResNet50, EfficientNetB3, VGG19, DenseNet121, and ConvNeXtTiny), followed by concatenation of the extracted features. Our most promising configuration combined features extracted from three deep convolutional neural networks (ResNet50, EfficientNetB3, and ConvNeXtTiny), which were classified using the XGBoost classifier. RESULTS: The ensemble approach demonstrated strong overall performance, with an accuracy of 0.89. The precision, recall, and F1-score values, all at 0.86, highlight a balanced trade-off between correctly identified positive instances and the ability to capture all actual positive samples. CONCLUSION: BCED-Net represents a significant leap forward in addressing persistent issues such as the high dimensionality of features and the risk of overfitting.
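A minimal sketch of the concatenation-plus-XGBoost step described above, assuming the xgboost package and feature vectors already extracted by the three frozen backbones; the arrays, dimensions, and hyperparameters below are illustrative, not those of BCED-Net.

```python
import numpy as np
from xgboost import XGBClassifier  # assumes the xgboost package is installed

rng = np.random.default_rng(0)
n = 500
# Stand-ins for features already extracted by the three frozen backbones
feat_resnet50 = rng.normal(size=(n, 2048))
feat_effnetb3 = rng.normal(size=(n, 1536))
feat_convnext = rng.normal(size=(n, 768))
X = np.hstack([feat_resnet50, feat_effnetb3, feat_convnext])  # concatenation step
y = rng.integers(0, 2, size=n)                                # cancer / no cancer

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.05)
clf.fit(X, y)
print("Training accuracy:", clf.score(X, y))
```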
Affiliation(s)
- Drishti Arora
- Department of Computer Science and Engineering, Amity University, Noida, India
- Rakesh Garg
- Department of Computer Science and Engineering, Gurugram University, Gurugram, India
- Farhan Asif
- Department of Computer Science and Engineering, Amity University, Noida, India
15
Yuan W, Rao J, Liu Y, Li S, Qin L, Huang X. Deep radiomics-based prognostic prediction of oral cancer using optical coherence tomography. BMC Oral Health 2024;24:1117. PMID: 39300434. DOI: 10.1186/s12903-024-04849-8.
Abstract
BACKGROUND: This study aims to evaluate the integration of optical coherence tomography (OCT) and peripheral blood immune indicators for predicting oral cancer prognosis with artificial intelligence. METHODS: We examined patients undergoing radical oral cancer resection and explored the inherent relationships among clinical data, OCT images, and peripheral immune indicators for oral cancer prognosis. We first built a peripheral blood immune indicator-guided deep learning feature representation method for OCT images, and then integrated a multi-view prognostic radiomics model incorporating feature selection and logistic modeling. In this way, the prognostic impact of each indicator on oral cancer can be assessed by quantifying OCT features. RESULTS: We collected 289 oral mucosal samples from 68 patients, yielding 1,445 OCT images. Our deep radiomics-based prognostic model achieved excellent discrimination for oral cancer prognosis, with an area under the receiver operating characteristic curve (AUC) of 0.886, and identified the systemic immune-inflammation index (SII) as the most informative feature for prognosis prediction. Additionally, the deep learning model achieved excellent results in classifying SII risk, with 85.26% accuracy and an AUC of 0.86. CONCLUSIONS: Our study effectively merged OCT imaging with peripheral blood immune indicators to create a deep learning-based model for inflammatory risk prediction in oral cancer. Additionally, we constructed a comprehensive multi-view radiomics model that utilizes deep learning features for accurate prognosis prediction. The study highlighted the significance of the SII as a crucial indicator for evaluating patient outcomes, corroborating our clinical statistical analyses. This integration underscores the potential of combining imaging and blood indicators in clinical decision-making. TRIAL REGISTRATION: The clinical trial associated with this study was prospectively registered in the Chinese Clinical Trial Registry with the trial registration number (TRN) ChiCTR2200064861. The registration was completed in 2021.
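For reference, the systemic immune-inflammation index highlighted above is commonly defined as platelet count × neutrophil count / lymphocyte count; a one-line helper follows, with illustrative example counts.

```python
def systemic_immune_inflammation_index(platelets: float,
                                       neutrophils: float,
                                       lymphocytes: float) -> float:
    """SII = platelet count x neutrophil count / lymphocyte count
    (all counts in the same unit, e.g. 10^9 cells/L)."""
    return platelets * neutrophils / lymphocytes

# Example with plausible peripheral blood counts (10^9/L)
print(systemic_immune_inflammation_index(250, 4.2, 1.5))  # -> 700.0
```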
Affiliation(s)
- Wei Yuan
- Department of Oral and Maxillofacial & Head and Neck Oncology, Beijing Stomatological Hospital, Capital Medical University, Beijing, 100050, China
- Jiayi Rao
- Department of Oral and Maxillofacial & Head and Neck Oncology, Beijing Stomatological Hospital, Capital Medical University, Beijing, 100050, China
- Yanbin Liu
- Department of Dental Implant Center, Beijing Stomatological Hospital, Capital Medical University, Beijing, 100050, China
- Sen Li
- School of Science, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, Guangdong, China
- Lizheng Qin
- Department of Oral and Maxillofacial & Head and Neck Oncology, Beijing Stomatological Hospital, Capital Medical University, Beijing, 100050, China
- Xin Huang
- Department of Oral and Maxillofacial & Head and Neck Oncology, Beijing Stomatological Hospital, Capital Medical University, Beijing, 100050, China
16
Wang R, Huang S, Wang P, Shi X, Li S, Ye Y, Zhang W, Shi L, Zhou X, Tang X. Bibliometric analysis of the application of deep learning in cancer from 2015 to 2023. Cancer Imaging 2024;24:85. PMID: 38965599. PMCID: PMC11223420. DOI: 10.1186/s40644-024-00737-0.
Abstract
BACKGROUND: Recently, the application of deep learning (DL) has made great progress in various fields, especially in cancer research. However, to date, bibliometric analyses of the application of DL in cancer are scarce. Therefore, this study aimed to explore the research status and hotspots of the application of DL in cancer. METHODS: We retrieved all articles on the application of DL in cancer from the Web of Science Core Collection database. Biblioshiny, VOSviewer, and CiteSpace were used to perform the bibliometric analysis by analyzing the numbers of publications, citations, countries, institutions, authors, journals, references, and keywords. RESULTS: We found 6,016 original articles on the application of DL in cancer. The numbers of annual publications and total citations generally trended upward. China published the greatest number of articles, the USA had the highest total citations, and Saudi Arabia had the highest centrality. The Chinese Academy of Sciences was the most productive institution. Tian Jie published the greatest number of articles, while He Kaiming was the most co-cited author. IEEE Access was the most popular journal. The analysis of references and keywords showed that DL was mainly used for the prediction, detection, classification, and diagnosis of breast cancer, lung cancer, and skin cancer. CONCLUSIONS: Overall, the number of articles on the application of DL in cancer is gradually increasing. In the future, research trends may include further expanding and improving the scope and accuracy of DL applications, and integrating DL with protein prediction, genomics, and cancer research.
Affiliation(s)
- Ruiyu Wang
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Street Taiping No.25, Region Jiangyang, Luzhou, Sichuan Province, 646099, China
- Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China
- Shu Huang
- Department of Gastroenterology, Lianshui County People's Hospital, Huaian, China
- Department of Gastroenterology, Lianshui People's Hospital of Kangda College, Affiliated to Nanjing Medical University, Huaian, China
- Ping Wang
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Street Taiping No.25, Region Jiangyang, Luzhou, Sichuan Province, 646099, China
- Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China
- Xiaomin Shi
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Street Taiping No.25, Region Jiangyang, Luzhou, Sichuan Province, 646099, China
- Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China
- Shiqi Li
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Street Taiping No.25, Region Jiangyang, Luzhou, Sichuan Province, 646099, China
- Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China
- Yusong Ye
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Street Taiping No.25, Region Jiangyang, Luzhou, Sichuan Province, 646099, China
- Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China
- Wei Zhang
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Street Taiping No.25, Region Jiangyang, Luzhou, Sichuan Province, 646099, China
- Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China
- Lei Shi
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Street Taiping No.25, Region Jiangyang, Luzhou, Sichuan Province, 646099, China
- Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China
- Xian Zhou
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Street Taiping No.25, Region Jiangyang, Luzhou, Sichuan Province, 646099, China
- Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China
- Xiaowei Tang
- Department of Gastroenterology, The Affiliated Hospital of Southwest Medical University, Street Taiping No.25, Region Jiangyang, Luzhou, Sichuan Province, 646099, China
- Nuclear Medicine and Molecular Imaging Key Laboratory of Sichuan Province, Luzhou, China
17
Wang Y, Guo Y, Wang Z, Yu L, Yan Y, Gu Z. Enhancing semantic segmentation in chest X-ray images through image preprocessing: ps-KDE for pixel-wise substitution by kernel density estimation. PLoS One 2024;19:e0299623. PMID: 38913621. PMCID: PMC11195943. DOI: 10.1371/journal.pone.0299623.
Abstract
BACKGROUND: In medical imaging, the integration of deep-learning-based semantic segmentation algorithms with preprocessing techniques can reduce the need for human annotation and advance disease classification. Among established preprocessing techniques, Contrast Limited Adaptive Histogram Equalization (CLAHE) has demonstrated efficacy in improving segmentation algorithms across various modalities, such as X-ray and CT. However, there remains a demand for improved contrast enhancement methods, considering the heterogeneity of datasets and the varying contrast across different anatomic structures. METHOD: This study proposes a novel preprocessing technique, ps-KDE, and investigates its impact on deep learning algorithms that segment major organs in posterior-anterior chest X-rays. ps-KDE augments image contrast by substituting pixel values based on their normalized frequency across all images. We evaluate our approach on a U-Net architecture with a ResNet34 backbone pre-trained on ImageNet. Five separate models are trained to segment the heart, left lung, right lung, left clavicle, and right clavicle. RESULTS: The model trained to segment the left lung using ps-KDE achieved a Dice score of 0.780 (SD = 0.13), while the model trained using CLAHE achieved a Dice score of 0.717 (SD = 0.19), p < 0.01. ps-KDE also appears to be more robust, as the CLAHE-based left-lung model misclassified right lungs in select test images. The algorithm for performing ps-KDE is available at https://github.com/wyc79/ps-KDE. DISCUSSION: Our results suggest that ps-KDE offers advantages over current preprocessing techniques when segmenting certain lung regions. This could be beneficial in subsequent analyses such as disease classification and risk stratification.
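One plausible reading of the ps-KDE idea described above (the authors' reference implementation is in the linked repository): estimate the density of pixel intensities across the whole dataset with a kernel density estimate and replace each pixel by the normalized density of its value. The sketch below uses SciPy on synthetic images and is not the authors' code.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Toy "dataset" of chest X-rays: 20 small grayscale images with values in [0, 1]
images = rng.beta(2, 5, size=(20, 64, 64))

# Estimate the density of pixel intensities across the whole dataset
# (a random subsample keeps the KDE fit cheap).
sample = rng.choice(images.ravel(), size=5000, replace=False)
kde = gaussian_kde(sample)

def ps_kde_transform(img: np.ndarray, kde, n_bins: int = 256) -> np.ndarray:
    """Replace each pixel by the dataset-wide density of its intensity,
    rescaled to [0, 1]."""
    grid = np.linspace(0.0, 1.0, n_bins)
    density = kde(grid)
    density = (density - density.min()) / (density.max() - density.min())
    idx = np.clip((img * (n_bins - 1)).astype(int), 0, n_bins - 1)
    return density[idx]

enhanced = ps_kde_transform(images[0], kde)
print(enhanced.min(), enhanced.max())
```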
Collapse
Affiliation(s)
- Yuanchen Wang
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
| | - Yujie Guo
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
| | - Ziqi Wang
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
| | - Linzi Yu
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
| | - Yujie Yan
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
| | - Zifan Gu
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, United States of America
| |
Collapse
|
18
|
Wang Y, Fu W, Zhang Y, Wang D, Gu Y, Wang W, Xu H, Ge X, Ye C, Fang J, Su L, Wang J, He W, Zhang X, Feng R. Constructing and implementing a performance evaluation indicator set for artificial intelligence decision support systems in pediatric outpatient clinics: an observational study. Sci Rep 2024; 14:14482. [PMID: 38914707 PMCID: PMC11196575 DOI: 10.1038/s41598-024-64893-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2023] [Accepted: 06/13/2024] [Indexed: 06/26/2024] Open
Abstract
Artificial intelligence (AI) decision support systems in pediatric healthcare have a complex application background. As an AI decision support system (AI-DSS) can be costly, once applied, it is crucial to focus on its performance, interpret its success, and then monitor and update it to ensure consistent ongoing success. Therefore, a set of evaluation indicators was explicitly developed for AI-DSS in pediatric healthcare, enabling continuous and systematic performance monitoring. The study unfolded in two stages. The first stage encompassed establishing the evaluation indicator set through a literature review, a focus group interview, and expert consultation using the Delphi method. In the second stage, weight analysis was conducted. Subjective weights were calculated based on expert opinions through the analytic hierarchy process, while objective weights were determined using the entropy weight method. Subsequently, the subjective and objective weights were synthesized to form the combined weights. In the two rounds of expert consultation, the authority coefficients were 0.834 and 0.846, and Kendall's coefficient of concordance was 0.135 in Round 1 and 0.312 in Round 2. The final evaluation indicator set has three first-class indicators, fifteen second-class indicators, and forty-seven third-class indicators. Indicator I-1 (Organizational performance) carries the highest weight, followed by Indicator I-2 (Societal performance) and Indicator I-3 (User experience performance) in the objective and combined weights. Conversely, 'Societal performance' holds the most weight among the subjective weights, followed by 'Organizational performance' and 'User experience performance'. In this study, a comprehensive and specialized set of evaluation indicators for the AI-DSS in the pediatric outpatient clinic was established and then implemented. Continuous evaluation still requires long-term data collection to optimize the weight proportions of the established indicators.
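The entropy weight method and the synthesis of subjective and objective weights described above can be sketched as follows. The score matrix, the AHP-derived subjective weights, and the normalized-product combination rule are all assumptions for illustration, not the study's actual data or synthesis formula.

```python
"""Illustrative sketch of the entropy weight method and of combining objective
weights with subjective (e.g., AHP-derived) weights."""
import numpy as np

def entropy_weights(X):
    """X: (n_alternatives, n_indicators) matrix of non-negative scores."""
    P = X / X.sum(axis=0, keepdims=True)            # column-wise proportions
    k = 1.0 / np.log(X.shape[0])
    with np.errstate(divide="ignore", invalid="ignore"):
        e = -k * np.nansum(np.where(P > 0, P * np.log(P), 0.0), axis=0)
    d = 1.0 - e                                     # degree of divergence
    return d / d.sum()                              # objective weights

def combined_weights(subjective, objective):
    """One common synthesis rule: normalized product of the two weight vectors."""
    w = np.asarray(subjective) * np.asarray(objective)
    return w / w.sum()

# Example: 3 first-class indicators scored over 5 hypothetical evaluation rounds.
scores = np.array([[0.80, 0.70, 0.60],
                   [0.90, 0.60, 0.70],
                   [0.70, 0.80, 0.50],
                   [0.85, 0.75, 0.65],
                   [0.80, 0.70, 0.70]])
w_obj = entropy_weights(scores)
w_sub = np.array([0.30, 0.45, 0.25])                # e.g., from AHP judgments
print(combined_weights(w_sub, w_obj))
```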
Collapse
Affiliation(s)
- Yingwen Wang
- Nursing Department, Children's Hospital of Fudan University, Shanghai, 201102, China
| | - Weijia Fu
- Medical Information Center, Children's Hospital of Fudan University, Shanghai, 201102, China
| | - Yuejie Zhang
- School of Computer Science, Fudan University, Shanghai, 200438, China
| | - Daoyang Wang
- School of Public Health, Fudan University, Shanghai, 200032, China
| | - Ying Gu
- Nursing Department, Children's Hospital of Fudan University, Shanghai, 201102, China
| | - Weibing Wang
- School of Public Health, Fudan University, Shanghai, 200032, China
| | - Hong Xu
- Nephrology Department, Children's Hospital of Fudan University, Shanghai, 201102, China
| | - Xiaoling Ge
- Statistical and Data Management Center, Children's Hospital of Fudan University, Shanghai, 201102, China
| | - Chengjie Ye
- Medical Information Center, Children's Hospital of Fudan University, Shanghai, 201102, China
| | - Jinwu Fang
- School of Public Health, Fudan University, Shanghai, 200032, China
| | - Ling Su
- Statistical and Data Management Center, Children's Hospital of Fudan University, Shanghai, 201102, China
| | - Jiayu Wang
- National Health Commission Key Laboratory of Neonatal Diseases (Fudan University), Children's Hospital of Fudan University, Shanghai, 201102, China
| | - Wen He
- Respiratory Department, Children's Hospital of Fudan University, Shanghai, 201102, China
| | - Xiaobo Zhang
- Respiratory Department, Children's Hospital of Fudan University, Shanghai, 201102, China.
| | - Rui Feng
- School of Computer Science, Fudan University, 2005 Songhu Road, Shanghai, 200438, China.
| |
Collapse
|
19
|
Xu P, Zhao J, Wan M, Song Q, Su Q, Wang D. Classification of multi-feature fusion ultrasound images of breast tumor within category 4 using convolutional neural networks. Med Phys 2024; 51:4243-4257. [PMID: 38436433 DOI: 10.1002/mp.16946] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2022] [Revised: 01/03/2024] [Accepted: 01/09/2024] [Indexed: 03/05/2024] Open
Abstract
BACKGROUND Breast tumors pose a serious threat to women's health. Ultrasound (US) is a common and economical method for the diagnosis of breast cancer. Breast Imaging Reporting and Data System (BI-RADS) category 4 has the highest false-positive rate, about 30%, among the five categories. The classification task within BI-RADS category 4 is challenging and has not been fully studied. PURPOSE This work aimed to use convolutional neural networks (CNNs) for breast tumor classification using B-mode images in category 4 to overcome the dependence on operator and artifacts. Additionally, this work intends to take full advantage of morphological and textural features in breast tumor US images to improve classification accuracy. METHODS First, original US images coming directly from the hospital were cropped and resized. Of 1385 B-mode US BI-RADS category 4 images, biopsy identified 503 benign and 882 malignant tumor samples. Then, a K-means clustering algorithm and sliding-window entropy of the US images were applied. Because the original B-mode images, K-means clustering images, and entropy images capture complementary characteristics of malignant and benign tumors, they were fused into a three-channel multi-feature fusion image dataset. The training, validation, and test sets contained 969, 277, and 139 images, respectively. With transfer learning, 11 CNN models including DenseNet and ResNet were investigated. Finally, by comparing the accuracy, precision, recall, F1-score, and area under the curve (AUC) of the results, the models with better performance were selected. The normality of the data was assessed by the Shapiro-Wilk test. The DeLong test and independent t-test were used to evaluate the significance of differences in AUC and the other metrics. The false discovery rate was utilized to ultimately evaluate the advantages of the CNN with the highest evaluation metrics. In addition, a study of anti-log compression was conducted, but it showed no improvement in the CNNs' classification results. RESULTS With multi-feature fusion images, DenseNet121 achieved the highest accuracy of 80.22 ± 1.45% among the CNNs, with a precision of 77.97 ± 2.89% and an AUC of 0.82 ± 0.01. Multi-feature fusion improved the accuracy of DenseNet121 by 1.87% over classification of the original B-mode images (p < 0.05). CONCLUSION The work illustrates that CNNs and fusion images have the potential to reduce the false-positive rate within US BI-RADS category 4 and to make the diagnosis of category 4 breast tumors more accurate and precise.
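A rough sketch of how such a three-channel multi-feature fusion image could be assembled (original B-mode, K-means clustering, and sliding-window entropy channels) is shown below. The window size, number of clusters, and the scikit-image/scikit-learn calls are assumptions, not the authors' implementation.

```python
"""Sketch of building a three-channel multi-feature fusion image from a
B-mode ultrasound frame: original, K-means clustered, and local-entropy channels."""
import numpy as np
from sklearn.cluster import KMeans
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

def fusion_image(bmode, k=3, entropy_radius=5):
    """bmode: 2-D float array in [0, 1]. Returns an (H, W, 3) float32 image."""
    # Channel 1: original grayscale image.
    ch_orig = bmode
    # Channel 2: K-means clustering of pixel intensities, rescaled to [0, 1].
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
        bmode.reshape(-1, 1))
    ch_kmeans = labels.reshape(bmode.shape) / (k - 1)
    # Channel 3: local entropy over a sliding circular window.
    ch_entropy = entropy(img_as_ubyte(bmode), disk(entropy_radius))
    ch_entropy = ch_entropy / ch_entropy.max()
    return np.dstack([ch_orig, ch_kmeans, ch_entropy]).astype(np.float32)

# The fused (H, W, 3) array can then be fed to an ImageNet-pretrained CNN
# such as DenseNet121 in place of an RGB image.
```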
Collapse
Affiliation(s)
- Pengfei Xu
- Department of Biomedical Engineering, Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, China
| | - Jing Zhao
- The Second Hospital of Jilin University, Changchun, China
| | - Mingxi Wan
- Department of Biomedical Engineering, Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, China
| | - Qing Song
- The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China
| | - Qiang Su
- Department of Oncology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
| | - Diya Wang
- Department of Biomedical Engineering, Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, China
| |
Collapse
|
20
|
Gupta P, Basu S, Rana P, Dutta U, Soundararajan R, Kalage D, Chhabra M, Singh S, Yadav TD, Gupta V, Kaman L, Das CK, Gupta P, Saikia UN, Srinivasan R, Sandhu MS, Arora C. Deep-learning enabled ultrasound based detection of gallbladder cancer in northern India: a prospective diagnostic study. THE LANCET REGIONAL HEALTH. SOUTHEAST ASIA 2024; 24:100279. [PMID: 38756152 PMCID: PMC11096661 DOI: 10.1016/j.lansea.2023.100279] [Citation(s) in RCA: 13] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/24/2023] [Revised: 06/16/2023] [Accepted: 08/30/2023] [Indexed: 05/18/2024]
Abstract
Background Gallbladder cancer (GBC) is highly aggressive. Diagnosis of GBC is challenging as benign gallbladder lesions can have similar imaging features. We aim to develop and validate a deep learning (DL) model for the automatic detection of GBC at abdominal ultrasound (US) and compare its diagnostic performance with that of radiologists. Methods In this prospective study, a multiscale, second-order pooling-based DL classifier model was trained (training and validation cohorts) using the US data of patients with gallbladder lesions acquired between August 2019 and June 2021 at the Postgraduate Institute of Medical Education and Research, a tertiary care hospital in North India. The performance of the DL model to detect GBC was evaluated in a temporally independent test cohort (July 2021-September 2022) and was compared with that of two radiologists. Findings The study included 233 patients in the training set (mean age, 48 ± (2SD) 23 years; 142 women), 59 patients in the validation set (mean age, 51.4 ± 19.2 years; 38 women), and 273 patients in the test set (mean age, 50.4 ± 22.1 years; 177 women). In the test set, the DL model had sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) of 92.3% (95% CI, 88.1-95.6), 74.4% (95% CI, 65.3-79.9), and 0.887 (95% CI, 0.844-0.930), respectively, for detecting GBC, which was comparable to both radiologists. The DL-based approach showed high sensitivity (89.8-93%) and AUC (0.810-0.890) for detecting GBC in the presence of stones, contracted gallbladders, lesion size <10 mm, and neck lesions, which was comparable to both radiologists (p = 0.052-0.738 for sensitivity and p = 0.061-0.745 for AUC). The sensitivity of DL-based detection of the mural thickening type of GBC was significantly greater than that of one of the radiologists (87.8% vs. 72.8%, p = 0.012), despite a reduced specificity. Interpretation The DL-based approach demonstrated diagnostic performance comparable to experienced radiologists in detecting GBC using US. However, multicentre studies are warranted to explore the potential of DL-based diagnosis of GBC fully. Funding None.
Collapse
Affiliation(s)
- Pankaj Gupta
- Department of Radiodiagnosis and Imaging, Postgraduate Institute of Medical Education and Research, Chandigarh, 160012, India
| | - Soumen Basu
- Department of Computer Science and Engineering, Indian Institute of Technology, New Delhi, 110016, India
| | - Pratyaksha Rana
- Department of Radiodiagnosis and Imaging, Postgraduate Institute of Medical Education and Research, Chandigarh, 160012, India
| | - Usha Dutta
- Department of Gastroenterology, Postgraduate Institute of Medical Education and Research, Chandigarh, 160012, India
| | - Raghuraman Soundararajan
- Department of Radiodiagnosis and Imaging, Postgraduate Institute of Medical Education and Research, Chandigarh, 160012, India
| | - Daneshwari Kalage
- Department of Radiodiagnosis and Imaging, Postgraduate Institute of Medical Education and Research, Chandigarh, 160012, India
| | - Manika Chhabra
- Department of Radiodiagnosis and Imaging, Postgraduate Institute of Medical Education and Research, Chandigarh, 160012, India
| | - Shravya Singh
- Department of Radiodiagnosis and Imaging, Postgraduate Institute of Medical Education and Research, Chandigarh, 160012, India
| | - Thakur Deen Yadav
- Department of Surgical Gastroenterology, Postgraduate Institute of Medical Education and Research, Chandigarh, 160012, India
| | - Vikas Gupta
- Department of Surgical Gastroenterology, Postgraduate Institute of Medical Education and Research, Chandigarh, 160012, India
| | - Lileswar Kaman
- Department of General Surgery, Postgraduate Institute of Medical Education and Research, Chandigarh, 160012, India
| | - Chandan Krushna Das
- Department of Clinical Hematology and Medical Oncology, Postgraduate Institute of Medical Education and Research, Chandigarh, 160012, India
| | - Parikshaa Gupta
- Department of Cytology and Gynaecological Pathology, Postgraduate Institute of Medical Education and Research, Chandigarh 160012, India
| | - Uma Nahar Saikia
- Department of Histopathology, Postgraduate Institute of Medical Education and Research, Chandigarh, 160012, India
| | - Radhika Srinivasan
- Department of Cytology and Gynaecological Pathology, Postgraduate Institute of Medical Education and Research, Chandigarh 160012, India
| | - Manavjit Singh Sandhu
- Department of Radiodiagnosis and Imaging, Postgraduate Institute of Medical Education and Research, Chandigarh, 160012, India
| | - Chetan Arora
- Department of Computer Science and Engineering, Indian Institute of Technology, New Delhi, 110016, India
| |
Collapse
|
21
|
Sacca L, Lobaina D, Burgoa S, Lotharius K, Moothedan E, Gilmore N, Xie J, Mohler R, Scharf G, Knecht M, Kitsantas P. Promoting Artificial Intelligence for Global Breast Cancer Risk Prediction and Screening in Adult Women: A Scoping Review. J Clin Med 2024; 13:2525. [PMID: 38731054 PMCID: PMC11084581 DOI: 10.3390/jcm13092525] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2024] [Revised: 04/01/2024] [Accepted: 04/23/2024] [Indexed: 05/13/2024] Open
Abstract
Background: Artificial intelligence (AI) algorithms can be applied in breast cancer risk prediction and prevention by using patient history, scans, imaging information, and analysis of specific genes for cancer classification to reduce overdiagnosis and overtreatment. This scoping review aimed to identify the barriers encountered in applying innovative AI techniques and models in developing breast cancer risk prediction scores and promoting screening behaviors among adult females. Findings may inform and guide future global recommendations for AI application in breast cancer prevention and care for female populations. Methods: The PRISMA-ScR (Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews) was used as a reference checklist throughout this study. The Arksey and O'Malley methodology was used as a framework to guide this review. The framework methodology consisted of five steps: (1) Identify research questions; (2) Search for relevant studies; (3) Selection of studies relevant to the research questions; (4) Chart the data; (5) Collate, summarize, and report the results. Results: In the field of breast cancer risk detection and prevention, the following AI techniques and models have been applied: Machine and Deep Learning Model (ML-DL model) (n = 1), Academic Algorithms (n = 2), Breast Cancer Surveillance Consortium (BCSC) Clinical 5-Year Risk Prediction Model (n = 2), deep-learning computer vision AI algorithms (n = 2), AI-based thermal imaging solution (Thermalytix) (n = 1), RealRisks (n = 2), Breast Cancer Risk NAVIgation (n = 1), MammoRisk (ML-Based Tool) (n = 1), various ML models (n = 1), and various machine/deep learning, decision aids, and commercial algorithms (n = 7). In the 11 included studies, a total of 39 barriers to AI applications in breast cancer risk prediction and screening efforts were identified. The most common barriers in the application of innovative AI tools for breast cancer prediction and improved screening rates included lack of external validity and limited generalizability (n = 6), as AI was used in studies with either a small sample size or datasets with missing data. Many studies (n = 5) also encountered selection bias due to exclusion of certain populations based on characteristics such as race/ethnicity, family history, or past medical history. Several recommendations for future research should be considered. AI models need to include a broader spectrum and more complete predictive variables for risk assessment. Investigating long-term outcomes with improved follow-up periods is critical to assess the impacts of AI on clinical decisions beyond just the immediate outcomes. Utilizing AI to improve communication strategies at both a local and organizational level can assist in informed decision-making and compliance, especially in populations with limited literacy levels. Conclusions: The use of AI in patient education and as an adjunctive tool for providers is still early in its incorporation, and future research should explore the implementation of AI-driven resources to enhance understanding and decision-making regarding breast cancer screening, especially in vulnerable populations with limited literacy.
Collapse
Affiliation(s)
- Lea Sacca
- Charles E. Schmidt College of Medicine, Florida Atlantic University, Boca Raton, FL 33431, USA; (D.L.); (S.B.); (K.L.); (E.M.); (N.G.); (J.X.); (R.M.); (G.S.); (M.K.); (P.K.)
| | | | | | | | | | | | | | | | | | | | | |
Collapse
|
22
|
Amgad N, Haitham H, Alabrak M, Mohammed A. Enhancing Thyroid Cancer Diagnosis through a Resilient Deep Learning Ensemble Approach. 2024 6TH INTERNATIONAL CONFERENCE ON COMPUTING AND INFORMATICS (ICCI) 2024:195-202. [DOI: 10.1109/icci61671.2024.10485147] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/04/2025]
Affiliation(s)
- Nadeen Amgad
- MSA University, Faculty of Computer Science, Giza, Egypt
| | - Hadiy Haitham
- MSA University, Faculty of Computer Science, Giza, Egypt
| | | | | |
Collapse
|
23
|
Lokaj B, Pugliese MT, Kinkel K, Lovis C, Schmid J. Barriers and facilitators of artificial intelligence conception and implementation for breast imaging diagnosis in clinical practice: a scoping review. Eur Radiol 2024; 34:2096-2109. [PMID: 37658895 PMCID: PMC10873444 DOI: 10.1007/s00330-023-10181-6] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2023] [Revised: 06/07/2023] [Accepted: 07/10/2023] [Indexed: 09/05/2023]
Abstract
OBJECTIVE Although artificial intelligence (AI) has demonstrated promise in enhancing breast cancer diagnosis, the implementation of AI algorithms in clinical practice encounters various barriers. This scoping review aims to identify these barriers and facilitators to highlight key considerations for developing and implementing AI solutions in breast cancer imaging. METHOD A literature search was conducted from 2012 to 2022 in six databases (PubMed, Web of Science, CINAHL, Embase, IEEE, and ArXiv). Articles were included if they described barriers and/or facilitators in the conception or implementation of AI in clinical breast imaging. We excluded research focusing only on performance, or with data not acquired in a clinical radiology setup and not involving real patients. RESULTS A total of 107 articles were included. We identified six major barriers related to data (B1), black box and trust (B2), algorithms and conception (B3), evaluation and validation (B4), legal, ethical, and economic issues (B5), and education (B6), and five major facilitators covering data (F1), clinical impact (F2), algorithms and conception (F3), evaluation and validation (F4), and education (F5). CONCLUSION This scoping review highlighted the need to carefully design, deploy, and evaluate AI solutions in clinical practice, involving all stakeholders to yield improvements in healthcare. CLINICAL RELEVANCE STATEMENT The identification of barriers and facilitators, with suggested solutions, can guide and inform future research and stakeholders to improve the design and implementation of AI for breast cancer detection in clinical practice. KEY POINTS • Six major identified barriers were related to data; black-box and trust; algorithms and conception; evaluation and validation; legal, ethical, and economic issues; and education. • Five major identified facilitators were related to data, clinical impact, algorithms and conception, evaluation and validation, and education. • Coordinated involvement of all stakeholders is required to improve breast cancer diagnosis with AI.
Collapse
Affiliation(s)
- Belinda Lokaj
- Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland.
- Faculty of Medicine, University of Geneva, Geneva, Switzerland.
- Division of Medical Information Sciences, Geneva University Hospitals, Geneva, Switzerland.
| | - Marie-Thérèse Pugliese
- Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland
| | - Karen Kinkel
- Réseau Hospitalier Neuchâtelois, Neuchâtel, Switzerland
| | - Christian Lovis
- Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Division of Medical Information Sciences, Geneva University Hospitals, Geneva, Switzerland
| | - Jérôme Schmid
- Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland
| |
Collapse
|
24
|
Jabeen K, Khan MA, Hameed MA, Alqahtani O, Alouane MTH, Masood A. A novel fusion framework of deep bottleneck residual convolutional neural network for breast cancer classification from mammogram images. Front Oncol 2024; 14:1347856. [PMID: 38454931 PMCID: PMC10917916 DOI: 10.3389/fonc.2024.1347856] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2023] [Accepted: 02/05/2024] [Indexed: 03/09/2024] Open
Abstract
With over 2.1 million new cases of breast cancer diagnosed annually, the incidence and mortality rate of this disease pose severe global health issues for women. Early identification of the disease is the only practical way to lessen its impact. Numerous research works have developed automated methods using different medical imaging modalities to identify breast cancer (BC). Still, the precision of each strategy differs based on the available resources, the nature of the problem, and the dataset being used. We propose a novel deep bottleneck convolutional neural network with a quantum optimization algorithm for breast cancer classification and diagnosis from mammogram images. Two novel deep architectures, a three-residual-block bottleneck and a four-residual-block bottleneck, have been proposed with parallel and single paths. Bayesian Optimization (BO) was employed to initialize hyperparameter values and train the architectures on the selected dataset. Deep features are extracted from the global average pooling layer of both models. After that, a kernel-based canonical correlation analysis and entropy technique is proposed for fusion of the extracted deep features. The fused feature set is further refined using an optimization technique named quantum generalized normal distribution optimization. The selected features are finally classified using several neural network classifiers, such as bi-layered and wide neural networks. The experiments were conducted on a publicly available mammogram imaging dataset named INbreast, and a maximum accuracy of 96.5% was obtained. Moreover, for the proposed method, the sensitivity is 96.45%, the precision is 96.5%, the F1 score is 96.64%, and the MCC and Kappa values are both 92.97%. The proposed architectures are further utilized for diagnosis of the affected regions. In addition, a detailed comparison with several recent techniques shows the proposed framework's higher accuracy and precision.
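For orientation, a minimal PyTorch sketch of a bottleneck residual block, the building unit that the proposed three- and four-residual-block architectures are described as stacking, is given below. Channel sizes and layer counts are illustrative and do not reproduce the authors' models.

```python
"""Minimal PyTorch sketch of a bottleneck residual block:
1x1 reduce -> 3x3 -> 1x1 expand, with a skip connection."""
import torch
import torch.nn as nn

class BottleneckResidualBlock(nn.Module):
    def __init__(self, in_ch, mid_ch, out_ch, stride=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, kernel_size=3, stride=stride,
                      padding=1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # Project the skip path whenever the shape changes.
        self.skip = (nn.Identity() if in_ch == out_ch and stride == 1 else
                     nn.Sequential(nn.Conv2d(in_ch, out_ch, 1, stride=stride,
                                             bias=False),
                                   nn.BatchNorm2d(out_ch)))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.skip(x))

# Deep features for later fusion would be taken after a global average pool,
# e.g. torch.mean(feature_map, dim=(2, 3)).
```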
Collapse
Affiliation(s)
- Kiran Jabeen
- Department of Computer Science, HITEC University, Taxila, Pakistan
| | - Muhammad Attique Khan
- Department of Computer Science, HITEC University, Taxila, Pakistan
- Department of Computer Science and Mathematics, Lebanese American University, Beirut, Lebanon
| | - Mohamed Abdel Hameed
- Department of Computer Science, Faculty of Computers and Information, Luxor University, Luxor, Egypt
| | - Omar Alqahtani
- College of Computer Science, King Khalid University, Abha, Saudi Arabia
| | | | - Anum Masood
- Department of Physics, Norwegian University of Science and Technology, Trondheim, Norway
| |
Collapse
|
25
|
Wang M, Liu Z, Ma L. Application of artificial intelligence in ultrasound imaging for predicting lymph node metastasis in breast cancer: A meta-analysis. Clin Imaging 2024; 106:110048. [PMID: 38065024 DOI: 10.1016/j.clinimag.2023.110048] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 11/22/2023] [Accepted: 11/27/2023] [Indexed: 01/15/2024]
Abstract
BACKGROUND This study aims to comprehensively evaluate the accuracy and effectiveness of ultrasound imaging based on artificial intelligence algorithms in predicting lymph node metastasis in breast cancer patients through a meta-analysis. METHODS We systematically searched PubMed, Embase, and the Cochrane Library for literature published up to May 2023. The search terms included artificial intelligence, ultrasound, breast cancer, and lymph node. Studies meeting the inclusion criteria were selected, and data were extracted for analysis. The main evaluation indicators included sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and area under the curve (AUC). Heterogeneity was assessed using the Cochrane Q test combined with the I2 statistic, which expresses the percentage of total effect variation attributable to between-study variation, as recommended by the Cochrane Handbook for quantifying heterogeneity. A threshold p-value of 0.10 was considered to compensate for the low power of the Q test. Sensitivity analysis was performed to assess the stability of individual studies, and publication bias was assessed with funnel plots. Additionally, Fagan plots were used to assess clinical utility. RESULTS Ten studies involving 4726 breast cancer patients were included in the meta-analysis. The results showed that ultrasound imaging based on artificial intelligence algorithms had high accuracy and effectiveness in predicting lymph node metastasis in breast cancer patients. The pooled sensitivity was 0.88 (95% CI: 0.81-0.93; P < 0.001; I2 = 84.68), specificity was 0.75 (95% CI: 0.66-0.83; P < 0.001; I2 = 91.11), and AUC was 0.89 (95% CI: 0.86-0.91). The positive likelihood ratio was 3.5 (95% CI: 2.6-4.8), the negative likelihood ratio was 0.16 (95% CI: 0.10-0.26), and the diagnostic odds ratio was 23 (95% CI: 13-40). By comparison, for ultrasound imaging based on non-AI algorithms, the combined sensitivity for predicting lymph node metastasis in breast cancer patients was 0.78 (95% CI: 0.63-0.88), the specificity was 0.76 (95% CI: 0.63-0.86), and the AUC was 0.84 (95% CI: 0.80-0.87). The positive likelihood ratio was 3.3 (95% CI: 1.9-5.6), the negative likelihood ratio was 0.29 (95% CI: 0.15-0.54), and the diagnostic odds ratio was 11 (95% CI: 4-33). Due to the limited sample size (n = 2), meta-analysis was not conducted for the outcome of predicting lymph node metastasis burden. CONCLUSION Ultrasound imaging based on artificial intelligence algorithms holds promise in predicting lymph node metastasis in breast cancer patients, demonstrating high accuracy and effectiveness. The application of this technology helps in diagnosis and treatment decisions for breast cancer patients and is expected to become an important tool in future clinical practice.
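As a quick plausibility check, the reported likelihood ratios and diagnostic odds ratio follow directly from the pooled sensitivity and specificity (point estimates only; the paper's confidence intervals come from the full bivariate meta-analysis model):

```python
# Back-of-the-envelope check against the pooled AI-model estimates above.
sens, spec = 0.88, 0.75          # pooled sensitivity and specificity
plr = sens / (1 - spec)          # positive likelihood ratio
nlr = (1 - sens) / spec          # negative likelihood ratio
dor = plr / nlr                  # diagnostic odds ratio
print(f"PLR={plr:.1f}, NLR={nlr:.2f}, DOR={dor:.0f}")
# PLR≈3.5, NLR≈0.16, DOR≈22, consistent with the reported 3.5, 0.16, and ~23.
```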
Collapse
Affiliation(s)
- Minghui Wang
- Department of Breast Surgery, Affiliated Hospital of Chengde Medical University, Hebei 067000, China
| | - Zihui Liu
- Department of Pathology, Affiliated Hospital of Chengde Medical University, Hebei 067000, China
| | - Lihui Ma
- Department of Breast Surgery, Affiliated Hospital of Chengde Medical University, Hebei 067000, China.
| |
Collapse
|
26
|
Zhang J, Deng J, Huang J, Mei L, Liao N, Yao F, Lei C, Sun S, Zhang Y. Monitoring response to neoadjuvant therapy for breast cancer in all treatment phases using an ultrasound deep learning model. Front Oncol 2024; 14:1255618. [PMID: 38327750 PMCID: PMC10847543 DOI: 10.3389/fonc.2024.1255618] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2023] [Accepted: 01/08/2024] [Indexed: 02/09/2024] Open
Abstract
Purpose The aim of this study was to investigate the value of a deep learning model (DLM) based on breast tumor ultrasound image segmentation in predicting pathological response to neoadjuvant chemotherapy (NAC) in breast cancer. Methods The dataset contains a total of 1393 ultrasound images of 913 patients from Renmin Hospital of Wuhan University, of which 956 ultrasound images of 856 patients were used as the training set, and 437 ultrasound images of 57 patients who underwent NAC were used as the test set. A U-Net-based end-to-end DLM was developed for automatic tumor segmentation and area calculation. The predictive abilities of the DLM, a manual segmentation model (MSM), and two traditional ultrasound measurement methods (longest axis model [LAM] and dual-axis model [DAM]) for pathological complete response (pCR) were compared using changes in tumor size ratios to develop receiver operating characteristic curves. Results The average intersection over union value of the DLM was 0.856. The early-stage ultrasound-predicted area under the curve (AUC) values for pCR were not significantly different from those of the intermediate and late stages (p < 0.05). The AUCs for MSM, DLM, LAM and DAM were 0.840, 0.756, 0.778 and 0.796, respectively. There was no significant difference in the AUC values of the four models. Conclusion Ultrasonography was predictive of pCR in the early stages of NAC. The DLM has a predictive value for pCR similar to that of conventional ultrasound, with the added benefit of effectively improving workflow.
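The two quantities the study hinges on, intersection over union for segmentation quality and the change in segmented tumor area used to predict pCR, can be sketched as follows; mask handling and the pixel-area scaling are assumptions for illustration.

```python
"""Sketch of intersection over union (IoU) and the tumor area change ratio."""
import numpy as np

def iou(pred_mask, true_mask):
    """IoU between a predicted and a reference binary segmentation mask."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return inter / union if union else 1.0

def area_change_ratio(mask_pre, mask_post, pixel_area_mm2=1.0):
    """Ratio of segmented tumor area after vs. before neoadjuvant chemotherapy."""
    a_pre = mask_pre.sum() * pixel_area_mm2
    a_post = mask_post.sum() * pixel_area_mm2
    return a_post / a_pre if a_pre else np.nan

# A fixed cutoff on the ratio (chosen from the ROC curve) would classify
# predicted responders (pCR) versus non-responders.
```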
Collapse
Affiliation(s)
- Jingwen Zhang
- Department of Breast and Thyroid Surgery, Renmin Hospital of Wuhan University, Wuhan, China
| | - Jingwen Deng
- Department of Breast and Thyroid Surgery, Renmin Hospital of Wuhan University, Wuhan, China
| | - Jin Huang
- The Institute of Technological Sciences, Wuhan University, Wuhan, China
| | - Liye Mei
- School of Computer Science, Hubei University of Technology, Wuhan, China
| | - Ni Liao
- Department of Breast and Thyroid Surgery, Renmin Hospital of Wuhan University, Wuhan, China
| | - Feng Yao
- Department of Breast and Thyroid Surgery, Renmin Hospital of Wuhan University, Wuhan, China
| | - Cheng Lei
- The Institute of Technological Sciences, Wuhan University, Wuhan, China
- Suzhou Institute of Wuhan University, Suzhou, China
- Shenzhen Institute of Wuhan University, Shenzhen, China
| | - Shengrong Sun
- Department of Breast and Thyroid Surgery, Renmin Hospital of Wuhan University, Wuhan, China
| | - Yimin Zhang
- Department of Breast and Thyroid Surgery, Renmin Hospital of Wuhan University, Wuhan, China
| |
Collapse
|
27
|
Sadeghi A, Sadeghi M, Fakhar M, Zakariaei Z, Sadeghi M. Scoping Review of Deep Learning Techniques for Diagnosis, Drug Discovery, and Vaccine Development in Leishmaniasis. Transbound Emerg Dis 2024; 2024:6621199. [PMID: 40303156 PMCID: PMC12019899 DOI: 10.1155/2024/6621199] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2023] [Revised: 10/15/2023] [Accepted: 12/21/2023] [Indexed: 05/02/2025]
Abstract
Leishmania, a single-celled parasite prevalent in tropical and subtropical regions worldwide, can cause varying degrees of leishmaniasis, ranging from self-limiting skin lesions to potentially fatal visceral complications. As such, the parasite has been the subject of much interest in the scientific community. In recent years, advances in diagnostic techniques such as flow cytometry, molecular biology, proteomics, and nanodiagnosis have contributed to progress in the diagnosis of this deadly disease. Additionally, the emergence of artificial intelligence (AI), including its sub-branches such as machine learning and deep learning, has revolutionized the field of medicine. The high accuracy of AI and its potential to reduce human and laboratory errors make it an especially promising tool in diagnosis and treatment. Despite the promising potential of deep learning in the medical field, there has been no review study on the applications of this technology in the context of leishmaniasis. To address this gap, we provide a scoping review of deep learning methods in the diagnosis of the disease, drug discovery, and vaccine development. After a thorough search of the available literature, we analyzed in detail the articles that used deep learning methods for various aspects of the disease, including diagnosis, drug discovery, vaccine development, and related proteins. Each study was analyzed individually, and its methodology and results are presented. As the first and only review study on this topic, this paper serves as a quick and comprehensive resource and guide for future research in this field.
Collapse
Affiliation(s)
- Alireza Sadeghi
- Intelligent Mobile Robot Lab (IMRL), Department of Mechatronics Engineering, Faculty of New Sciences and Technologies, University of Tehran, Tehran, Iran
| | - Mahdieh Sadeghi
- Student Research Committee, Mazandaran University of Medical Sciences, Sari, Iran
| | - Mahdi Fakhar
- Toxoplasmosis Research Center, Iranian National Registry Center for Lophomoniasis and Toxoplasmosis, Imam Khomeini Hospital, Mazandaran University of Medical Sciences, Sari, Iran
| | - Zakaria Zakariaei
- Toxicology and Forensic Medicine Division, Mazandaran Registry Center for Opioids Poisoning, Antimicrobial Resistance Research Center, Imam Khomeini Hospital, Mazandaran University of Medical Sciences, Sari, Iran
| | | |
Collapse
|
28
|
Saleh A, Zulkifley MA, Harun HH, Gaudreault F, Davison I, Spraggon M. Forest fire surveillance systems: A review of deep learning methods. Heliyon 2024; 10:e23127. [PMID: 38163175 PMCID: PMC10754902 DOI: 10.1016/j.heliyon.2023.e23127] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2023] [Revised: 11/03/2023] [Accepted: 11/27/2023] [Indexed: 01/03/2024] Open
Abstract
This review aims to critically examine the existing state-of-the-art forest fire detection systems that are based on deep learning methods. In general, forest fire incidents bring significant negative impacts to the economy, environment, and society. One of the crucial mitigation actions that needs to be readied is an effective forest fire detection system that is able to automatically notify the relevant parties of a forest fire as early as possible. This review paper examines in detail 37 research articles, published between January 2018 and 2023, that implemented deep learning (DL) models for forest fire detection. An in-depth analysis was performed to identify the quantity and type of data used (image and video datasets), the data augmentation methods, and the deep model architectures. This paper is structured into five subsections, each of which focuses on a specific application of deep learning (DL) in the context of forest fire detection. These subsections include 1) classification, 2) detection, 3) detection and classification, 4) segmentation, and 5) segmentation and classification. To compare model performance, the methods were evaluated using comprehensive metrics such as accuracy, mean average precision (mAP), F1-score, and mean pixel accuracy (MPA). The findings show that the use of DL models for forest fire surveillance systems has yielded favourable outcomes, with the majority of studies achieving accuracy rates that exceed 90%. To further enhance the efficacy of these models, future research can explore the optimal fine-tuning of the hyper-parameters, integrate various satellite data, implement generative data augmentation techniques, and refine the DL model architecture. In conclusion, this paper highlights the potential of deep learning methods in enhancing forest fire detection, which is crucial for forest fire management and mitigation.
Collapse
Affiliation(s)
- Azlan Saleh
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering & Built Environment, Universiti Kebangsaan Malaysia (UKM), 43600 UKM, Bangi, Selangor, Malaysia
| | - Mohd Asyraf Zulkifley
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering & Built Environment, Universiti Kebangsaan Malaysia (UKM), 43600 UKM, Bangi, Selangor, Malaysia
| | - Hazimah Haspi Harun
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering & Built Environment, Universiti Kebangsaan Malaysia (UKM), 43600 UKM, Bangi, Selangor, Malaysia
| | - Francis Gaudreault
- Rabdan Academy, 65, Al Inshirah, Al Sa'adah, Abu Dhabi, 22401, PO Box: 114646, United Arab Emirates
| | - Ian Davison
- Rabdan Academy, 65, Al Inshirah, Al Sa'adah, Abu Dhabi, 22401, PO Box: 114646, United Arab Emirates
| | - Martin Spraggon
- Rabdan Academy, 65, Al Inshirah, Al Sa'adah, Abu Dhabi, 22401, PO Box: 114646, United Arab Emirates
| |
Collapse
|
29
|
Shankari N, Kudva V, Hegde RB. Breast Mass Detection and Classification Using Machine Learning Approaches on Two-Dimensional Mammogram: A Review. Crit Rev Biomed Eng 2024; 52:41-60. [PMID: 38780105 DOI: 10.1615/critrevbiomedeng.2024051166] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/25/2024]
Abstract
Breast cancer is a leading cause of mortality among women, both in India and globally. Breast masses are notably common in women aged 20 to 60. These breast masses are classified, according to the Breast Imaging-Reporting and Data System (BI-RADS) standard, into categories such as fibroadenoma, breast cysts, and benign and malignant masses. To aid in the diagnosis of breast disorders, imaging plays a vital role, with mammography being the most widely used modality for detecting breast abnormalities over the years. However, the process of identifying breast diseases through mammograms can be time-consuming, requiring experienced radiologists to review a significant volume of images. Early detection of breast masses is crucial for effective disease management, ultimately reducing mortality rates. To address this challenge, advancements in image processing techniques, specifically those utilizing artificial intelligence (AI) and machine learning (ML), have paved the way for the development of decision support systems. These systems assist radiologists in the accurate identification and classification of breast disorders. This paper presents a review of various studies where diverse machine learning approaches have been applied to digital mammograms. These approaches aim to identify breast masses and classify them into distinct subclasses such as normal, benign, and malignant. Additionally, the paper highlights both the advantages and limitations of existing techniques, offering valuable insights for the benefit of future research endeavors in this critical area of medical imaging and breast health.
Collapse
Affiliation(s)
- N Shankari
- NITTE (Deemed to be University), Department of Electronics and Communication Engineering, NMAM Institute of Technology, Nitte 574110, Karnataka, India
| | - Vidya Kudva
- School of Information Sciences, Manipal Academy of Higher Education, Manipal, India -576104; Nitte Mahalinga Adyanthaya Memorial Institute of Technology, Nitte, India - 574110
| | - Roopa B Hegde
- NITTE (Deemed to be University), Department of Electronics and Communication Engineering, NMAM Institute of Technology, Nitte - 574110, Karnataka, India
| |
Collapse
|
30
|
Irmici G, Cè M, Pepa GD, D'Ascoli E, De Berardinis C, Giambersio E, Rabiolo L, La Rocca L, Carriero S, Depretto C, Scaperrotta G, Cellina M. Exploring the Potential of Artificial Intelligence in Breast Ultrasound. Crit Rev Oncog 2024; 29:15-28. [PMID: 38505878 DOI: 10.1615/critrevoncog.2023048873] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/21/2024]
Abstract
Breast ultrasound has emerged as a valuable imaging modality in the detection and characterization of breast lesions, particularly in women with dense breast tissue or contraindications for mammography. Within this framework, artificial intelligence (AI) has garnered significant attention for its potential to improve diagnostic accuracy in breast ultrasound and revolutionize the workflow. This review article aims to comprehensively explore the current state of research and development in harnessing AI's capabilities for breast ultrasound. We delve into various AI techniques, including machine learning and deep learning, as well as their applications in automating lesion detection, segmentation, and classification tasks. Furthermore, the review addresses the challenges and hurdles faced in implementing AI systems in breast ultrasound diagnostics, such as data privacy, interpretability, and regulatory approval. Ethical considerations pertaining to the integration of AI into clinical practice are also discussed, emphasizing the importance of maintaining a patient-centered approach. The integration of AI into breast ultrasound holds great promise for improving diagnostic accuracy, enhancing efficiency, and ultimately advancing patient care. By examining the current state of research and identifying future opportunities, this review aims to contribute to the understanding and utilization of AI in breast ultrasound and encourage further interdisciplinary collaboration to maximize its potential in clinical practice.
Collapse
Affiliation(s)
- Giovanni Irmici
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
| | - Maurizio Cè
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
| | - Gianmarco Della Pepa
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
| | - Elisa D'Ascoli
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
| | - Claudia De Berardinis
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
| | - Emilia Giambersio
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
| | - Lidia Rabiolo
- Dipartimento di Biomedicina, Neuroscienze e Diagnostica Avanzata, Policlinico Università di Palermo, Palermo, Italy
| | - Ludovica La Rocca
- Postgraduation School in Radiodiagnostics, Università degli Studi di Napoli
| | - Serena Carriero
- Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono, 7, 20122 Milan, Italy
| | - Catherine Depretto
- Breast Radiology Unit, Fondazione IRCCS, Istituto Nazionale Tumori, Milano, Italy
| | | | - Michaela Cellina
- Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, Milano, Piazza Principessa Clotilde 3, 20121, Milan, Italy
| |
Collapse
|
31
|
van Leeuwen MM, Doyle S, van den Belt-Dusebout AW, van der Mierden S, Loo CE, Mann RM, Teuwen J, Wesseling J. Clinicopathological and prognostic value of calcification morphology descriptors in ductal carcinoma in situ of the breast: a systematic review and meta-analysis. Insights Imaging 2023; 14:213. [PMID: 38051355 DOI: 10.1186/s13244-023-01529-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2023] [Accepted: 09/22/2023] [Indexed: 12/07/2023] Open
Abstract
BACKGROUND Calcifications on mammography can be indicative of breast cancer, but the prognostic value of their appearance remains unclear. This systematic review and meta-analysis aimed to evaluate the association between mammographic calcification morphology descriptors (CMDs) and clinicopathological factors. METHODS A comprehensive literature search in Medline via Ovid, Embase.com, and Web of Science was conducted for articles published between 2000 and January 2022 that assessed the relationship between CMDs and clinicopathological factors, excluding case reports and review articles. The risk of bias and overall quality of evidence were evaluated using the QUIPS tool and GRADE. A random-effects model was used to synthesize the extracted data. This systematic review is reported according to the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA). RESULTS Among the 4715 articles reviewed, 29 met the inclusion criteria, reporting on 17 different clinicopathological factors in relation to CMDs. Heterogeneity between studies was present and the overall risk of bias was high, primarily due to small, inadequately described study populations. Meta-analysis demonstrated significant associations between fine linear calcifications and high-grade DCIS [pooled odds ratio (pOR), 4.92; 95% confidence interval (CI), 2.64-9.17], (comedo)necrosis (pOR, 3.46; 95% CI, 1.29-9.30), (micro)invasion (pOR, 1.53; 95% CI, 1.03-2.27), and a negative association with estrogen receptor positivity (pOR, 0.33; 95% CI, 0.12-0.89). CONCLUSIONS CMDs detected on mammography have prognostic value, but there is a high level of bias and variability between current studies. In order for CMDs to achieve clinical utility, standardization in reporting of CMDs is necessary. CRITICAL RELEVANCE STATEMENT Mammographic calcification morphology descriptors (CMDs) have prognostic value, but in order for CMDs to achieve clinical utility, standardization in reporting of CMDs is necessary. SYSTEMATIC REVIEW REGISTRATION CRD42022341599 KEY POINTS: • Mammographic calcifications can be indicative of breast cancer. • The prognostic value of mammographic calcifications is still unclear. • Specific mammographic calcification morphologies are related to lesion aggressiveness. • Variability between studies necessitates standardization in calcification evaluation to achieve clinical utility.
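For readers unfamiliar with the synthesis step, a DerSimonian-Laird random-effects pooling of log odds ratios, the kind of model behind the pooled ORs quoted above, can be sketched as follows; the three study inputs are invented for demonstration and are not data from the review.

```python
"""Illustrative DerSimonian-Laird random-effects pooling of odds ratios."""
import numpy as np

def pooled_or_dl(ors, ci_lows, ci_highs):
    y = np.log(ors)                                     # per-study log OR
    se = (np.log(ci_highs) - np.log(ci_lows)) / (2 * 1.96)
    w = 1.0 / se**2                                     # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)                     # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)                         # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    # Pooled OR with its 95% confidence interval.
    return np.exp([y_re, y_re - 1.96 * se_re, y_re + 1.96 * se_re])

print(pooled_or_dl(np.array([4.0, 6.5, 3.8]),      # made-up study ORs
                   np.array([1.8, 2.9, 1.5]),      # made-up lower CI bounds
                   np.array([8.9, 14.6, 9.6])))    # made-up upper CI bounds
```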
Collapse
Affiliation(s)
- Merle M van Leeuwen
- Division of Molecular Pathology, Netherlands Cancer Institute - Antoni Van Leeuwenhoek, Amsterdam, the Netherlands
| | - Shannon Doyle
- Division of Radiation Oncology, Netherlands Cancer Institute - Antoni Van Leeuwenhoek, Amsterdam, the Netherlands
| | | | - Stevie van der Mierden
- Scientific Information Services, Netherlands Cancer Institute - Antoni Van Leeuwenhoek, Amsterdam, the Netherlands
| | - Claudette E Loo
- Department of Radiology, Netherlands Cancer Institute - Antoni Van Leeuwenhoek, Amsterdam, the Netherlands
| | - Ritse M Mann
- Department of Radiology, Netherlands Cancer Institute - Antoni Van Leeuwenhoek, Amsterdam, the Netherlands
- Department of Medical Imaging, Radboud University Nijmegen, Nijmegen, the Netherlands
| | - Jonas Teuwen
- Division of Radiation Oncology, Netherlands Cancer Institute - Antoni Van Leeuwenhoek, Amsterdam, the Netherlands
- Department of Medical Imaging, Radboud University Nijmegen, Nijmegen, the Netherlands
| | - Jelle Wesseling
- Division of Molecular Pathology, Netherlands Cancer Institute - Antoni Van Leeuwenhoek, Amsterdam, the Netherlands.
- Department of Pathology, Netherlands Cancer Institute - Antoni van Leeuwenhoek, Amsterdam, the Netherlands.
- Department of Pathology, Leiden University Medical Center, Leiden, the Netherlands.
| |
Collapse
|
32
|
Jiang W, Wu R, Yang T, Yu S, Xing W. Profiling regulatory T lymphocytes within the tumor microenvironment of breast cancer via radiomics. Cancer Med 2023; 12:21861-21872. [PMID: 38083903 PMCID: PMC10757114 DOI: 10.1002/cam4.6757] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2023] [Revised: 10/13/2023] [Accepted: 11/16/2023] [Indexed: 12/31/2023] Open
Abstract
OBJECTIVE To generate an image-driven biomarker (Rad_score) to predict tumor-infiltrating regulatory T lymphocytes (Treg) in breast cancer (BC). METHODS Overall, 928 BC patients were enrolled from the Cancer Genome Atlas (TCGA) for survival analysis; MRI scans (n = 71 and n = 30 in the training and validation sets, respectively) from the Cancer Imaging Archive (TCIA) were retrieved and subjected to repeated least absolute shrinkage and selection operator (LASSO) regression for feature reduction. The radiomic scores (rad_score) for Treg infiltration estimation were calculated via support vector machine (SVM) and logistic regression (LR) algorithms and validated on the remaining patients. RESULTS Landmark analysis indicated Treg infiltration was a risk factor for BC patients in the first 5 years and after 10 years of diagnosis (p = 0.007 and 0.018, respectively). Altogether, 108 radiomic features were extracted from the MRI images, 4 of which remained for model construction. Areas under the curve (AUCs) of the SVM model were 0.744 (95% CI 0.622-0.867) and 0.733 (95% CI 0.535-0.931) for the training and validation sets, respectively, while for the LR model, AUCs were 0.771 (95% CI 0.657-0.885) and 0.724 (95% CI 0.522-0.926). The calibration curves indicated good agreement between predicted and true values (p > 0.05), and decision curve analysis (DCA) shows the high clinical utility of the radiomic model. Rad_score was significantly correlated with immune inhibitory genes such as CTLA4 and PDCD1. CONCLUSIONS High Treg infiltration is a risk factor for patients with BC. The Rad_score formulated on radiomic features is a novel tool to predict Treg abundance in the tumor microenvironment.
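A condensed sketch of this radiomics pipeline, LASSO-based feature reduction followed by SVM and LR classifiers evaluated by AUC, is shown below with a synthetic feature matrix standing in for the TCIA radiomic features; it is a schematic workflow under those assumptions, not the authors' code.

```python
"""Schematic radiomics workflow: LASSO feature reduction, then SVM/LR with AUC."""
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(101, 108))                    # 108 synthetic radiomic features
y = (1.5 * X[:, 0] - X[:, 3] + 0.5 * rng.normal(size=101) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=30, random_state=0)

# LASSO-style feature reduction: keep features with non-zero coefficients.
lasso = make_pipeline(StandardScaler(), LassoCV(cv=5, random_state=0)).fit(X_tr, y_tr)
keep = np.flatnonzero(lasso[-1].coef_)
if keep.size == 0:                                 # fall back if everything shrank to zero
    keep = np.arange(X.shape[1])

for name, clf in [("SVM", SVC(kernel="linear", probability=True, random_state=0)),
                  ("LR", LogisticRegression(max_iter=1000))]:
    model = make_pipeline(StandardScaler(), clf).fit(X_tr[:, keep], y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te[:, keep])[:, 1])
    print(f"{name} test AUC: {auc:.3f}")
```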
Collapse
Affiliation(s)
- Wenying Jiang
- Department of Radiology, The Third Affiliated Hospital of Soochow University, Changzhou, China
- Department of Breast Surgery, The Third Affiliated Hospital of Soochow University, Changzhou, China
| | - Ruoxi Wu
- Department of Radiology, The Third Affiliated Hospital of Soochow University, Changzhou, China
| | - Tao Yang
- Department of Breast Surgery, Gansu Provincial Maternity and Child Care Hospital, Lanzhou, China
| | - Shengnan Yu
- Department of Radiology, The Third Affiliated Hospital of Soochow University, Changzhou, China
| | - Wei Xing
- Department of Radiology, The Third Affiliated Hospital of Soochow University, Changzhou, China
| |
Collapse
|
33
|
McDonald ES, Conant EF. Can AI Reduce the Harms of Screening Mammography? Radiol Artif Intell 2023; 5:e230304. [PMID: 38074781 PMCID: PMC10698594 DOI: 10.1148/ryai.230304] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 09/13/2023] [Accepted: 09/27/2023] [Indexed: 10/16/2024]
Affiliation(s)
- Elizabeth S. McDonald
- From the Department of Radiology, Division of Breast Imaging, Perelman School of Medicine, Hospital of the University of Pennsylvania, 3400 Spruce St, Philadelphia, PA 19104
| | - Emily F. Conant
- From the Department of Radiology, Division of Breast Imaging, Perelman School of Medicine, Hospital of the University of Pennsylvania, 3400 Spruce St, Philadelphia, PA 19104
| |
Collapse
|
34
|
Wu R, Jia Y, Li N, Lu X, Yao Z, Ma Y, Nie F. Evaluation of Breast Cancer Tumor-Infiltrating Lymphocytes on Ultrasound Images Based on a Novel Multi-Cascade Residual U-Shaped Network. ULTRASOUND IN MEDICINE & BIOLOGY 2023; 49:2398-2406. [PMID: 37634979 DOI: 10.1016/j.ultrasmedbio.2023.08.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/04/2023] [Revised: 07/30/2023] [Accepted: 08/04/2023] [Indexed: 08/29/2023]
Abstract
OBJECTIVE Breast cancer has become the leading cancer of the 21st century. Tumor-infiltrating lymphocytes (TILs) have emerged as effective biomarkers for predicting treatment response and prognosis in breast cancer. The work described here was aimed at designing a novel deep learning network to assess the levels of TILs in breast ultrasound images. METHODS We propose the Multi-Cascade Residual U-Shaped Network (MCRUNet), which incorporates a gray feature enhancement (GFE) module for image reconstruction and normalization to achieve data synergy. Additionally, multiple residual U-shaped (RSU) modules are cascaded as the backbone network to maximize the fusion of global and local features, with a focus on the tumor's location and surrounding regions. The development of MCRUNet is based on data from two hospitals and uses a publicly available ultrasound data set for transfer learning. RESULTS MCRUNet exhibits excellent performance in assessing TILs levels, achieving an area under the receiver operating characteristic curve of 0.8931, an accuracy of 85.71%, a sensitivity of 83.33%, a specificity of 88.64%, and an F1 score of 86.54% in the test group. It outperforms six state-of-the-art networks. CONCLUSION The MCRUNet network, based on breast ultrasound images of breast cancer patients, holds promise for non-invasively predicting TILs levels and aiding personalized treatment decisions.
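A toy PyTorch sketch of a residual U-shaped (RSU-style) block, a small encoder-decoder whose output is added back to its input, of the kind MCRUNet cascades, is given below; the depth, channel counts, and the classification head mentioned in the comments are assumptions rather than the published architecture.

```python
"""Toy residual U-shaped block: shallow encoder-decoder plus a residual add."""
import torch
import torch.nn as nn

def conv_bn_relu(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False),
                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class RSUBlock(nn.Module):
    """Assumes input height/width are even so the up-sampled path matches."""
    def __init__(self, ch=32):
        super().__init__()
        self.inp = conv_bn_relu(ch, ch)
        self.enc = conv_bn_relu(ch, ch)
        self.pool = nn.MaxPool2d(2)
        self.mid = conv_bn_relu(ch, ch)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec = conv_bn_relu(2 * ch, ch)

    def forward(self, x):
        x0 = self.inp(x)              # local feature map, kept for the residual
        e1 = self.enc(x0)
        m = self.mid(self.pool(e1))   # coarser scale, larger receptive field
        d1 = self.dec(torch.cat([self.up(m), e1], dim=1))
        return d1 + x0                # residual U-shaped output

# Several such blocks cascaded form a backbone; a classification head
# (global pooling plus a linear layer) would then give the TILs-level prediction.
```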
Collapse
Affiliation(s)
- Ruichao Wu
- School of Information Science and Engineering, Lanzhou University, Lanzhou, China
| | - Yingying Jia
- Ultrasound Medical Center, Lanzhou University Second Hospital, Lanzhou, China; Gansu Province Medical Engineering Research Center for Intelligence Ultrasound, Lanzhou, China; Gansu Province Clinical Research Center for Ultrasonography, Lanzhou, China
| | - Nana Li
- Ultrasound Medical Center, Lanzhou University Second Hospital, Lanzhou, China; Gansu Province Medical Engineering Research Center for Intelligence Ultrasound, Lanzhou, China; Gansu Province Clinical Research Center for Ultrasonography, Lanzhou, China
| | - Xiangyu Lu
- School of Information Science and Engineering, Lanzhou University, Lanzhou, China
| | - Zihuan Yao
- School of Information Science and Engineering, Lanzhou University, Lanzhou, China
| | - Yide Ma
- School of Information Science and Engineering, Lanzhou University, Lanzhou, China.
| | - Fang Nie
- Ultrasound Medical Center, Lanzhou University Second Hospital, Lanzhou, China; Gansu Province Medical Engineering Research Center for Intelligence Ultrasound, Lanzhou, China; Gansu Province Clinical Research Center for Ultrasonography, Lanzhou, China
| |
Collapse
|
35
|
Allen TJ, Henze Bancroft LC, Unal O, Estkowski LD, Cashen TA, Korosec F, Strigel RM, Kelcz F, Fowler AM, Gegios A, Thai J, Lebel RM, Holmes JH. Evaluation of a Deep Learning Reconstruction for High-Quality T2-Weighted Breast Magnetic Resonance Imaging. Tomography 2023; 9:1949-1964. [PMID: 37888744 PMCID: PMC10611328 DOI: 10.3390/tomography9050152] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2023] [Revised: 10/16/2023] [Accepted: 10/16/2023] [Indexed: 10/28/2023] Open
Abstract
Deep learning (DL) reconstruction techniques to improve MR image quality are becoming commercially available with the hope that they will be applicable to multiple imaging application sites and acquisition protocols. However, before clinical implementation, these methods must be validated for specific use cases. In this work, the quality of standard-of-care (SOC) T2-weighted (T2w) and high-spatial-resolution (HR) breast imaging was assessed both with and without prototype DL reconstruction. Studies were performed using data collected from phantoms, 20 retrospectively collected SOC patient exams, and 56 prospectively acquired SOC and HR patient exams. Image quality was quantitatively assessed via signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and edge sharpness. Qualitatively, all in vivo images were scored by either two or four radiologist readers using 5-point Likert scales in the following categories: artifacts, perceived sharpness, perceived SNR, and overall quality. Differences in reader scores were tested for significance. Reader preference and perception of signal intensity changes were also assessed. Application of the DL reconstruction resulted in higher average SNR (1.2-2.8 times), CNR (1.0-1.8 times), and image sharpness (1.2-1.7 times). Qualitatively, the SOC acquisition with DL resulted in significantly improved image quality scores in all categories compared to non-DL images. The HR acquisition with DL significantly increased SNR, sharpness, and overall quality compared to both the non-DL SOC and the non-DL HR images. The HR acquisition required only a 20% increase in scan time compared to the SOC acquisition, and readers typically preferred DL images over their non-DL counterparts. Overall, the DL reconstruction demonstrated improved T2w image quality in clinical breast MRI.
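The quantitative image-quality metrics reported above (SNR, CNR) are typically computed from reader-defined regions of interest. The sketch below assumes the common ROI-based definitions (mean signal over the standard deviation of a background region); the study's exact measurement protocol may differ, and the image and masks here are synthetic placeholders.

```python
import numpy as np

def snr(image, signal_mask, noise_mask):
    """SNR as mean signal intensity over the standard deviation of a
    background (noise) region. Definitions vary between studies."""
    return image[signal_mask].mean() / image[noise_mask].std()

def cnr(image, lesion_mask, background_mask, noise_mask):
    """CNR as the absolute difference between mean lesion and mean
    background intensities, normalised by background noise."""
    diff = abs(image[lesion_mask].mean() - image[background_mask].mean())
    return diff / image[noise_mask].std()

# toy example with a synthetic 2D "image" and boolean ROI masks
rng = np.random.default_rng(0)
img = rng.normal(100, 5, size=(256, 256))
img[100:140, 100:140] += 60                      # bright "lesion"
lesion = np.zeros(img.shape, dtype=bool); lesion[100:140, 100:140] = True
tissue = np.zeros(img.shape, dtype=bool); tissue[20:60, 20:60] = True
air    = np.zeros(img.shape, dtype=bool); air[200:240, 200:240] = True
print(f"SNR {snr(img, lesion, air):.1f}, CNR {cnr(img, lesion, tissue, air):.1f}")
```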
Collapse
Affiliation(s)
- Timothy J. Allen
- Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, WI 53705, USA
| | - Leah C. Henze Bancroft
- Department of Radiology, University of Wisconsin-Madison, 600 Highland Avenue, Madison, WI 53792, USA
| | - Orhan Unal
- Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, WI 53705, USA
- Department of Radiology, University of Wisconsin-Madison, 600 Highland Avenue, Madison, WI 53792, USA
| | | | - Ty A. Cashen
- GE Healthcare, 3000 N Grandview Blvd, Waukesha, WI 53188, USA (R.M.L.)
| | - Frank Korosec
- Department of Radiology, University of Wisconsin-Madison, 600 Highland Avenue, Madison, WI 53792, USA
| | - Roberta M. Strigel
- Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, WI 53705, USA
- Department of Radiology, University of Wisconsin-Madison, 600 Highland Avenue, Madison, WI 53792, USA
- Carbone Cancer Center, University of Wisconsin-Madison, 600 Highland Avenue, Madison, WI 53792, USA
| | - Frederick Kelcz
- Department of Radiology, University of Wisconsin-Madison, 600 Highland Avenue, Madison, WI 53792, USA
| | - Amy M. Fowler
- Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, WI 53705, USA
- Department of Radiology, University of Wisconsin-Madison, 600 Highland Avenue, Madison, WI 53792, USA
- Carbone Cancer Center, University of Wisconsin-Madison, 600 Highland Avenue, Madison, WI 53792, USA
| | - Alison Gegios
- Department of Radiology, University of Wisconsin-Madison, 600 Highland Avenue, Madison, WI 53792, USA
| | - Janice Thai
- Department of Radiology, University of Wisconsin-Madison, 600 Highland Avenue, Madison, WI 53792, USA
| | - R. Marc Lebel
- GE Healthcare, 3000 N Grandview Blvd, Waukesha, WI 53188, USA (R.M.L.)
| | - James H. Holmes
- Department of Radiology, University of Iowa, 169 Newton Road, Iowa City, IA 52242, USA
- Department of Biomedical Engineering, University of Iowa, 3100 Seamans Center, Iowa City, IA 52242, USA
- Holden Comprehensive Cancer Center, University of Iowa, 200 Hawkins Drive, Iowa City, IA 52242, USA
| |
Collapse
|
36
|
da Silva HEC, Santos GNM, Leite AF, Mesquita CRM, Figueiredo PTDS, Stefani CM, de Melo NS. The use of artificial intelligence tools in cancer detection compared to the traditional diagnostic imaging methods: An overview of the systematic reviews. PLoS One 2023; 18:e0292063. [PMID: 37796946 PMCID: PMC10553229 DOI: 10.1371/journal.pone.0292063] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Accepted: 09/12/2023] [Indexed: 10/07/2023] Open
Abstract
BACKGROUND AND PURPOSE The aim of this overview article is to analyze the accuracy of Artificial Intelligence (AI) techniques in the identification and diagnosis of malignant tumors in adult patients, in comparison with conventional medical imaging diagnostic modalities. DATA SOURCES Using the PIRDs framework, a comprehensive literature search was conducted on PubMed, Cochrane, Scopus, Web of Science, LILACS, Embase, Scielo, EBSCOhost, and grey literature through ProQuest, Google Scholar, and JSTOR for systematic reviews of AI as a diagnostic model and/or detection tool for any cancer type in adult patients, compared with traditional diagnostic radiographic imaging. There were no limits on publication status, publication time, or language. Pairs of reviewers worked independently on study selection and risk-of-bias evaluation. RESULTS In total, 382 records were retrieved from the databases, 364 remained after removing duplicates, 32 satisfied the full-text reading criterion, and 9 papers were included in the qualitative synthesis. Despite heterogeneity in methodological aspects, patient populations, and techniques used, the studies found that several AI approaches are promising in terms of specificity, sensitivity, and diagnostic accuracy for the detection and diagnosis of malignant tumors. Compared with other machine learning algorithms, the support vector machine method performed better in cancer detection and diagnosis. Computer-assisted detection (CAD) has shown promise in aiding cancer detection compared with the traditional method of diagnosis. CONCLUSIONS Detection and diagnosis of malignant tumors with the help of AI appears feasible and accurate across different technologies, such as CAD systems, deep and machine learning algorithms, and radiomic analysis, when compared with the traditional model, although these technologies cannot replace the professional radiologist in the analysis of medical images. Despite limitations regarding generalization to all types of cancer, these AI tools might aid professionals, serving as an auxiliary and teaching tool, especially for less trained professionals. Further longitudinal studies with a longer follow-up duration are required for a better understanding of the clinical application of these artificial intelligence systems. TRIAL REGISTRATION Systematic review registration: PROSPERO CRD42022307403.
Collapse
Affiliation(s)
| | | | - André Ferreira Leite
- Faculty of Health Science, Department of Dentistry, Brasilia University, Brasilia, Brazil
| | | | | | - Cristine Miron Stefani
- Faculty of Health Science, Department of Dentistry, Brasilia University, Brasilia, Brazil
| | - Nilce Santos de Melo
- Faculty of Health Science, Department of Dentistry, Brasilia University, Brasilia, Brazil
| |
Collapse
|
37
|
Xu Z, Rauch DE, Mohamed RM, Pashapoor S, Zhou Z, Panthi B, Son JB, Hwang KP, Musall BC, Adrada BE, Candelaria RP, Leung JWT, Le-Petross HTC, Lane DL, Perez F, White J, Clayborn A, Reed B, Chen H, Sun J, Wei P, Thompson A, Korkut A, Huo L, Hunt KK, Litton JK, Valero V, Tripathy D, Yang W, Yam C, Ma J. Deep Learning for Fully Automatic Tumor Segmentation on Serially Acquired Dynamic Contrast-Enhanced MRI Images of Triple-Negative Breast Cancer. Cancers (Basel) 2023; 15:4829. [PMID: 37835523 PMCID: PMC10571741 DOI: 10.3390/cancers15194829] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2023] [Revised: 09/10/2023] [Accepted: 09/22/2023] [Indexed: 10/15/2023] Open
Abstract
Accurate tumor segmentation is required for quantitative image analyses, which are increasingly used for evaluation of tumors. We developed a fully automated and high-performance segmentation model of triple-negative breast cancer using a self-configurable deep learning framework and a large set of dynamic contrast-enhanced MRI images acquired serially over the patients' treatment course. Among all models, the top-performing one that was trained with the images across different time points of a treatment course yielded a Dice similarity coefficient of 93% and a sensitivity of 96% on baseline images. The top-performing model also produced accurate tumor size measurements, which is valuable for practical clinical applications.
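The Dice similarity coefficient and sensitivity quoted in this abstract have standard definitions for binary masks; a minimal NumPy sketch is given below. The toy 3-D masks are placeholders, not the study's DCE-MRI data or its self-configurable segmentation framework.

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

def voxel_sensitivity(pred, truth, eps=1e-7):
    """Fraction of true tumor voxels recovered by the prediction."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    return (tp + eps) / (truth.sum() + eps)

# toy 3D example: a predicted mask slightly shifted from the reference
truth = np.zeros((64, 64, 16), dtype=np.uint8); truth[20:40, 20:40, 4:12] = 1
pred  = np.zeros_like(truth);                   pred[22:42, 20:40, 4:12] = 1
print(f"Dice {dice_coefficient(pred, truth):.3f}, "
      f"sensitivity {voxel_sensitivity(pred, truth):.3f}")
```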
Collapse
Affiliation(s)
- Zhan Xu
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; (Z.X.)
| | - David E. Rauch
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; (Z.X.)
| | - Rania M. Mohamed
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Sanaz Pashapoor
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Zijian Zhou
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; (Z.X.)
| | - Bikash Panthi
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; (Z.X.)
| | - Jong Bum Son
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; (Z.X.)
| | - Ken-Pin Hwang
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; (Z.X.)
| | - Benjamin C. Musall
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; (Z.X.)
| | - Beatriz E. Adrada
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Rosalind P. Candelaria
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Jessica W. T. Leung
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Huong T. C. Le-Petross
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Deanna L. Lane
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Frances Perez
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Jason White
- Department of Breast Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Alyson Clayborn
- Department of Breast Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Brandy Reed
- Department of Clinical Research Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Huiqin Chen
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Jia Sun
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Peng Wei
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Alastair Thompson
- Section of Breast Surgery, Baylor College of Medicine, Houston, TX 77030, USA
| | - Anil Korkut
- Department of Bioinformatics & Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Lei Huo
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Kelly K. Hunt
- Department of Breast Surgical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Jennifer K. Litton
- Department of Breast Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Vicente Valero
- Department of Breast Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Debu Tripathy
- Department of Breast Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Wei Yang
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Clinton Yam
- Department of Breast Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
| | - Jingfei Ma
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; (Z.X.)
| |
Collapse
|
38
|
Bouchelouche K, Sathekge MM. Letter from the Editors. Semin Nucl Med 2023; 53:555-557. [PMID: 37451935 DOI: 10.1053/j.semnuclmed.2023.06.008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/18/2023]
|
39
|
Obaid AM, Turki A, Bellaaj H, Ksantini M, AlTaee A, Alaerjan A. Detection of Gallbladder Disease Types Using Deep Learning: An Informative Medical Method. Diagnostics (Basel) 2023; 13:1744. [PMID: 37238227 PMCID: PMC10217597 DOI: 10.3390/diagnostics13101744] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2023] [Revised: 05/09/2023] [Accepted: 05/10/2023] [Indexed: 05/28/2023] Open
Abstract
Despite extensive research and sustained efforts to advance the healthcare sector, there is still a strong need to diagnose diseases rapidly and efficiently. The complexity of some disease mechanisms on the one hand, and the life-saving potential of early intervention on the other, pose major challenges for the development of tools for early detection and diagnosis. Deep learning (DL), an area of artificial intelligence (AI), can aid in the early diagnosis of gallbladder (GB) disease based on ultrasound images (UI). Most previous studies have considered the classification of only one GB disease. In this work, we applied a deep neural network (DNN)-based classification model to a purpose-built database in order to detect nine diseases at once and to determine the disease type from UI. In the first step, we built a balanced database composed of 10,692 UI of the GB organ from 1782 patients. These images were carefully collected from three hospitals over roughly three years and then classified by professionals. In the second step, we preprocessed and enhanced the dataset images in preparation for segmentation. Finally, we applied and compared four DNN models to analyze and classify these images into nine GB disease types. All the models produced good results in detecting GB diseases; the best was the MobileNet model, with an accuracy of 98.35%.
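As a rough illustration of the best-performing approach named above, the sketch below fine-tunes torchvision's MobileNetV2 (used here as a stand-in for "MobileNet") with a nine-way classification head. The data pipeline, hyperparameters, and the random toy batch are assumptions; only the nine-class setting comes from the abstract.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_GB_CLASSES = 9  # nine gallbladder disease types, per the abstract

# Pretrained MobileNetV2 backbone with a new 9-way classification head
# (torchvision's MobileNetV2 stands in for the paper's "MobileNet").
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.last_channel, NUM_GB_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One gradient step on a batch of preprocessed ultrasound images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# toy batch: 4 RGB ultrasound crops resized to 224x224
print(train_step(torch.randn(4, 3, 224, 224),
                 torch.randint(0, NUM_GB_CLASSES, (4,))))
```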
Collapse
Affiliation(s)
- Ahmed Mahdi Obaid
- CEMLab, National School of Electronics and Telecommunications of Sfax, University of Sfax, Sfax 3029, Tunisia
| | - Amina Turki
- CEMLab, National Engineering School of Sfax, University of Sfax, Sfax 3029, Tunisia; (A.T.); (M.K.)
| | - Hatem Bellaaj
- ReDCAD, National Engineering School of Sfax, University of Sfax, Sfax 3029, Tunisia;
| | - Mohamed Ksantini
- CEMLab, National Engineering School of Sfax, University of Sfax, Sfax 3029, Tunisia; (A.T.); (M.K.)
| | | | - Alaa Alaerjan
- College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia;
| |
Collapse
|
40
|
Cabral BP, Braga LAM, Syed-Abdul S, Mota FB. Future of Artificial Intelligence Applications in Cancer Care: A Global Cross-Sectional Survey of Researchers. Curr Oncol 2023; 30:3432-3446. [PMID: 36975473 PMCID: PMC10047823 DOI: 10.3390/curroncol30030260] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2023] [Revised: 03/07/2023] [Accepted: 03/11/2023] [Indexed: 03/18/2023] Open
Abstract
Cancer significantly contributes to global mortality, with 9.3 million annual deaths. To alleviate this burden, the utilization of artificial intelligence (AI) applications has been proposed in various domains of oncology. However, the potential applications of AI and the barriers to its widespread adoption remain unclear. This study aimed to address this gap by conducting a cross-sectional, global, web-based survey of over 1000 AI and cancer researchers. The results indicated that most respondents believed AI would positively impact cancer grading and classification, follow-up services, and diagnostic accuracy. Despite these benefits, several limitations were identified, including difficulties incorporating AI into clinical practice and the lack of standardization in cancer health data. These limitations pose significant challenges, particularly regarding testing, validation, certification, and auditing AI algorithms and systems. The results of this study provide valuable insights for informed decision-making for stakeholders involved in AI and cancer research and development, including individual researchers and research funding agencies.
Collapse
Affiliation(s)
| | - Luiza Amara Maciel Braga
- Laboratory of Cellular Communication, Oswaldo Cruz Institute, Oswaldo Cruz Foundation, Rio de Janeiro 21040-360, Brazil
| | - Shabbir Syed-Abdul
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan
- School of Gerontology and Long-Term Care, College of Nursing, Taipei Medical University, Taipei 110, Taiwan
| | - Fabio Batista Mota
- Laboratory of Cellular Communication, Oswaldo Cruz Institute, Oswaldo Cruz Foundation, Rio de Janeiro 21040-360, Brazil
| |
Collapse
|
41
|
Slanetz PJ. The Potential of Deep Learning to Revolutionize Current Breast MRI Practice. Radiology 2023; 306:e222527. [PMID: 36378037 DOI: 10.1148/radiol.222527] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Affiliation(s)
- Priscilla J Slanetz
- From the Division of Breast Imaging, Department of Radiology, Boston University Medical Center, 820 Harrison Ave, FGH-4, Boston, MA 02118; and Boston University Chobanian & Avedisian School of Medicine, Boston, Mass
| |
Collapse
|
42
|
An Efficient USE-Net Deep Learning Model for Cancer Detection. INT J INTELL SYST 2023. [DOI: 10.1155/2023/8509433] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/25/2023]
Abstract
Breast cancer (BrCa) is the most common cancer in women worldwide. Classifying BrCa images is extremely important for detecting BrCa at an earlier stage and monitoring it during treatment. Computer-aided detection methods have been used to interpret BrCa images and improve detection during the screening and treatment stages. However, such methods often fail to classify newly generated images correctly. The main objective of this research is to classify newly generated BrCa images accurately. The model performs preprocessing, segmentation, feature extraction, and classification. In preprocessing, a hybrid median filter (HMF) is used to eliminate noise in the images. The contrast of the images is enhanced using quadrant dynamic histogram equalization (QDHE). Then, ROI segmentation is performed using the USE-Net deep learning model. The CaffeNet model is used for feature extraction on the segmented images, and finally, classification is performed using an improved random forest (IRF) with extreme gradient boosting (XGB). The model obtained 97.87% accuracy, 98.45% sensitivity, 95.24% specificity, 98.96% precision, and a 98.70% F1 score for ultrasound images, and 98.31% accuracy, 99.29% sensitivity, 90.20% specificity, 98.82% precision, and a 99.05% F1 score for mammogram images.
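The "improved random forest with extreme gradient boosting" classifier is specific to this paper; as a generic approximation, the sketch below trains a soft-voting ensemble of a random forest and a gradient-boosted classifier on pre-extracted deep features using scikit-learn. The random feature matrix stands in for CaffeNet features of segmented ROIs, and xgboost's XGBClassifier could be substituted for the boosted member.

```python
import numpy as np
from sklearn.ensemble import (RandomForestClassifier, GradientBoostingClassifier,
                              VotingClassifier)
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

# Placeholder for features extracted from segmented ROIs by a CNN such as
# CaffeNet; random vectors stand in for the real feature matrix.
rng = np.random.default_rng(42)
X = rng.normal(size=(600, 256))                  # 600 ROIs x 256 deep features
y = rng.integers(0, 2, size=600)                 # benign (0) vs malignant (1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

# Soft-voting ensemble of a random forest and a gradient-boosted classifier.
ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0))],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
pred = ensemble.predict(X_te)
print(f"accuracy {accuracy_score(y_te, pred):.3f}, F1 {f1_score(y_te, pred):.3f}")
```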
Collapse
|
43
|
Windsor GO, Bai H, Lourenco AP, Jiao Z. Application of artificial intelligence in predicting lymph node metastasis in breast cancer. FRONTIERS IN RADIOLOGY 2023; 3:928639. [PMID: 37492388 PMCID: PMC10364981 DOI: 10.3389/fradi.2023.928639] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/26/2022] [Accepted: 01/31/2023] [Indexed: 07/27/2023]
Abstract
Breast cancer is a leading cause of death for women globally. A characteristic of breast cancer is its ability to metastasize to distant regions of the body, which it achieves by first spreading to the axillary lymph nodes. Traditional diagnosis of axillary lymph node metastasis relies on an invasive technique that carries potential clinical complications for breast cancer patients. The rise of artificial intelligence in the medical imaging field has led to innovative deep learning models that can predict the metastatic status of axillary lymph nodes noninvasively, sparing patients unnecessary biopsies and dissections. In this review, we discuss the success of various deep learning artificial intelligence models across multiple imaging modalities in predicting axillary lymph node metastasis.
Collapse
Affiliation(s)
- Gabrielle O. Windsor
- Department of Diagnostic Imaging, Brown University, Providence, RI, United States
| | - Harrison Bai
- Department of Radiology and Radiological Sciences, Johns Hopkins University, Baltimore, MD, United States
| | - Ana P. Lourenco
- Department of Diagnostic Imaging, Brown University, Providence, RI, United States
| | - Zhicheng Jiao
- Department of Diagnostic Imaging, Brown University, Providence, RI, United States
| |
Collapse
|
44
|
End-to-End Deep-Learning-Based Diagnosis of Benign and Malignant Orbital Tumors on Computed Tomography Images. J Pers Med 2023; 13:jpm13020204. [PMID: 36836437 PMCID: PMC9960119 DOI: 10.3390/jpm13020204] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2022] [Revised: 01/16/2023] [Accepted: 01/22/2023] [Indexed: 01/26/2023] Open
Abstract
Determining the nature of orbital tumors is challenging for current imaging interpretation methods, which hinders timely treatment. This study aimed to propose an end-to-end deep learning system to automatically diagnose orbital tumors. A multi-center dataset of 602 non-contrast-enhanced computed tomography (CT) images was prepared. After image annotation and preprocessing, the CT images were used to train and test the deep learning (DL) model in the following two stages: orbital tumor segmentation and classification. The performance on the testing set was compared with the assessment of three ophthalmologists. For tumor segmentation, the model achieved a satisfactory performance, with an average dice similarity coefficient of 0.89. The classification model had an accuracy of 86.96%, a sensitivity of 80.00%, and a specificity of 94.12%. The area under the receiver operating characteristic curve (AUC) of the 10-fold cross-validation ranged from 0.8439 to 0.9546. There was no significant difference in diagnostic performance between the DL-based system and the three ophthalmologists (p > 0.05). The proposed end-to-end deep learning system could deliver accurate segmentation and diagnosis of orbital tumors based on noninvasive CT images. Its effectiveness and independence from human interaction suggest potential for tumor screening in the orbit and other parts of the body.
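The 10-fold cross-validated AUC range quoted above can be reproduced in form (not in value) with scikit-learn's StratifiedKFold and roc_auc_score, as sketched below. The logistic-regression classifier and random features are placeholders for the study's deep-learning classifier and CT-derived inputs.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score
from sklearn.linear_model import LogisticRegression

# Placeholder per-lesion features (e.g. pooled CNN embeddings) and binary
# labels (0 = benign, 1 = malignant); random data stands in here.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 64))
y = rng.integers(0, 2, size=300)

aucs = []
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))

print(f"10-fold AUC range: {min(aucs):.4f} - {max(aucs):.4f} "
      f"(mean {np.mean(aucs):.4f})")
```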
Collapse
|
45
|
Thalakottor LA, Shirwaikar RD, Pothamsetti PT, Mathews LM. Classification of Histopathological Images from Breast Cancer Patients Using Deep Learning: A Comparative Analysis. Crit Rev Biomed Eng 2023; 51:41-62. [PMID: 37581350 DOI: 10.1615/critrevbiomedeng.2023047793] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/16/2023]
Abstract
Cancer, a leading cause of mortality, is distinguished by the multi-stage conversion of healthy cells into cancer cells. Early discovery of the disease can significantly enhance the possibility of survival. Histology is a procedure in which the tissue of interest is first surgically removed from a patient and cut into thin slices. A pathologist then mounts these slices on glass slides, stains them with specialized dyes such as hematoxylin and eosin (H&E), and inspects the slides under a microscope. Unfortunately, manual analysis of histopathology images from breast cancer biopsies is time consuming. The literature suggests that automated techniques based on deep learning algorithms can increase the speed and accuracy of detecting abnormalities within histopathological specimens obtained from breast cancer patients. This paper highlights some recent work on such algorithms and provides a comparative study of various deep learning methods. For the present study, the breast cancer histopathological database (BreakHis) is used. The images are processed to enhance their inherent features, classified, and evaluated for classification accuracy. Three convolutional neural network (CNN) models, visual geometry group (VGG19), densely connected convolutional networks (DenseNet201), and residual neural network (ResNet50V2), were employed to analyze the images. Of these, DenseNet201 performed best, attaining an accuracy of 91.3%. The paper also reviews different machine-learning-based classification techniques, including CNN-based models, some of which may replace manual breast cancer diagnosis and detection.
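A comparison of pretrained backbones like the one described can be set up by swapping each network's classification head for a two-way (benign/malignant) layer. The sketch below uses torchvision's vgg19, densenet201, and resnet50; note that ResNet50V2 is a Keras model with no exact torchvision equivalent, so resnet50 is used here as a stand-in, and the training/evaluation loop is omitted.

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # benign vs malignant BreakHis patches

def build_backbone(name):
    """Return an ImageNet-pretrained backbone with a new 2-way head.
    resnet50 is a stand-in: ResNet50V2 is not available in torchvision."""
    if name == "vgg19":
        m = models.vgg19(weights=models.VGG19_Weights.DEFAULT)
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, NUM_CLASSES)
    elif name == "densenet201":
        m = models.densenet201(weights=models.DenseNet201_Weights.DEFAULT)
        m.classifier = nn.Linear(m.classifier.in_features, NUM_CLASSES)
    elif name == "resnet50":
        m = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    else:
        raise ValueError(name)
    return m

candidates = {name: build_backbone(name)
              for name in ("vgg19", "densenet201", "resnet50")}
for name, model in candidates.items():
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.1f} M parameters")
```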
Collapse
Affiliation(s)
- Louie Antony Thalakottor
- Department of Information Science and Engineering, Ramaiah Institute of Technology (RIT), 560054, India
| | - Rudresh Deepak Shirwaikar
- Department of Computer Engineering, Agnel Institute of Technology and Design (AITD), Goa University, Assagao, Goa, India, 403507
| | - Pavan Teja Pothamsetti
- Department of Information Science and Engineering, Ramaiah Institute of Technology (RIT), 560054, India
| | - Lincy Meera Mathews
- Department of Information Science and Engineering, Ramaiah Institute of Technology (RIT), 560054, India
| |
Collapse
|
46
|
Praveen SP, Srinivasu PN, Shafi J, Wozniak M, Ijaz MF. ResNet-32 and FastAI for diagnoses of ductal carcinoma from 2D tissue slides. Sci Rep 2022; 12:20804. [PMID: 36460697 PMCID: PMC9716161 DOI: 10.1038/s41598-022-25089-2] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Accepted: 11/23/2022] [Indexed: 12/03/2022] Open
Abstract
Carcinoma is a primary source of morbidity in women globally, with metastatic disease accounting for most deaths. Its early discovery and diagnosis may significantly increase the odds of survival. Breast cancer imaging is critical for early identification, clinical staging, management choices, and treatment planning. In the current study, the FastAI framework is used with the ResNet-32 model to precisely identify ductal carcinoma. ResNet-32 has fewer layers than most of its counterparts while delivering almost identical performance. FastAI accelerates training of deep learning models via GPU acceleration and an efficient callback mechanism, resulting in faster execution with less code and better precision in classifying the tissue slides. Residual networks (ResNets) are known to mitigate the vanishing-gradient problem and to learn features effectively. Integrating these two computationally efficient technologies yields high accuracy with reasonable computational effort. The proposed model shows considerable advantages in evaluation metrics such as sensitivity, specificity, accuracy, and F1 score compared with other widely used deep learning models. These results suggest that the proposed approach might assist practitioners in analyzing breast cancer (BC) cases appropriately, potentially averting future complications and deaths. Clinical and pathological analysis and predictive accuracy have been improved with digital image processing.
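A minimal sketch of the FastAI-style workflow is shown below, assuming the fastai v2 API (ImageDataLoaders.from_folder, vision_learner, fine_tune), a hypothetical folder of tissue-patch images organised by class, and torchvision's resnet34 as a stand-in for the paper's ResNet-32 variant, which is not bundled with torchvision.

```python
# A minimal fastai v2-style fine-tuning sketch (not the authors' exact code).
# Assumptions: images are organised in class sub-folders under `data_dir`,
# and torchvision's resnet34 stands in for the paper's ResNet-32.
from fastai.vision.all import (ImageDataLoaders, Resize, vision_learner,
                               accuracy, error_rate)
from torchvision.models import resnet34

data_dir = "path/to/idc_tissue_patches"   # hypothetical dataset folder

# Build dataloaders with an 80/20 train/validation split and 224x224 resizing.
dls = ImageDataLoaders.from_folder(
    data_dir, valid_pct=0.2, seed=42, item_tfms=Resize(224), bs=32
)

# vision_learner wires the pretrained backbone to a new head and handles
# GPU placement, one-cycle scheduling, and callbacks behind the scenes.
learn = vision_learner(dls, resnet34, metrics=[accuracy, error_rate])
learn.fine_tune(3)                         # freeze-then-unfreeze fine-tuning

preds, targets = learn.get_preds()         # validation-set probabilities
```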
Collapse
Affiliation(s)
- S Phani Praveen
- Department of Computer Science and Engineering, Prasad V Potluri Siddhartha Institute of Technology, Vijayawada, 520007, India
| | - Parvathaneni Naga Srinivasu
- Department of Computer Science and Engineering, Prasad V Potluri Siddhartha Institute of Technology, Vijayawada, 520007, India
| | - Jana Shafi
- Department of Computer Science, College of Arts and Science, Prince Sattam bin Abdul Aziz University, Wadi Ad-Dawasir, 11991, Saudi Arabia
| | - Marcin Wozniak
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100, Gliwice, Poland.
| | - Muhammad Fazal Ijaz
- Department of Mechanical Engineering, Faculty of Engineering and Information Technology, The University of Melbourne, Grattan Street, Parkville, VIC, 3010, Australia.
| |
Collapse
|
47
|
Mousser W, Ouadfel S, Taleb-Ahmed A, Kitouni I. IDT: An incremental deep tree framework for biological image classification. Artif Intell Med 2022; 134:102392. [PMID: 36462909 DOI: 10.1016/j.artmed.2022.102392] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2022] [Revised: 08/10/2022] [Accepted: 08/29/2022] [Indexed: 12/13/2022]
Abstract
Breast and cervical cancers are, respectively, the first and fourth most common causes of cancer death in females. Automated systems based on artificial intelligence are expected to enable early diagnosis, which significantly increases the chances of proper treatment and survival. Although convolutional neural networks (CNNs) have achieved human-level performance in object classification tasks, the steady growth of medical data and the continuous increase in the number of classes make it difficult for them to learn new tasks without being re-trained from scratch. Moreover, fine-tuning and transfer learning in deep models lead to the well-known catastrophic forgetting problem. In this paper, an Incremental Deep Tree (IDT) framework for biological image classification is proposed to address the catastrophic forgetting of CNNs, allowing them to learn new classes while maintaining acceptable accuracy on previously learnt ones. To evaluate the performance of our approach, the IDT framework is compared against three popular incremental methods, namely iCaRL, LwF and SupportNet. The experiments achieved 87% accuracy on the MNIST dataset, and the values obtained on the BreakHis, LBC and SIPaKMeD datasets are promising at 92%, 98% and 93%, respectively.
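Of the incremental-learning baselines named above, Learning without Forgetting (LwF) is the most compact to illustrate: the new model is trained with a standard cross-entropy loss plus a distillation term that keeps its outputs on the old classes close to those of the frozen previous model. The sketch below is a generic LwF-style objective in PyTorch, not the IDT framework itself; the temperature and weighting are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def lwf_loss(new_logits, labels, old_logits, n_old_classes, T=2.0, alpha=0.5):
    """Learning-without-Forgetting style objective: cross-entropy on the
    current task plus a distillation term that keeps the new model's
    predictions on the old classes close to the frozen old model's."""
    ce = F.cross_entropy(new_logits, labels)
    # soften both distributions over the old classes with temperature T
    old_soft = F.softmax(old_logits[:, :n_old_classes] / T, dim=1)
    new_log_soft = F.log_softmax(new_logits[:, :n_old_classes] / T, dim=1)
    distill = F.kl_div(new_log_soft, old_soft, reduction="batchmean") * (T * T)
    return ce + alpha * distill

# toy example: 8 previously learnt classes, 2 newly added ones (10 total)
batch, n_old, n_total = 4, 8, 10
new_logits = torch.randn(batch, n_total, requires_grad=True)
old_logits = torch.randn(batch, n_old)   # frozen old model, old classes only
labels = torch.randint(0, n_total, (batch,))
print(lwf_loss(new_logits, labels, old_logits, n_old))
```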
Collapse
Affiliation(s)
- Wafa Mousser
- Department of Computer Sciences and Applications, Laboratory of Complex Systems' Modeling and Implementation, Abdelhamid Mehri Constantine 2 University, National Biotechnology Research Center Constantine, Algeria.
| | - Salima Ouadfel
- Department of Computer Sciences and Applications, Abdelhamid Mehri Constantine 2 University, Algeria.
| | - Abdelmalik Taleb-Ahmed
- Institut d'Electronique de Microélectronique et de Nanotechnologie (IEMN), UMR 8520, Université Polytechnique Hauts de France, Université de Lille, CNRS, 59313 Valenciennes, France.
| | - Ilham Kitouni
- LISIA Laboratory (Laboratoire d'Informatique en Science de données et Intelligence Artificielle), Abdelhamid Mehri Constantine 2 University, Algeria.
| |
Collapse
|
48
|
Chen Y, Jia Y, Zhang X, Bai J, Li X, Ma M, Sun Z, Pei Z. TSHVNet: Simultaneous Nuclear Instance Segmentation and Classification in Histopathological Images Based on Multiattention Mechanisms. BIOMED RESEARCH INTERNATIONAL 2022; 2022:7921922. [PMID: 36457339 PMCID: PMC9708332 DOI: 10.1155/2022/7921922] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/09/2022] [Revised: 09/30/2022] [Accepted: 10/03/2022] [Indexed: 09/27/2023]
Abstract
Accurate nuclear instance segmentation and classification in histopathologic images are the foundation of cancer diagnosis and prognosis. Several challenges restrict the development of accurate simultaneous nuclear instance segmentation and classification. First, the visual appearance of nuclei from different categories can be similar, making it difficult to distinguish nucleus types. Second, it is difficult to separate highly clustered nuclear instances. Third, few current studies have considered the global dependencies among diverse nuclear instances. In this article, we propose a novel deep learning framework named TSHVNet, which integrates multiattention modules (i.e., Transformer and SimAM) into the state-of-the-art HoVer-Net to achieve more accurate nuclear instance segmentation and classification. Specifically, the Transformer attention module is employed on the trunk of the HoVer-Net to model long-distance relationships among diverse nuclear instances. The SimAM attention modules are deployed on both the trunk and the branches to apply 3-D channel and spatial attention and assign appropriate weights to neurons. Finally, we validate the proposed method on two public datasets: PanNuke and CoNSeP. Comparative results show that TSHVNet performs strongly against state-of-the-art methods. In particular, compared with the original HoVer-Net, nuclear instance segmentation performance measured by the PQ index increased by 1.4% and 2.8% on the CoNSeP and PanNuke datasets, respectively, and nuclear classification performance measured by F1 score increased by 2.4% and 2.5%, respectively. Therefore, the proposed multiattention-based TSHVNet shows great potential for simultaneous nuclear instance segmentation and classification.
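SimAM, one of the two attention modules integrated into TSHVNet, is parameter-free and compact enough to sketch. The module below follows the published SimAM energy formulation (activations weighted by a sigmoid of their inverse energy); its exact placement on the HoVer-Net trunk and branches is not reproduced here and the feature map is a toy placeholder.

```python
import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Parameter-free SimAM attention: each activation is weighted by a
    sigmoid of its inverse energy, computed per channel over the spatial
    dimensions."""
    def __init__(self, e_lambda=1e-4):
        super().__init__()
        self.e_lambda = e_lambda

    def forward(self, x):
        b, c, h, w = x.shape
        n = h * w - 1
        # squared deviation of each activation from its channel mean
        d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
        # channel-wise variance estimate
        v = d.sum(dim=(2, 3), keepdim=True) / n
        # inverse energy: distinctive (low-energy) neurons get larger weights
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5
        return x * torch.sigmoid(e_inv)

# toy example on a feature map from a nuclei-segmentation trunk
features = torch.randn(2, 64, 32, 32)
print(SimAM()(features).shape)   # torch.Size([2, 64, 32, 32])
```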
Collapse
Affiliation(s)
- Yuli Chen
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, China
| | - Yuhang Jia
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, China
| | - Xinxin Zhang
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, China
| | - Jiayang Bai
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, China
| | - Xue Li
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, China
| | - Miao Ma
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, China
| | - Zengguo Sun
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, China
| | - Zhao Pei
- School of Computer Science, Shaanxi Normal University, Xi'an 710119, China
| |
Collapse
|
49
|
Applying Deep Learning for Breast Cancer Detection in Radiology. Curr Oncol 2022; 29:8767-8793. [PMID: 36421343 PMCID: PMC9689782 DOI: 10.3390/curroncol29110690] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Revised: 11/12/2022] [Accepted: 11/14/2022] [Indexed: 11/18/2022] Open
Abstract
Recent advances in deep learning have enhanced medical imaging research. Breast cancer is the most prevalent cancer among women, and many applications have been developed to improve its early detection. The purpose of this review is to examine how various deep learning methods can be applied to breast cancer screening workflows. We summarize deep learning methods, data availability, and the different breast cancer screening modalities, including mammography, thermography, ultrasound, and magnetic resonance imaging, and review the literature on deep learning in diagnostic breast imaging. In conclusion, we discuss some of the limitations and opportunities of integrating artificial intelligence into breast cancer clinical practice.
Collapse
|
50
|
Bouchelouche K, Sathekge MM. Letter from the Editors. Semin Nucl Med 2022; 52:505-507. [PMID: 35906038 DOI: 10.1053/j.semnuclmed.2022.07.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|