1
Chen Y, Liu Y, Wang C, Elliott M, Kwok CF, Peña-Solorzano C, Tian Y, Liu F, Frazer H, McCarthy DJ, Carneiro G. BRAIxDet: Learning to detect malignant breast lesion with incomplete annotations. Med Image Anal 2024; 96:103192. [PMID: 38810516] [DOI: 10.1016/j.media.2024.103192]
Abstract
Methods to detect malignant lesions from screening mammograms are usually trained with fully annotated datasets, where images are labelled with the localisation and classification of cancerous lesions. However, real-world screening mammogram datasets commonly have a subset that is fully annotated and another subset that is weakly annotated with just the global classification (i.e., without lesion localisation). Given the large size of such datasets, researchers usually face a dilemma with the weakly annotated subset: to not use it or to fully annotate it. The first option will reduce detection accuracy because it does not use the whole dataset, and the second option is too expensive given that the annotation needs to be done by expert radiologists. In this paper, we propose a middle-ground solution for the dilemma, which is to formulate the training as a weakly- and semi-supervised learning problem that we refer to as malignant breast lesion detection with incomplete annotations. To address this problem, our new method comprises two stages, namely: (1) pre-training a multi-view mammogram classifier with weak supervision from the whole dataset, and (2) extending the trained classifier to become a multi-view detector that is trained with semi-supervised student-teacher learning, where the training set contains fully and weakly-annotated mammograms. We provide extensive detection results on two real-world screening mammogram datasets containing incomplete annotations and show that our proposed approach achieves state-of-the-art results in the detection of malignant breast lesions with incomplete annotations.
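The semi-supervised student-teacher stage described in this abstract is, in the mean-teacher family of methods, typically built around a teacher whose parameters are an exponential moving average (EMA) of the student's. As a minimal illustrative sketch (not the authors' code; the decay `alpha` is an assumed hyperparameter):

```python
def ema_update(teacher_weights, student_weights, alpha=0.9):
    """Update teacher parameters as an exponential moving average (EMA)
    of the student parameters, as in mean-teacher semi-supervised learning.
    `alpha` is the EMA decay (an assumed, illustrative value)."""
    return [alpha * t + (1.0 - alpha) * s
            for t, s in zip(teacher_weights, student_weights)]

# Toy example: the teacher drifts slowly toward the student.
teacher = [0.0, 0.0]
student = [1.0, -1.0]
for _ in range(3):
    teacher = ema_update(teacher, student, alpha=0.9)
# after 3 steps, teacher = [1 - 0.9**3, -(1 - 0.9**3)] = [0.271, -0.271]
```

The slowly moving teacher then generates pseudo-labels on the weakly annotated images, which supervise the student in turn.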
Affiliation(s)
- Yuanhong Chen
- Australian Institute for Machine Learning, The University of Adelaide, Adelaide, Australia.
- Yuyuan Liu
- Australian Institute for Machine Learning, The University of Adelaide, Adelaide, Australia
- Chong Wang
- Australian Institute for Machine Learning, The University of Adelaide, Adelaide, Australia.
- Michael Elliott
- Bioinformatics and Cellular Genomics, St Vincent's Institute of Medical Research, Melbourne, Australia
- Chun Fung Kwok
- Bioinformatics and Cellular Genomics, St Vincent's Institute of Medical Research, Melbourne, Australia
- Carlos Peña-Solorzano
- Bioinformatics and Cellular Genomics, St Vincent's Institute of Medical Research, Melbourne, Australia
- Yu Tian
- Australian Institute for Machine Learning, The University of Adelaide, Adelaide, Australia
- Fengbei Liu
- Australian Institute for Machine Learning, The University of Adelaide, Adelaide, Australia
- Helen Frazer
- St Vincent's Hospital Melbourne, Melbourne, Australia
- Davis J McCarthy
- Bioinformatics and Cellular Genomics, St Vincent's Institute of Medical Research, Melbourne, Australia; Melbourne Integrative Genomics, The University of Melbourne, Melbourne, Australia
- Gustavo Carneiro
- Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, United Kingdom
2
Bhalla D, Rangarajan K, Chandra T, Banerjee S, Arora C. Reproducibility and Explainability of Deep Learning in Mammography: A Systematic Review of Literature. Indian J Radiol Imaging 2024; 34:469-487. [PMID: 38912238] [PMCID: PMC11188703] [DOI: 10.1055/s-0043-1775737]
Abstract
Background Although abundant literature is currently available on the use of deep learning for breast cancer detection in mammography, the quality of such literature is widely variable. Purpose To evaluate published literature on breast cancer detection in mammography for reproducibility and to ascertain best practices for model design. Methods The PubMed and Scopus databases were searched to identify records that described the use of deep learning to detect lesions or classify images into cancer or noncancer. A modified version of the Quality Assessment of Diagnostic Accuracy Studies tool (mQUADAS-2) was developed for this review and applied to the included studies. Reported results (area under the receiver operating characteristic curve [AUC], sensitivity, specificity) were recorded. Results A total of 12,123 records were screened, of which 107 met the inclusion criteria. Training and test datasets, the key idea behind each model architecture, and results were recorded for these studies. Based on the mQUADAS-2 assessment, 103 studies had a high risk of bias due to nonrepresentative patient selection. Four studies were of adequate quality, of which three trained their own model and one used a commercial network; ensemble models were used in two of these. Common strategies for model training included patch classifiers, image classification networks (ResNet in 67%), and object detection networks (RetinaNet in 67%). The highest reported AUC was 0.927 ± 0.008 on a screening dataset, rising to 0.945 (0.919-0.968) on an enriched subset. Higher AUC (0.955) and specificity (98.5%) were reached when combined radiologist and artificial intelligence readings were used than with either alone. None of the studies provided explainability beyond localization accuracy, and none studied the interaction between AI and radiologists in a real-world setting.
Conclusion While deep learning holds much promise in mammography interpretation, evaluation in a reproducible clinical setting and explainable networks are the need of the hour.
Affiliation(s)
- Deeksha Bhalla
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Krithika Rangarajan
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Tany Chandra
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Subhashis Banerjee
- Department of Computer Science and Engineering, Indian Institute of Technology, New Delhi, India
- Chetan Arora
- Department of Computer Science and Engineering, Indian Institute of Technology, New Delhi, India
3
Jiang Z, Gandomkar Z, Trieu PDY, Taba ST, Barron ML, Lewis SJ. AI for interpreting screening mammograms: implications for missed cancer in double reading practices and challenging-to-locate lesions. Sci Rep 2024; 14:11893. [PMID: 38789575] [PMCID: PMC11126609] [DOI: 10.1038/s41598-024-62324-4]
Abstract
Although the value of adding AI as a surrogate second reader in various scenarios has been investigated, it is unknown whether implementing an AI tool within double reading practice would capture additional subtle cancers missed by both radiologists who independently assessed the mammograms. This paper assesses the effectiveness of two state-of-the-art Artificial Intelligence (AI) models in detecting retrospectively identified missed cancers within a screening program employing double reading practices. The study also explores the agreement between AI and radiologists in locating lesions, considering various levels of concordance among the radiologists. The Globally-aware Multiple Instance Classifier (GMIC) and Global-Local Activation Maps (GLAM) models were fine-tuned for our dataset. We evaluated the sensitivity of both models on missed cancers retrospectively identified by a panel of three radiologists who reviewed prior examinations of 729 cancer cases detected in a screening program with double reading practice. Two of these experts annotated the lesions, and based on their concordance levels, cases were categorized as 'almost perfect', 'substantial', 'moderate', and 'poor'. We employed Similarity or Histogram Intersection (SIM) and Kullback-Leibler Divergence (KLD) metrics to compare saliency maps of malignant cases from the AI models with annotations from the radiologists in each category. In total, 24.82% of cancers were labeled as 'missed'. The sensitivity of GMIC and GLAM on the missed cancer cases was 82.98% and 79.79%, respectively, while on the true screen-detected cancers it was 89.54% and 87.25%, respectively (p-values for the difference in sensitivity < 0.05). As anticipated, SIM and KLD from saliency maps were best in the 'almost perfect' category, followed by 'substantial', 'moderate', and 'poor'. Both GMIC and GLAM (p-values < 0.05) exhibited greater sensitivity at higher concordance levels.
Even in a screening program with independent double reading, adding AI could potentially identify missed cancers. However, the challenging-to-locate lesions for radiologists impose a similar challenge for AI.
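The SIM and KLD metrics used above have simple closed forms once the saliency map and the annotation mask are normalised into probability distributions. A minimal sketch (not the authors' implementation; the 4x4 toy map and mask are hypothetical):

```python
import numpy as np

def histogram_intersection(p, q):
    """SIM: sum of element-wise minima of two normalised distributions (1 = identical)."""
    return float(np.minimum(p, q).sum())

def kl_divergence(p, q, eps=1e-12):
    """KLD between annotation distribution p and saliency distribution q (0 = identical)."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def normalize(m):
    """Turn a non-negative map into a probability distribution."""
    m = m.astype(float)
    return m / m.sum()

# Toy 4x4 "saliency map" vs. a radiologist annotation mask.
saliency = normalize(np.array([[0, 1, 1, 0], [0, 3, 3, 0], [0, 1, 1, 0], [0, 0, 0, 0]]))
mask = normalize(np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]]))
sim = histogram_intersection(saliency, mask)  # -> 0.7 (partial overlap)
kld = kl_divergence(mask, saliency)
```

Higher SIM and lower KLD indicate that the model's saliency agrees more closely with the radiologists' annotation, which is how the concordance categories above were compared.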
Affiliation(s)
- Zhengqiang Jiang
- Discipline of Medical Imaging Sciences, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia.
- Ziba Gandomkar
- Discipline of Medical Imaging Sciences, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Phuong Dung Yun Trieu
- Discipline of Medical Imaging Sciences, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Seyedamir Tavakoli Taba
- Discipline of Medical Imaging Sciences, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Melissa L Barron
- Discipline of Medical Imaging Sciences, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Sarah J Lewis
- Discipline of Medical Imaging Sciences, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- School of Health Sciences, Western Sydney University, Campbelltown, Australia
4
Zhong Y, Piao Y, Tan B, Liu J. A multi-task fusion model based on a residual multi-layer perceptron network for mammographic breast cancer screening. Comput Methods Programs Biomed 2024; 247:108101. [PMID: 38432087] [DOI: 10.1016/j.cmpb.2024.108101]
Abstract
BACKGROUND AND OBJECTIVE Deep learning approaches are being increasingly applied for medical computer-aided diagnosis (CAD). However, these methods generally target only specific image-processing tasks, such as lesion segmentation or benign state prediction. For the breast cancer screening task, single feature extraction models are generally used, which directly extract only those potential features from the input mammogram that are relevant to the target task. This can lead to the neglect of other important morphological features of the lesion as well as other auxiliary information from the internal breast tissue. To obtain more comprehensive and objective diagnostic results, in this study, we developed a multi-task fusion model that combines multiple specific tasks for CAD of mammograms. METHODS We first trained a set of separate, task-specific models, including a density classification model, a mass segmentation model, and a lesion benignity-malignancy classification model, and then developed a multi-task fusion model that incorporates all of the mammographic features from these different tasks to yield comprehensive and refined prediction results for breast cancer diagnosis. RESULTS The experimental results showed that our proposed multi-task fusion model outperformed other related state-of-the-art models on both publicly available breast cancer screening datasets, CBIS-DDSM and INbreast, achieving competitive screening performance with area-under-the-curve scores of 0.92 and 0.95, respectively. CONCLUSIONS Our model not only allows an overall assessment of lesion types in mammography but also provides intermediate results related to radiological features and potential cancer risk factors, indicating its potential to offer comprehensive workflow support to radiologists.
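The area-under-the-curve scores reported throughout these abstracts are the usual ROC AUC, which equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (the Mann-Whitney statistic). A dependency-free sketch with hypothetical model outputs:

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs ranked correctly (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical malignancy scores for four screening cases (labels 1 = cancer).
auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # -> 0.75
```

In practice a library routine (e.g. scikit-learn's `roc_auc_score`) would be used, but the pairwise definition above is what the reported 0.92 and 0.95 values measure.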
Affiliation(s)
- Yutong Zhong
- School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun 130022, PR China
- Yan Piao
- School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun 130022, PR China.
- Baolin Tan
- Technology Co. LTD, Shenzhen 518000, PR China
- Jingxin Liu
- Department of Radiology, China-Japan Union Hospital, Jilin University, Changchun 130033, PR China
5
Lin CH, Wang HL, Yu LW, Chou PY, Chang HC, Chang CH, Chang PC. Deep learning for the identification of ridge deficiency around dental implants. Clin Implant Dent Relat Res 2024; 26:376-384. [PMID: 38151900] [DOI: 10.1111/cid.13301]
Abstract
OBJECTIVES This study aimed to use a deep learning (DL) approach for the automatic identification of ridge deficiency around dental implants based on an image slice from cone-beam computed tomography (CBCT). MATERIALS AND METHODS Single slices crossing the central long-axis of 630 mandibular and 845 maxillary virtually placed implants (4-5 mm diameter, 10 mm length) in 412 patients were used. The ridges were classified based on the intraoral bone-implant support and sinus floor location. The slices were either preprocessed by alveolar ridge homogenizing prior to DL (preprocessed) or left unpreprocessed. A convolutional neural network with ResNet-50 architecture was employed for DL. RESULTS The model achieved an accuracy of >98.5% on the unpreprocessed image slices, superior to the accuracy observed on the preprocessed slices. On the mandible, model accuracy was 98.91 ± 1.45%, and the F1 score, a measure of a model's accuracy in binary classification tasks, was lowest (97.30%) on the ridge with a combined horizontal-vertical defect. On the maxilla, model accuracy was 98.82 ± 1.11%, and the ridge presenting an implant collar-sinus floor distance of 5-10 mm with a dehiscence defect had the lowest F1 score (95.86%). To achieve >90% model accuracy, ≥441 mandibular slices or ≥592 maxillary slices were required. CONCLUSIONS The ridge deficiency around dental implants can be identified using DL from CBCT image slices without the need for preprocessed homogenization. The model will be further strengthened by implementing more clinical expertise in dental implant treatment planning and incorporating multiple slices to classify 3-dimensional implant-ridge relationships.
Affiliation(s)
- Cheng-Hung Lin
- Department of Electrical Engineering, College of Technology and Engineering, National Taiwan Normal University, Taipei, Taiwan
- Hom-Lay Wang
- Department of Periodontics and Oral Medicine, School of Dentistry, University of Michigan, Ann Arbor, Michigan, USA
- Li-Wen Yu
- Graduate Institute of Clinical Dentistry, School of Dentistry, College of Medicine, National Taiwan University, Taipei, Taiwan
- Division of Periodontics, Department of Dentistry, National Taiwan University Hospital, Taipei, Taiwan
- Po-Yung Chou
- Department of Electrical Engineering, College of Technology and Engineering, National Taiwan Normal University, Taipei, Taiwan
- Hao-Chieh Chang
- Graduate Institute of Clinical Dentistry, School of Dentistry, College of Medicine, National Taiwan University, Taipei, Taiwan
- Chin-Hao Chang
- Department of Medical Research, National Taiwan University Hospital, Taipei, Taiwan
- Po-Chun Chang
- Graduate Institute of Clinical Dentistry, School of Dentistry, College of Medicine, National Taiwan University, Taipei, Taiwan
- Division of Periodontics, Department of Dentistry, National Taiwan University Hospital, Taipei, Taiwan
- School of Dentistry, College of Dental Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan
6
Oza P, Oza U, Oza R, Sharma P, Patel S, Kumar P, Gohel B. Digital mammography dataset for breast cancer diagnosis research (DMID) with breast mass segmentation analysis. Biomed Eng Lett 2024; 14:317-330. [PMID: 38374902] [PMCID: PMC10874363] [DOI: 10.1007/s13534-023-00339-y]
Abstract
Purpose: In the last two decades, computer-aided detection and diagnosis (CAD) systems have been created to help radiologists discover and diagnose lesions observed on breast imaging tests. These systems can serve as a second-opinion tool for the radiologist. However, developing algorithms for identifying and diagnosing breast lesions relies heavily on mammographic datasets. Many existing databases do not meet all the needs of research and study, such as mammographic masks, radiology reports, breast composition, etc. This paper aims to introduce and describe a new mammographic database. Methods: The proposed dataset comprises mammograms with several lesion types, such as masses, calcifications, architectural distortions, and asymmetries. In addition, a radiologist report is provided for each mammogram, describing details of the breast such as breast density, a description of any abnormality present, and the condition of the skin, nipple, and pectoral muscles. Results: We present the results of a commonly used segmentation framework trained on our proposed dataset. We used the information regarding the class of abnormalities (benign or malignant) and breast tissue density provided with each mammogram to analyze the segmentation model's performance with respect to these parameters. Conclusion: The presented dataset provides diverse mammogram images for developing and training models for breast cancer diagnosis applications.
Affiliation(s)
- Urvi Oza
- Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, India
- Rajiv Oza
- Rad Imaging, X-Ray and Sonography Clinic, Ahmedabad, India
- Paawan Sharma
- Pandit Deendayal Energy University, Gandhinagar, India
- Samir Patel
- Pandit Deendayal Energy University, Gandhinagar, India
- Bakul Gohel
- Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, India
7
Wang L. Mammography with deep learning for breast cancer detection. Front Oncol 2024; 14:1281922. [PMID: 38410114] [PMCID: PMC10894909] [DOI: 10.3389/fonc.2024.1281922]
Abstract
X-ray mammography is currently considered the gold standard method for breast cancer screening; however, it has limitations in terms of sensitivity and specificity. With the rapid advancements in deep learning techniques, it is possible to customize mammography for each patient, providing more accurate information for risk assessment, prognosis, and treatment planning. This paper aims to study the recent achievements of deep learning-based mammography for breast cancer detection and classification. This review highlights the potential of deep learning-assisted X-ray mammography in improving the accuracy of breast cancer screening. While the potential benefits are clear, it is essential to address the challenges associated with implementing this technology in clinical settings. Future research should focus on refining deep learning algorithms, ensuring data privacy, improving model interpretability, and establishing generalizability to successfully integrate deep learning-assisted mammography into routine breast cancer screening programs. It is hoped that the research findings will assist investigators, engineers, and clinicians in developing more effective breast imaging tools that provide accurate diagnosis, sensitivity, and specificity for breast cancer.
Affiliation(s)
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen, China
8
Cobanaj M, Corti C, Dee EC, McCullum L, Boldrini L, Schlam I, Tolaney SM, Celi LA, Curigliano G, Criscitiello C. Advancing equitable and personalized cancer care: Novel applications and priorities of artificial intelligence for fairness and inclusivity in the patient care workflow. Eur J Cancer 2024; 198:113504. [PMID: 38141549] [DOI: 10.1016/j.ejca.2023.113504]
Abstract
Patient care workflows are highly multimodal and intertwined: the intersection of data outputs provided from different disciplines and in different formats remains one of the main challenges of modern oncology. Artificial Intelligence (AI) has the potential to revolutionize the current clinical practice of oncology owing to advancements in digitalization, database expansion, computational technologies, and algorithmic innovations that facilitate discernment of complex relationships in multimodal data. Within oncology, radiation therapy (RT) represents an increasingly complex working procedure, involving many labor-intensive and operator-dependent tasks. In this context, AI has gained momentum as a powerful tool to standardize treatment performance and reduce inter-observer variability in a time-efficient manner. This review explores the hurdles associated with the development, implementation, and maintenance of AI platforms and highlights current measures in place to address them. In examining AI's role in oncology workflows, we underscore that a thorough and critical consideration of these challenges is the only way to ensure equitable and unbiased care delivery, ultimately serving patients' survival and quality of life.
Affiliation(s)
- Marisa Cobanaj
- National Center for Radiation Research in Oncology, OncoRay, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany
- Chiara Corti
- Breast Oncology Program, Dana-Farber Brigham Cancer Center, Boston, MA, USA; Harvard Medical School, Boston, MA, USA; Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology, IRCCS, Milan, Italy; Department of Oncology and Hematology-Oncology (DIPO), University of Milan, Milan, Italy.
- Edward C Dee
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Lucas McCullum
- Department of Radiation Oncology, MD Anderson Cancer Center, Houston, TX, USA
- Laura Boldrini
- Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology, IRCCS, Milan, Italy; Department of Oncology and Hematology-Oncology (DIPO), University of Milan, Milan, Italy
- Ilana Schlam
- Department of Hematology and Oncology, Tufts Medical Center, Boston, MA, USA; Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Sara M Tolaney
- Breast Oncology Program, Dana-Farber Brigham Cancer Center, Boston, MA, USA; Harvard Medical School, Boston, MA, USA; Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Leo A Celi
- Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA; Laboratory for Computational Physiology, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Giuseppe Curigliano
- Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology, IRCCS, Milan, Italy; Department of Oncology and Hematology-Oncology (DIPO), University of Milan, Milan, Italy
- Carmen Criscitiello
- Division of New Drugs and Early Drug Development for Innovative Therapies, European Institute of Oncology, IRCCS, Milan, Italy; Department of Oncology and Hematology-Oncology (DIPO), University of Milan, Milan, Italy
9
Jiang Z, Gandomkar Z, Trieu PDY, Tavakoli Taba S, Barron ML, Obeidy P, Lewis SJ. Evaluating Recalibrating AI Models for Breast Cancer Diagnosis in a New Context: Insights from Transfer Learning, Image Enhancement and High-Quality Training Data Integration. Cancers (Basel) 2024; 16:322. [PMID: 38254813] [PMCID: PMC10814142] [DOI: 10.3390/cancers16020322]
Abstract
This paper investigates the adaptability of four state-of-the-art artificial intelligence (AI) models to the Australian mammographic context through transfer learning, explores the impact of image enhancement on model performance and analyses the relationship between AI outputs and histopathological features for clinical relevance and accuracy assessment. A total of 1712 screening mammograms (n = 856 cancer cases and n = 856 matched normal cases) were used in this study. The 856 cases with cancer lesions were annotated by two expert radiologists, and the level of concordance between their annotations was used to establish two sets: a 'high-concordance subset' with 99% agreement on cancer location and an 'entire dataset' with all cases included. The area under the receiver operating characteristic curve (AUC) was used to evaluate the performance of the Globally-aware Multiple Instance Classifier (GMIC), Global-Local Activation Maps (GLAM), I&H and End2End AI models, both in the pretrained and transfer learning modes, with and without applying the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm. The four AI models, with and without transfer learning, performed better on the high-concordance subset than on the entire dataset. Applying the CLAHE algorithm to mammograms improved the performance of the AI models. In the high-concordance subset with transfer learning and the CLAHE algorithm applied, the AUC of the GMIC model was highest (0.912), followed by the GLAM model (0.909), I&H (0.893) and End2End (0.875). There were significant differences (p < 0.05) in the performances of the four AI models between the high-concordance subset and the entire dataset. The AI models also demonstrated significant differences in malignancy probability across tumour size categories in mammograms. The performance of the AI models was affected by several factors, such as concordance classification, image enhancement and transfer learning. For mammograms with strong concordance with radiologists' annotations, applying image enhancement and transfer learning could enhance the accuracy of AI models.
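CLAHE, the enhancement step evaluated above, differs from plain histogram equalisation mainly by clipping the histogram (to limit contrast amplification) and operating on local tiles. A simplified global sketch of the clipping-plus-equalisation idea on an 8-bit image (in practice one would use a full implementation such as OpenCV's `createCLAHE`, which adds tiling and bilinear interpolation between tile mappings; the toy image below is hypothetical):

```python
import numpy as np

def clipped_equalize(img, clip_limit=0.03):
    """Histogram equalisation with a clipped histogram: the core idea of
    CLAHE, applied globally here (real CLAHE also tiles the image and
    interpolates between per-tile mappings). `img` is a uint8 array."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    hist /= hist.sum()
    # Clip the histogram and redistribute the excess mass uniformly.
    excess = np.clip(hist - clip_limit, 0, None).sum()
    hist = np.minimum(hist, clip_limit) + excess / 256
    cdf = np.cumsum(hist)
    lut = np.round(255 * cdf).astype(np.uint8)  # old intensity -> new intensity
    return lut[img]

# Toy low-contrast image: intensities concentrated in a narrow band (100-115).
img = (np.arange(64, dtype=np.uint8).reshape(8, 8) // 4) + 100
out = clipped_equalize(img)  # dynamic range is stretched well beyond 15 levels
```

The clip limit controls how aggressively contrast is boosted; without it, equalisation can over-amplify noise in near-uniform breast tissue regions.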
Affiliation(s)
- Zhengqiang Jiang
- Discipline of Medical Imaging Science, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney 2006, Australia
- Ziba Gandomkar
- Discipline of Medical Imaging Science, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney 2006, Australia
- Phuong Dung (Yun) Trieu
- Discipline of Medical Imaging Science, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney 2006, Australia
- Seyedamir Tavakoli Taba
- Discipline of Medical Imaging Science, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney 2006, Australia
- Melissa L. Barron
- Discipline of Medical Imaging Science, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney 2006, Australia
- Peyman Obeidy
- Discipline of Medical Imaging Science, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney 2006, Australia
- Sarah J. Lewis
- Discipline of Medical Imaging Science, School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney 2006, Australia
- School of Health Sciences, Western Sydney University, Campbelltown 2560, Australia
10
Kayalvizhi R, Heartlin Maria H, Malarvizhi S, Venkatraman R, Patil S. Hardware deployment of deep learning model for classification of breast carcinoma from digital mammogram images. Med Biol Eng Comput 2023; 61:2843-2857. [PMID: 37495885] [DOI: 10.1007/s11517-023-02883-2]
Abstract
Cancer is an illness that instils fear in many individuals throughout the world due to its lethal nature. However, in most situations, cancer can be cured if detected early and treated properly. Computer-aided diagnosis (CAD) is gaining traction because it can be used as an initial screening test for many illnesses, including cancer. Deep learning (DL) is an artificial intelligence (AI) powered approach to CAD that attempts to mimic the cognitive process of the human brain. Various DL algorithms have been applied to breast cancer diagnosis and have obtained adequate accuracy owing to DL's high feature-learning capability. However, when it comes to real-time application, deep neural networks (NNs) have high computational complexity in terms of power, speed, and resource usage. With this in mind, this work proposes a miniaturised NN that reduces the number of parameters and the computational complexity for hardware deployment. The quantised NN is then accelerated using field-programmable gate arrays (FPGAs) to increase detection speed and minimise power consumption while guaranteeing high accuracy, thus providing a new avenue for assisting radiologists in breast cancer diagnosis using digital mammograms. When evaluated on benchmark datasets such as DDSM, MIAS, and INbreast, the suggested method achieves high classification rates; the proposed model achieved an accuracy of 99.38% on the combined dataset.
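The quantisation mentioned above maps floating-point weights to low-precision integers so they fit FPGA arithmetic. A minimal sketch of symmetric int8 post-training quantisation (an illustration of the general technique, not the paper's specific FPGA pipeline; the toy weight vector is hypothetical):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric int8 quantisation: map float weights to [-127, 127]
    with a single per-tensor scale, as commonly done before hardware
    deployment. Returns the integer weights and the scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer representation."""
    return q.astype(np.float32) * scale

# Toy layer weights: the round-trip error is bounded by half a quantisation step.
w = np.array([-1.0, -0.25, 0.0, 0.5, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
```

Storing `q` (1 byte per weight) plus one scale per tensor cuts memory traffic roughly fourfold versus float32, which is the main lever for the power and speed gains the abstract reports.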
Collapse
Affiliation(s)
- Kayalvizhi R
- Department of Electronics and Communication, SRM Institute of Science and Technology, Kattankulathur, Chennai, 603203, India
- Heartlin Maria H
- Department of Electronics and Communication, SRM Institute of Science and Technology, Kattankulathur, Chennai, 603203, India
- Malarvizhi S
- Department of Electronics and Communication, SRM Institute of Science and Technology, Kattankulathur, Chennai, 603203, India
- Revathi Venkatraman
- Department of Computer Science and Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai, 603203, India
- Shantanu Patil
- Department of Translational Medicine and Research, SRM Medical College Hospital and Research Centre, Kattankulathur, Chennai, 603203, India
11
Balaji K. Image Augmentation based on Variational Autoencoder for Breast Tumor Segmentation. Acad Radiol 2023; 30 Suppl 2:S172-S183. [PMID: 36804294 DOI: 10.1016/j.acra.2022.12.035] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2022] [Revised: 12/18/2022] [Accepted: 12/21/2022] [Indexed: 02/18/2023]
Abstract
RATIONALE AND OBJECTIVES Breast tumor segmentation based on Dynamic Contrast-Enhanced Magnetic Resonance Imaging is a significant step for quantitative radiomics analysis of breast cancer. Manual tumor annotation is a time-consuming process that requires medical expertise and is subjective, prone to error, and subject to inter-observer variability. A number of recent studies have demonstrated the capability of deep learning models in image segmentation. MATERIALS AND METHODS Here, we describe a 3D Connected-UNets for tumor segmentation from 3D Magnetic Resonance Images based on an encoder-decoder architecture. Due to a restricted training dataset size, a variational auto-encoder branch is added to reconstruct the input image itself, in order to regularize the shared decoder and impose additional constraints on its layers. Based on the initial segmentation of the Connected-UNets, a fully connected 3D conditional random field is used to enhance the segmentation results by exploiting 2D neighborhood relations and 3D volume statistics. Moreover, 3D connected-component analysis is used to retain only large components and reduce segmentation noise. RESULTS The proposed method has been assessed on two publicly available datasets, namely INbreast and the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM). The proposed model has also been evaluated using a private dataset. CONCLUSION The experimental results show that the proposed model outperforms the state-of-the-art methods for breast tumor segmentation.
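The connected-component noise-suppression step in the abstract can be sketched as follows. This is a simplified 2D, 4-connectivity version for illustration only, not the authors' 3D implementation:

```python
# Simplified 2D sketch of connected-component filtering: keep only
# components of a binary mask whose size reaches `min_size`, discarding
# small speckles that are likely segmentation noise.
from collections import deque

def filter_small_components(mask, min_size):
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Flood-fill one 4-connected component.
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) >= min_size:
                    for y, x in comp:
                        out[y][x] = 1
    return out

mask = [[1, 1, 0, 0],
        [1, 1, 0, 1],   # the lone pixel at (1, 3) is noise
        [0, 0, 0, 0]]
cleaned = filter_small_components(mask, min_size=2)
assert cleaned[1][3] == 0 and cleaned[0][0] == 1
```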
Affiliation(s)
- K Balaji
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, Tamilnadu, 632014 India.
12
Ribeiro RF, Torres HR, Oliveira B, Morais P, Vilaca JL. Comparative analysis of deep learning methods for lesion detection on full screening mammography. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38082575 DOI: 10.1109/embc40787.2023.10340501] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
Breast cancer is the most prevalent type of cancer in women. Although mammography is the main imaging modality used for diagnosis, robust lesion detection in mammography images is a challenging task due to the poor contrast of lesion boundaries and the widely diverse sizes and shapes of lesions. Deep learning techniques have been explored to facilitate automatic diagnosis and have produced outstanding outcomes when applied to different medical challenges. This study provides a benchmark for breast lesion detection in mammography images. Five state-of-the-art methods were evaluated on 1592 mammograms from a publicly available dataset (CBIS-DDSM) and compared on the following six metrics: i) mean Average Precision (mAP); ii) intersection over union; iii) precision; iv) recall; v) True Positive Rate (TPR); and vi) false positives per image. The CenterNet, YOLOv5, Faster R-CNN, EfficientDet, and RetinaNet architectures were trained with a combination of the L1 and L2 localization losses. Although all evaluated networks achieved mAP ratings greater than 60%, two stood out. Overall, the results demonstrate the efficiency of the CenterNet model with Hourglass-104 as its backbone and of YOLOv5, which achieved mAP scores of 70.71% and 69.36%, and TPR scores of 96.10% and 92.19%, respectively, outperforming the other state-of-the-art models. Clinical Relevance - This study demonstrates the effectiveness of deep learning algorithms for breast lesion detection in mammography, potentially improving the accuracy and efficiency of breast cancer diagnosis.
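The intersection-over-union criterion underlying these detection metrics can be computed as below; this is a generic sketch of the standard definition, not code from any of the benchmarked frameworks:

```python
# Intersection over union (IoU) between two axis-aligned boxes given as
# (x1, y1, x2, y2); this is the overlap criterion behind detection
# metrics such as mAP, TPR, and false positives per image.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

# A predicted box overlapping half of a ground-truth box yields IoU 1/3:
assert abs(iou((0, 0, 10, 10), (5, 0, 15, 10)) - 1 / 3) < 1e-12
```

A prediction is typically counted as a true positive when its IoU with a ground-truth lesion exceeds a threshold such as 0.5.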
13
Ozcan BB, Patel BK, Banerjee I, Dogan BE. Artificial Intelligence in Breast Imaging: Challenges of Integration Into Clinical Practice. J Breast Imaging 2023; 5:248-257. [PMID: 38416888 DOI: 10.1093/jbi/wbad007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2022] [Indexed: 03/01/2024]
Abstract
Artificial intelligence (AI) in breast imaging is a rapidly developing field with promising results. Despite the large number of recent publications in this field, unanswered questions have led to limited implementation of AI into daily clinical practice for breast radiologists. This paper provides an overview of the key limitations of AI in breast imaging including, but not limited to, limited numbers of FDA-approved algorithms and annotated data sets with histologic ground truth; concerns surrounding data privacy, security, algorithm transparency, and bias; and ethical issues. Ultimately, the successful implementation of AI into clinical care will require thoughtful action to address these challenges, transparency, and sharing of AI implementation workflows, limitations, and performance metrics within the breast imaging community and other end-users.
Affiliation(s)
- B Bersu Ozcan
- The University of Texas Southwestern Medical Center, Department of Radiology, Dallas, TX, USA
- Imon Banerjee
- Mayo Clinic, Department of Radiology, Scottsdale, AZ, USA
- Basak E Dogan
- The University of Texas Southwestern Medical Center, Department of Radiology, Dallas, TX, USA
14
Kulkarni S, Rabidas R. Fully convolutional network for automated detection and diagnosis of mammographic masses. Multimed Tools Appl 2023:1-22. [PMID: 37362703 PMCID: PMC10169189 DOI: 10.1007/s11042-023-14757-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Revised: 01/19/2022] [Accepted: 02/05/2023] [Indexed: 06/28/2023]
Abstract
Breast cancer, though rare in men, is very frequent in women and has a high mortality rate, which can be reduced if the disease is detected and diagnosed at an early stage. Thus, in this paper, a deep learning architecture based on U-Net is proposed for the detection of breast masses and their characterization as benign or malignant. The detection performance of the proposed architecture is evaluated on two benchmark datasets, INbreast and DDSM: it achieves a true positive rate of 99.64% at 0.25 false positives per image (FPs/I) on INbreast, and 97.36% at 0.38 FPs/I on DDSM. For mass characterization, an accuracy of 97.39% with an AUC of 0.97 is obtained on INbreast, and 96.81% with an AUC of 0.96 on DDSM. The measured results are further compared with state-of-the-art techniques, over which the introduced scheme takes an edge.
Affiliation(s)
- Sujata Kulkarni
- Department of Electronics & Communication Engineering, Assam University, Silchar, 788010 Assam India
- Rinku Rabidas
- Department of Electronics & Communication Engineering, Assam University, Silchar, 788010 Assam India
15
Loizidou K, Elia R, Pitris C. Computer-aided breast cancer detection and classification in mammography: A comprehensive review. Comput Biol Med 2023; 153:106554. [PMID: 36646021 DOI: 10.1016/j.compbiomed.2023.106554] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2022] [Revised: 12/13/2022] [Accepted: 01/11/2023] [Indexed: 01/15/2023]
Abstract
Cancer is the second leading cause of mortality worldwide and has been identified as a perilous disease. Breast cancer accounts for ∼20% of all new cancer cases worldwide, making it a major cause of morbidity and mortality. Mammography is an effective screening tool for the early detection and management of breast cancer. However, the identification and interpretation of breast lesions are challenging even for expert radiologists. For that reason, several Computer-Aided Diagnosis (CAD) systems are being developed to assist radiologists in accurately detecting and/or classifying breast cancer. This review examines the recent literature on the automatic detection and/or classification of breast cancer in mammograms, using both conventional feature-based machine learning and deep learning algorithms. The review begins with a comparison of algorithms developed specifically for the detection and/or classification of two types of breast abnormalities, micro-calcifications and masses, followed by the use of sequential mammograms to improve algorithm performance. The available Food and Drug Administration (FDA) approved CAD systems for triage and diagnosis of breast cancer in mammograms are subsequently presented. Finally, a description of the open-access mammography datasets is provided and potential opportunities for future work in this field are highlighted. The comprehensive review provided here can serve both as a thorough introduction to the field and as a source of indicative directions for future applications.
Affiliation(s)
- Kosmia Loizidou
- KIOS Research and Innovation Center of Excellence, Department of Electrical and Computer Engineering, University of Cyprus, Nicosia, Cyprus.
- Rafaella Elia
- KIOS Research and Innovation Center of Excellence, Department of Electrical and Computer Engineering, University of Cyprus, Nicosia, Cyprus.
- Costas Pitris
- KIOS Research and Innovation Center of Excellence, Department of Electrical and Computer Engineering, University of Cyprus, Nicosia, Cyprus.
16
Alhares H, Tanha J, Balafar MA. AMTLDC: a new adversarial multi-source transfer learning framework to diagnosis of COVID-19. Evol Syst 2023; 14:1-15. [PMID: 38625255 PMCID: PMC9838404 DOI: 10.1007/s12530-023-09484-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2022] [Accepted: 01/02/2023] [Indexed: 01/13/2023]
Abstract
In recent years, deep learning techniques have been widely used to diagnose diseases. However, in some tasks, such as the diagnosis of COVID-19, insufficient data prevent the model from being properly trained and, as a result, its generalizability decreases. For example, if the model is trained on one CT scan dataset and tested on another, it produces near-random predictions. To address this, data from several different sources can be combined using transfer learning, taking into account the intrinsic and natural differences between existing datasets obtained with different medical imaging tools and approaches. In this paper, to improve the transfer learning technique and achieve better generalizability across multiple data sources, we propose a multi-source adversarial transfer learning model, namely AMTLDC. In AMTLDC, representations are learned that are similar among the sources; in other words, the extracted representations are general and not dependent on a particular dataset domain. We apply AMTLDC to predict COVID-19 from medical images using a convolutional neural network. We show that accuracy can be improved using the AMTLDC framework, surpassing the results of current successful transfer learning approaches. In particular, we show that AMTLDC works well when using different dataset domains or when there is insufficient data.
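Adversarial transfer learning frameworks of this kind commonly place a gradient-reversal mechanism between the feature extractor and a domain classifier so that learned features become domain-invariant. The sketch below illustrates that general idea in isolation; it is an assumption about the family of techniques, not AMTLDC's actual code:

```python
# Sketch of the gradient-reversal idea used in adversarial transfer
# learning: the layer is the identity on the forward pass, but flips
# (and scales) the gradient on the backward pass, so the feature
# extractor is pushed to confuse the domain classifier.
def grl_forward(x):
    return x  # identity: the domain classifier sees features unchanged

def grl_backward(grad, lam=1.0):
    return -lam * grad  # reversed gradient reaches the feature extractor

features = 0.7
assert grl_forward(features) == 0.7
# A gradient of +0.4 from the domain classifier arrives at the
# feature extractor as -0.2 when lam = 0.5:
assert grl_backward(0.4, lam=0.5) == -0.2
```

The scale `lam` is usually ramped up during training so the adversarial signal does not dominate early optimisation.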
Affiliation(s)
- Hadi Alhares
- Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, 29th Bahman Blvd, Tabriz, 5166616471 Iran
- Jafar Tanha
- Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, 29th Bahman Blvd, Tabriz, 5166616471 Iran
- Mohammad Ali Balafar
- Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, 29th Bahman Blvd, Tabriz, 5166616471 Iran
17
Das HS, Das A, Neog A, Mallik S, Bora K, Zhao Z. Breast cancer detection: Shallow convolutional neural network against deep convolutional neural networks based approach. Front Genet 2023; 13:1097207. [PMID: 36685963 PMCID: PMC9846574 DOI: 10.3389/fgene.2022.1097207] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2022] [Accepted: 12/15/2022] [Indexed: 01/06/2023] Open
Abstract
Introduction: Of all the cancers that afflict women, breast cancer (BC) has the second-highest mortality rate and is a major contributor to the overall death rate; it is also the most common cancer affecting women globally. There are two types of breast tumors: benign (less harmful and unlikely to progress to breast cancer) and malignant (very dangerous tumors of aberrant cells that can result in cancer). Methods: To find breast abnormalities such as masses and micro-calcifications, competent and trained radiologists routinely examine mammographic images. This study focuses on computer-aided diagnosis to help radiologists make more precise diagnoses of breast cancer, and aims to compare the performance of the proposed shallow convolutional neural network architecture, under different specifications, against pre-trained deep convolutional neural network architectures on mammography images. In the first approach, mammogram images are pre-processed and then fed to three shallow convolutional neural networks with different representations to carry out automatic identification of BC. In the second approach, transfer learning via fine-tuning is used to feed the same collection of images into the pre-trained convolutional neural networks VGG19, ResNet50, MobileNet-v2, Inception-v3, Xception, and Inception-ResNet-v2. Results: In our experiments on two datasets, the accuracies on CBIS-DDSM are 80.4% and 89.2%, and on INbreast 87.8% and 95.1%, respectively. Discussion: It can be concluded from the experimental findings that the deep network-based approach with precise fine-tuning outperforms all other state-of-the-art techniques on both datasets.
Affiliation(s)
- Himanish Shekhar Das
- Department of Computer Science and Information Technology, Cotton University, Guwahati, India
- Akalpita Das
- Department of Computer Science and Engineering, GIMT Guwahati, Guwahati, India
- Anupal Neog
- Department of AI and Machine Learning COE, IQVIA, Bengaluru, Karnataka, India
- Saurav Mallik
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Department of Environmental Health, Harvard T. H. Chan School of Public Health, Boston, MA, United States
- Department of Pharmacology and Toxicology, University of Arizona, Tucson, AZ, United States
- Kangkana Bora
- Department of Computer Science and Information Technology, Cotton University, Guwahati, India
- Zhongming Zhao
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Department of Pathology and Laboratory Medicine, McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX, United States
18
Efficient Breast Cancer Diagnosis from Complex Mammographic Images Using Deep Convolutional Neural Network. Comput Intell Neurosci 2023; 2023:7717712. [PMID: 36909966 PMCID: PMC9998154 DOI: 10.1155/2023/7717712] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/17/2022] [Revised: 02/15/2023] [Accepted: 02/23/2023] [Indexed: 03/06/2023]
Abstract
Medical image analysis places a significant focus on breast cancer, which poses a significant threat to women's health and contributes to many fatalities. Early and precise diagnosis of breast cancer through digital mammograms can significantly improve the accuracy of disease detection. Computer-aided diagnosis (CAD) systems must analyze the medical imagery and perform detection, segmentation, and classification to assist radiologists in accurately detecting breast lesions. However, detecting early-stage cancer in mammography is certainly difficult. Deep convolutional neural networks have demonstrated exceptional results and are considered highly effective tools in the field. This study proposes a computational framework for diagnosing breast cancer using a ResNet-50 convolutional neural network to classify mammogram images. To train on and classify the INbreast dataset into benign or malignant categories, the framework utilizes transfer learning from ResNet-50 pretrained on ImageNet. The results revealed that the proposed framework achieved an outstanding classification accuracy of 93%, surpassing other models trained on the same dataset. This approach facilitates early diagnosis and classification of malignant and benign breast cancer, potentially saving lives and resources. These outcomes highlight that deep convolutional neural network algorithms can be trained to achieve highly accurate results on various mammograms, along with the capacity to enhance medical tools by reducing the error rate in screening mammograms.
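Transfer learning with a frozen backbone and a small trainable head, as this framework uses, can be sketched in miniature. The "backbone" below is a fixed hand-written projection standing in for pretrained ResNet-50 features, and the data and labels are toy values chosen for illustration:

```python
# Transfer-learning sketch: a frozen "backbone" maps inputs to features,
# and only a small linear head is trained on the target task.
def backbone(x):
    # Frozen feature extractor: never updated during training.
    return [x[0] + x[1], x[0] - x[1]]

def train_head(data, labels, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0  # only these parameters are learned
    for _ in range(epochs):
        for x, y in zip(data, labels):
            f = backbone(x)
            pred = 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0
            err = y - pred  # perceptron update on the head only
            w = [wi + lr * err * fi for wi, fi in zip(w, f)]
            b += lr * err
    return w, b

# Toy "benign" (label 0) vs "malignant" (label 1) examples:
data = [(0.0, 0.0), (0.1, 0.1), (1.0, 1.0), (0.9, 1.1)]
labels = [0, 0, 1, 1]
w, b = train_head(data, labels)
preds = [1 if w[0] * backbone(x)[0] + w[1] * backbone(x)[1] + b > 0 else 0
         for x in data]
assert preds == labels
```

Freezing the backbone is what makes fine-tuning feasible on small datasets such as INbreast: only the head's few parameters must be estimated from the target data.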
19
Classification of Multiclass Histopathological Breast Images Using Residual Deep Learning. Comput Intell Neurosci 2022; 2022:9086060. [PMID: 36262625 PMCID: PMC9576372 DOI: 10.1155/2022/9086060] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Revised: 07/21/2022] [Accepted: 08/29/2022] [Indexed: 11/20/2022]
Abstract
Pathologists need a lot of clinical experience and time to perform histopathological investigations. AI may play a significant role in supporting pathologists, resulting in more accurate and efficient histopathological diagnoses. Breast cancer is one of the most commonly diagnosed cancers in women worldwide, and it can be detected and diagnosed using methods such as histopathological imaging. Since the breast is made up of various tissues, there is a wide range of textural intensity, making abnormality detection difficult. As a result, there is an urgent need for improved computer-aided diagnosis (CAD) systems that can serve as a second opinion for radiologists when they use medical images. To overcome the need for a large number of labeled images when training deep learning models for breast cancer histopathology image classification, a self-training learning method employing a deep neural network with residual learning is proposed. The suggested model is built from scratch and trained.
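The self-training loop (fit on the labelled subset, pseudo-label confident unlabelled samples, refit) can be sketched with a toy 1-D nearest-centroid classifier. This is a generic illustration of the idea, not the paper's residual network:

```python
# Self-training sketch: confident predictions on unlabelled points are
# promoted to pseudo-labels and the classifier is refit on the grown set.
def centroid(points):
    return sum(points) / len(points)

def self_train(labelled, unlabelled, threshold=1.0, rounds=2):
    """labelled: list of (x, y) with y in {0, 1}; unlabelled: list of x."""
    for _ in range(rounds):
        c0 = centroid([x for x, y in labelled if y == 0])
        c1 = centroid([x for x, y in labelled if y == 1])
        still = []
        for x in unlabelled:
            d0, d1 = abs(x - c0), abs(x - c1)
            # Pseudo-label only confident points (margin above threshold).
            if abs(d0 - d1) >= threshold:
                labelled = labelled + [(x, 0 if d0 < d1 else 1)]
            else:
                still.append(x)
        unlabelled = still
    return labelled

labelled = [(0.0, 0), (10.0, 1)]
pseudo = self_train(labelled, unlabelled=[1.0, 9.0, 5.2])
# The two confident points gained pseudo-labels; the ambiguous 5.2 did not.
assert (1.0, 0) in pseudo and (9.0, 1) in pseudo
assert all(x != 5.2 for x, _ in pseudo)
```

The margin threshold plays the role of the confidence cut-off that keeps noisy pseudo-labels out of the training set.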
20
Classification of Breast Cancer Histopathological Images Using DenseNet and Transfer Learning. Comput Intell Neurosci 2022; 2022:8904768. [PMID: 36262621 PMCID: PMC9576400 DOI: 10.1155/2022/8904768] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Revised: 06/19/2022] [Accepted: 07/30/2022] [Indexed: 11/22/2022]
Abstract
Breast cancer is one of the most common invasive cancers in women. Analyzing breast cancer is nontrivial and may lead to disagreements among experts. Although deep learning methods have achieved excellent performance in classification tasks, including breast cancer histopathological images, the existing state-of-the-art methods are computationally expensive and may overfit due to extracting features from in-distribution images. In this paper, our contribution is mainly twofold. First, we perform a short survey of deep-learning-based models for classifying histopathological images to investigate the most popular and best-performing training-testing ratios. Our findings reveal that the most popular training-testing ratio for histopathological image classification is 70%:30%, whereas the best performance (e.g., accuracy) is achieved with a training-testing ratio of 80%:20% on an identical dataset. Second, we propose a method named DenTnet to classify breast cancer histopathological images. DenTnet utilizes the principle of transfer learning, with DenseNet as a backbone model, to solve the problem of extracting features from the same distribution. The proposed DenTnet method is shown to be superior to a number of leading deep learning methods in terms of detection accuracy (up to 99.28% on the BreaKHis dataset with a training-testing ratio of 80%:20%), with good generalization ability and computational speed. The limitations of existing methods, including the requirement for heavy computation and the use of the same feature distribution, are thereby mitigated by DenTnet.
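The training-testing ratios compared in the survey are usually realised with a stratified split that preserves class balance. A minimal sketch follows, using toy labels and hypothetical class names:

```python
# Stratified 80:20 (or 70:30) split: shuffle each class separately and
# take the same fraction from each, so class balance is preserved.
import random

def stratified_split(labels, test_frac, seed=0):
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    train, test = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        cut = int(round(len(idxs) * test_frac))
        test.extend(idxs[:cut])
        train.extend(idxs[cut:])
    return sorted(train), sorted(test)

labels = ["benign"] * 60 + ["malignant"] * 40
train, test = stratified_split(labels, test_frac=0.2)  # the 80:20 setting
assert len(train) == 80 and len(test) == 20
# Class balance is preserved in the test set: 12 benign, 8 malignant.
assert sum(1 for i in test if labels[i] == "malignant") == 8
```

Changing `test_frac` to 0.3 reproduces the 70:30 setting the survey found most popular.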
21
Garrucho L, Kushibar K, Jouide S, Diaz O, Igual L, Lekadir K. Domain generalization in deep learning based mass detection in mammography: A large-scale multi-center study. Artif Intell Med 2022; 132:102386. [PMID: 36207090 DOI: 10.1016/j.artmed.2022.102386] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2022] [Revised: 08/07/2022] [Accepted: 08/19/2022] [Indexed: 11/02/2022]
Abstract
Computer-aided detection systems based on deep learning have shown great potential in breast cancer detection. However, the lack of domain generalization of artificial neural networks is an important obstacle to their deployment in changing clinical environments. In this study, we explored the domain generalization of deep learning methods for mass detection in digital mammography and analyzed in-depth the sources of domain shift in a large-scale multi-center setting. To this end, we compared the performance of eight state-of-the-art detection methods, including Transformer based models, trained in a single domain and tested in five unseen domains. Moreover, a single-source mass detection training pipeline was designed to improve the domain generalization without requiring images from the new domain. The results show that our workflow generalized better than state-of-the-art transfer learning based approaches in four out of five domains while reducing the domain shift caused by the different acquisition protocols and scanner manufacturers. Subsequently, an extensive analysis was performed to identify the covariate shifts with the greatest effects on detection performance, such as those due to differences in patient age, breast density, mass size, and mass malignancy. Ultimately, this comprehensive study provides key insights and best practices for future research on domain generalization in deep learning based breast cancer detection.
Affiliation(s)
- Lidia Garrucho
- Artificial Intelligence in Medicine Lab (BCN-AIM), Faculty of Mathematics and Computer Science, University of Barcelona, Gran Via de les Corts Catalanes 585, 08007 Barcelona, Spain
- Kaisar Kushibar
- Artificial Intelligence in Medicine Lab (BCN-AIM), Faculty of Mathematics and Computer Science, University of Barcelona, Gran Via de les Corts Catalanes 585, 08007 Barcelona, Spain
- Socayna Jouide
- Artificial Intelligence in Medicine Lab (BCN-AIM), Faculty of Mathematics and Computer Science, University of Barcelona, Gran Via de les Corts Catalanes 585, 08007 Barcelona, Spain
- Oliver Diaz
- Artificial Intelligence in Medicine Lab (BCN-AIM), Faculty of Mathematics and Computer Science, University of Barcelona, Gran Via de les Corts Catalanes 585, 08007 Barcelona, Spain
- Laura Igual
- Artificial Intelligence in Medicine Lab (BCN-AIM), Faculty of Mathematics and Computer Science, University of Barcelona, Gran Via de les Corts Catalanes 585, 08007 Barcelona, Spain
- Karim Lekadir
- Artificial Intelligence in Medicine Lab (BCN-AIM), Faculty of Mathematics and Computer Science, University of Barcelona, Gran Via de les Corts Catalanes 585, 08007 Barcelona, Spain
22
Liu Y, Zhang F, Chen C, Wang S, Wang Y, Yu Y. Act Like a Radiologist: Towards Reliable Multi-View Correspondence Reasoning for Mammogram Mass Detection. IEEE Trans Pattern Anal Mach Intell 2022; 44:5947-5961. [PMID: 34061740 DOI: 10.1109/tpami.2021.3085783] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Mammogram mass detection is crucial for diagnosing and preventing breast cancer in clinical practice. The complementary effect of multi-view mammogram images provides valuable information about the anatomical prior structure of the breast and is of great significance in digital mammography interpretation. However, unlike radiologists, who can use their natural reasoning ability to identify masses based on multiple mammographic views, existing object detection models lack multi-view reasoning; endowing them with this capability is vital for decision-making in clinical diagnosis but remains largely unexplored. In this paper, we propose an anatomy-aware graph convolutional network (AGN), which is tailored for mammogram mass detection and endows existing detection methods with multi-view reasoning ability. The proposed AGN consists of three steps. First, we introduce a bipartite graph convolutional network (BGN) to model the intrinsic geometric and semantic relations of ipsilateral views. Second, considering that the visual asymmetry of bilateral views is widely adopted in clinical practice to assist the diagnosis of breast lesions, we propose an inception graph convolutional network (IGN) to model the structural similarities of bilateral views. Finally, based on the constructed graphs, the multi-view information is propagated methodically through the nodes, which equips the features learned from the examined view with multi-view reasoning ability. Experiments on two standard benchmarks reveal that AGN significantly exceeds the state-of-the-art performance. Visualization results show that AGN provides interpretable visual cues for clinical diagnosis.
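The message-passing at the heart of such graph convolutional networks reduces to a propagation step of the form H' = ReLU(Â·H·W). The toy sketch below uses a row-normalized adjacency over two "view" nodes; it illustrates the generic GCN step, not the AGN architecture itself:

```python
# One simplified graph-convolution step: each node averages the features
# of its neighbours (self-loop included), applies a weight matrix, then ReLU.
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def gcn_step(adj, H, W):
    # Row-normalize the adjacency (self-loops already included in adj).
    A_hat = [[v / sum(row) for v in row] for row in adj]
    Z = matmul(matmul(A_hat, H), W)
    return [[max(0.0, v) for v in row] for row in Z]  # ReLU

# Two connected "view" nodes with 1-D features and an identity weight:
adj = [[1, 1],
       [1, 1]]
H = [[4.0], [0.0]]
W = [[1.0]]
out = gcn_step(adj, H, W)
# After one step each node holds the average of both views' features:
assert out == [[2.0], [2.0]]
```

Stacking such steps lets evidence seen in one mammographic view influence the features of the corresponding region in the other view.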
23
24
Malliori A, Pallikarakis N. Breast cancer detection using machine learning in digital mammography and breast tomosynthesis: A systematic review. Health Technol 2022. [DOI: 10.1007/s12553-022-00693-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
25
Baghdadi NA, Malki A, Magdy Balaha H, AbdulAzeem Y, Badawy M, Elhosseini M. Classification of breast cancer using a manta-ray foraging optimized transfer learning framework. PeerJ Comput Sci 2022; 8:e1054. [PMID: 36092017 PMCID: PMC9454783 DOI: 10.7717/peerj-cs.1054] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2022] [Accepted: 07/07/2022] [Indexed: 06/15/2023]
Abstract
Due to its high prevalence and wide dissemination, breast cancer is a particularly dangerous disease. Breast cancer survival chances can be improved by early detection and diagnosis. For medical image analysts, diagnosis is difficult, time-consuming, routine, and repetitive, and medical image analysis can be a useful aid in detecting the disease. Recently, artificial intelligence technology has been utilized to help radiologists identify breast cancer more rapidly and reliably. Convolutional neural networks (CNNs), among other technologies, are promising medical image recognition and classification tools. This study proposes a framework for automatic and reliable breast cancer classification based on histological and ultrasound data. The system is built on CNNs and employs transfer learning and metaheuristic optimization. The Manta Ray Foraging Optimization (MRFO) approach is deployed to improve the framework's adaptability. Using the Breast Cancer Dataset (two classes) and the Breast Ultrasound Dataset (three classes), eight modern pre-trained CNN architectures are examined for the transfer learning technique. The framework uses MRFO to improve the performance of the CNN architectures by optimizing their hyperparameters. Extensive experiments recorded performance metrics including accuracy, AUC, precision, F1-score, sensitivity, dice, recall, IoU, and cosine similarity. The proposed framework scored 97.73% accuracy on histopathological data and 99.01% on ultrasound data. The experimental results show that the proposed framework is superior to other state-of-the-art approaches in the literature.
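Metaheuristic hyperparameter optimization of the kind MRFO performs can be illustrated with plain random search as a simplified stand-in (the real MRFO additionally uses chain, cyclone, and somersault foraging moves). The objective below is a toy function with a known optimum, not an actual validation loss:

```python
# Hyperparameter search sketch: sample candidate settings within bounds
# and keep the best-scoring one. A simple stand-in for MRFO.
import random

def random_search(objective, bounds, n_iter=200, seed=42):
    rng = random.Random(seed)
    best_x, best_val = None, float("inf")
    for _ in range(n_iter):
        x = {name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
        val = objective(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Toy "validation loss" with a known optimum at lr = 0.1, dropout = 0.5:
def val_loss(h):
    return (h["lr"] - 0.1) ** 2 + (h["dropout"] - 0.5) ** 2

bounds = {"lr": (1e-4, 1.0), "dropout": (0.0, 0.9)}
best, loss = random_search(val_loss, bounds)
assert loss < 0.05  # close to the known optimum after 200 draws
```

In the real framework, the objective would be the CNN's validation metric, which makes each evaluation expensive; that cost is what motivates smarter foraging-style search over plain random sampling.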
Affiliation(s)
- Nadiah A. Baghdadi
- College of Nursing, Nursing Management and Education Department, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Amer Malki
- College of Computer Science and Engineering, Taibah University, Yanbu, Saudi Arabia
- Hossam Magdy Balaha
- Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Yousry AbdulAzeem
- Computer Engineering Department, Misr Higher Institute for Engineering and Technology, Mansoura, Egypt
- Mahmoud Badawy
- Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Mostafa Elhosseini
- College of Computer Science and Engineering, Taibah University, Yanbu, Saudi Arabia
- Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
26
Rajasree PM, Jatti A, Santosh D, Desai U, Krishnappa VD. Breast Masses Detection and Segmentation in Full-Field Digital Mammograms using Unified Convolution Neural Network. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:1002-1007. [PMID: 36085669 DOI: 10.1109/embc48229.2022.9871866] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Breast cancer has been the primary cause of mortality in women between their twenties and sixties worldwide; early detection and treatment give patients access to effective therapy and decrease the mortality rate. Furthermore, recent research indicates that even experienced physicians face many limitations, so a plethora of work has been carried out to develop automated mechanisms for segmenting and classifying the affected area and the type of cancer; however, the task is still considered highly challenging due to the variability of tumours in shape, size, and location and the low signal-to-noise ratio. Mammographic mass segmentation and detection are usually performed as separate tasks, with convolutional neural networks the most widely adopted architecture for both. In this research, we designed and developed a unified CNN architecture to perform both segmentation and detection of breast masses. The unified-CNN is developed by improving a standard CNN: it introduces a novel convolution module, combined through an additional offset, that aims for a high-level feature map in order to achieve high prediction quality, and ROI pooling is utilized for boundary detection in images. A Random Region Selection (RRS) mechanism is applied as the data augmentation approach to select the boundary region of the affected area, and robust model training is designed and optimized. Unified-CNN is evaluated on the INbreast dataset using metrics such as true positive rate at FPI (false positives per image) and Dice index, and comparative analysis is carried out with various existing methodologies.
27
Chen X, Zhang K, Abdoli N, Gilley PW, Wang X, Liu H, Zheng B, Qiu Y. Transformers Improve Breast Cancer Diagnosis from Unregistered Multi-View Mammograms. Diagnostics (Basel) 2022; 12:diagnostics12071549. [PMID: 35885455 PMCID: PMC9320758 DOI: 10.3390/diagnostics12071549] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Revised: 06/21/2022] [Accepted: 06/24/2022] [Indexed: 11/16/2022] Open
Abstract
Deep convolutional neural networks (CNNs) have been widely used in various medical imaging tasks. However, due to the intrinsic locality of convolution operations, CNNs generally cannot model long-range dependencies well, which are important for accurately identifying or mapping corresponding breast lesion features computed from unregistered multiple mammograms. This motivated us to leverage the architecture of Multi-view Vision Transformers to capture long-range relationships of multiple mammograms from the same patient in one examination. For this purpose, we employed local transformer blocks to separately learn patch relationships within four mammograms acquired from two-view (CC/MLO) of two-side (right/left) breasts. The outputs from different views and sides were concatenated and fed into global transformer blocks, to jointly learn patch relationships between four images representing two different views of the left and right breasts. To evaluate the proposed model, we retrospectively assembled a dataset involving 949 sets of mammograms, which included 470 malignant cases and 479 normal or benign cases. We trained and evaluated the model using a five-fold cross-validation method. Without any arduous preprocessing steps (e.g., optimal window cropping, chest wall or pectoral muscle removal, two-view image registration, etc.), our four-image (two-view-two-side) transformer-based model achieves case classification performance with an area under ROC curve (AUC = 0.818 ± 0.039), which significantly outperforms AUC = 0.784 ± 0.016 achieved by the state-of-the-art multi-view CNNs (p = 0.009). It also outperforms two one-view-two-side models that achieve AUC of 0.724 ± 0.013 (CC view) and 0.769 ± 0.036 (MLO view), respectively. The study demonstrates the potential of using transformers to develop high-performing computer-aided diagnosis schemes that combine four mammograms.
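The local/global attention pattern described above can be sketched in plain Python: attend within each view, concatenate the four views' tokens, then attend across all of them. The random projections and toy dimensions below are illustrative stand-ins for learned transformer weights, not the paper's model.

```python
import math
import random

def matmul(A, B):
    """Multiply two matrices given as nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def self_attention(tokens, seed=0):
    """Single-head scaled dot-product self-attention with random
    projection matrices (illustrative only; no learned weights)."""
    rng = random.Random(seed)
    d = len(tokens[0])
    def rand_mat():
        return [[rng.gauss(0, 1) / math.sqrt(d) for _ in range(d)] for _ in range(d)]
    Q, K, V = (matmul(tokens, rand_mat()) for _ in range(3))
    out = []
    for qi in Q:
        logits = [sum(q * k for q, k in zip(qi, kj)) / math.sqrt(d) for kj in K]
        m = max(logits)
        w = [math.exp(l - m) for l in logits]       # softmax over keys
        s = sum(w)
        w = [x / s for x in w]
        out.append([sum(wi * vj[t] for wi, vj in zip(w, V)) for t in range(d)])
    return out

# four views of one patient: L-CC, L-MLO, R-CC, R-MLO, each 8 patch tokens of dim 16
rng = random.Random(42)
views = [[[rng.gauss(0, 1) for _ in range(16)] for _ in range(8)] for _ in range(4)]

# local blocks: attention within each view separately
local_out = [self_attention(v, seed=i) for i, v in enumerate(views)]

# global block: concatenate all views' tokens and attend across views
global_in = [tok for view in local_out for tok in view]   # 32 tokens
global_out = self_attention(global_in, seed=99)

# patient-level embedding by mean pooling over all tokens
embedding = [sum(tok[t] for tok in global_out) / len(global_out) for t in range(16)]
```

The key design point mirrored here is that no image registration is needed: each view contributes a bag of patch tokens, and the global block learns cross-view patch relationships directly.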
Affiliation(s)
- Xuxin Chen
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Correspondence: (X.C.); (Y.Q.)
- Ke Zhang
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK 73019, USA
- Neman Abdoli
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Patrik W. Gilley
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Hong Liu
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Yuchen Qiu
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Correspondence: (X.C.); (Y.Q.)
28
Forrai G, Kovács E, Ambrózay É, Barta M, Borbély K, Lengyel Z, Ormándi K, Péntek Z, Tünde T, Sebő É. Use of Diagnostic Imaging Modalities in Modern Screening, Diagnostics and Management of Breast Tumours 1st Central-Eastern European Professional Consensus Statement on Breast Cancer. Pathol Oncol Res 2022; 28:1610382. [PMID: 35755417 PMCID: PMC9214693 DOI: 10.3389/pore.2022.1610382] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/20/2022] [Accepted: 04/29/2022] [Indexed: 12/11/2022]
Abstract
Breast radiologists and nuclear medicine specialists updated their previous recommendations at the 4th Hungarian Breast Cancer Consensus Conference in Kecskemét. A recommendation is hereby made that breast tumours should be screened, diagnosed and treated according to these guidelines. These professional guidelines incorporate the latest technical developments and research findings, including the role of imaging methods in therapy and follow-up. They include details on domestic development proposals and also address related areas (forensic medicine, media, regulations, reimbursement). The entire material has been agreed upon with the related medical disciplines.
Affiliation(s)
- Gábor Forrai
- GÉ-RAD Kft., Budapest, Hungary
- Duna Medical Center, Budapest, Hungary
- Eszter Kovács
- GÉ-RAD Kft., Budapest, Hungary
- Duna Medical Center, Budapest, Hungary
- Katalin Borbély
- National Institute of Oncology, Budapest, Hungary
- Ministry of Human Capacities, Budapest, Hungary
- Tasnádi Tünde
- Dr Réthy Pál Member Hospital of Békés County Central Hospital, Békéscsaba, Hungary
- Éva Sebő
- Kenézy Gyula University Hospital, University of Debrecen, Debrecen, Hungary
29
30
Baccouche A, Garcia-Zapirain B, Zheng Y, Elmaghraby AS. Early detection and classification of abnormality in prior mammograms using image-to-image translation and YOLO techniques. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 221:106884. [PMID: 35594582 DOI: 10.1016/j.cmpb.2022.106884] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Revised: 04/27/2022] [Accepted: 05/10/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Computer-aided detection (CAD) systems have been developed to assist radiologists in finding suspicious lesions in mammograms. Deep learning technology has recently made it possible to recognize abnormality at an early stage, in order to avoid unnecessary biopsies and decrease the mortality rate. In this study, we investigated the effectiveness of an end-to-end fusion model based on the You-Only-Look-Once (YOLO) architecture to simultaneously detect and classify suspicious breast lesions on digital mammograms. Four categories of cases were included: Mass, Calcification, Architectural Distortion, and Normal, from a private digital mammographic database of 413 cases. For all cases, Prior mammograms (typically scanned one year earlier) were all reported as Normal, while Current mammograms were diagnosed as cancerous (confirmed by biopsies) or healthy. METHODS We propose to apply the YOLO-based fusion model to the Current mammograms for breast lesion detection and classification, and then to apply the same model retrospectively to synthetic mammograms for early cancer prediction, where the synthetic mammograms were generated from the Prior mammograms using the image-to-image translation models CycleGAN and Pix2Pix. RESULTS Evaluation results showed that our methodology could detect and classify breast lesions on Current mammograms with highest rates of 93% ± 0.118 for Mass lesions, 88% ± 0.09 for Calcification lesions, and 95% ± 0.06 for Architectural Distortion lesions. On Prior mammograms, we report highest rates of 36% ± 0.01 for Mass lesions, 14% ± 0.01 for Calcification lesions, and 50% ± 0.02 for Architectural Distortion lesions. Normal mammograms were classified with accuracy rates of 92% ± 0.09 and 90% ± 0.06 on Current and Prior exams, respectively.
CONCLUSIONS Our proposed framework was first developed to help detect and identify suspicious breast lesions in X-ray mammograms at the Current screening. It was also designed to reduce the temporal changes between pairs of Prior and follow-up screenings, so that the location and type of abnormalities can be predicted early in the Prior mammogram screening. The paper presents a CAD method to assist doctors and experts in identifying the risk of breast cancer. Overall, the proposed CAD method incorporates advances in image processing, deep learning, and image-to-image translation for a biomedical application.
Affiliation(s)
- Asma Baccouche
- Department of Computer Science and Engineering, University of Louisville, Louisville, KY, 40292, USA.
- Yufeng Zheng
- University of Mississippi Medical Center, Jackson, MS, 39216, USA
- Adel S Elmaghraby
- Department of Computer Science and Engineering, University of Louisville, Louisville, KY, 40292, USA
31
Improved Deep Convolutional Neural Networks via Boosting for Predicting the Quality of In Vitro Bovine Embryos. ELECTRONICS 2022. [DOI: 10.3390/electronics11091363] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
Automated diagnosis for the quality of bovine in vitro-derived embryos based on imaging data is an important research problem in developmental biology. By predicting the quality of embryos correctly, embryologists can (1) avoid the time-consuming and tedious work of subjective visual examination to assess the quality of embryos; (2) automatically perform real-time evaluation of embryos, which accelerates the examination process; and (3) possibly avoid the economic, social, and medical implications caused by poor-quality embryos. While generated embryo images provide an opportunity for analyzing such images, there is a lack of consistent noninvasive methods utilizing deep learning to assess the quality of embryos. Hence, designing high-performance deep learning algorithms is crucial for data analysts who work with embryologists. A key goal of this study is to provide advanced deep learning tools to embryologists, who would, in turn, use them as prediction calculators to evaluate the quality of embryos. The proposed deep learning approaches utilize a modified convolutional neural network, with or without boosting techniques, to improve the prediction performance. Experimental results on image data pertaining to in vitro bovine embryos show that our proposed deep learning approaches perform better than existing baseline approaches in terms of prediction performance and statistical significance.
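The boosting side of the approach above can be sketched with classic AdaBoost over weak learners; the 1-D threshold stumps below are illustrative stand-ins for the weak CNN scorers the abstract boosts, and the toy data and labels are assumptions.

```python
import math

def adaboost(X, y, thresholds, rounds=10):
    """AdaBoost over 1-D threshold stumps (stand-ins for weak CNN scorers).

    y must be in {-1, +1}. Returns the ensemble as (alpha, threshold,
    polarity) triples; alpha weights each weak learner by its accuracy.
    """
    n = len(X)
    w = [1.0 / n] * n                      # sample weights
    ensemble = []
    for _ in range(rounds):
        best = None
        for t in thresholds:
            for pol in (1, -1):
                preds = [pol if x > t else -pol for x in X]
                err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
                if best is None or err < best[0]:
                    best = (err, t, pol, preds)
        err, t, pol, preds = best
        err = min(max(err, 1e-10), 1 - 1e-10)  # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        # re-weight: misclassified samples gain weight
        w = [wi * math.exp(-alpha * yi * p) for wi, yi, p in zip(w, y, preds)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (pol if x > t else -pol) for a, t, pol in ensemble)
    return 1 if score >= 0 else -1

# toy data: a quality score -> good (+1) / poor (-1) embryo label
X = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
y = [-1, -1, -1, -1, 1, 1, 1, 1]
model = adaboost(X, y, thresholds=[0.15, 0.35, 0.5, 0.75])
```

In the paper's setting the weak learners would be trained CNN variants and the boosting weights would combine their per-image predictions; the stump machinery here just makes the weighting scheme concrete.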
32
Wimmer M, Sluiter G, Major D, Lenis D, Berg A, Neubauer T, Buhler K. Multi-Task Fusion for Improving Mammography Screening Data Classification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:937-950. [PMID: 34788218 DOI: 10.1109/tmi.2021.3129068] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Machine learning and deep learning methods have become essential for computer-assisted prediction in medicine, with a growing number of applications also in the field of mammography. Typically these algorithms are trained for a specific task, e.g., the classification of lesions or the prediction of a mammogram's pathology status. To obtain a comprehensive view of a patient, models which were all trained for the same task(s) are subsequently ensembled or combined. In this work, we propose a pipeline approach, where we first train a set of individual, task-specific models and subsequently investigate the fusion thereof, which is in contrast to the standard model ensembling strategy. We fuse model predictions and high-level features from deep learning models with hybrid patient models to build stronger predictors on patient level. To this end, we propose a multi-branch deep learning model which efficiently fuses features across different tasks and mammograms to obtain a comprehensive patient-level prediction. We train and evaluate our full pipeline on public mammography data, i.e., DDSM and its curated version CBIS-DDSM, and report an AUC score of 0.962 for predicting the presence of any lesion and 0.791 for predicting the presence of malignant lesions on patient level. Overall, our fusion approaches improve AUC scores significantly by up to 0.04 compared to standard model ensembling. Moreover, by providing not only global patient-level predictions but also task-specific model results that are related to radiological features, our pipeline aims to closely support the reading workflow of radiologists.
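The patient-level fusion idea above — combining task-specific model predictions and high-level features into one stronger predictor — can be sketched as a simple late-fusion scorer. The weights and model outputs here are hypothetical; the paper uses a learned multi-branch network rather than fixed weights.

```python
import math

def fuse_patient(task_probs, task_feats, weights, bias=0.0):
    """Late fusion: combine task-specific probabilities and mean-pooled
    high-level features into one patient-level malignancy score."""
    pooled = [sum(f) / len(f) for f in task_feats]  # pool each feature vector
    z = bias
    z += sum(w * p for w, p in zip(weights["probs"], task_probs))
    z += sum(w * f for w, f in zip(weights["feats"], pooled))
    return 1.0 / (1.0 + math.exp(-z))               # sigmoid

# hypothetical outputs of two task-specific models
# (e.g., a lesion classifier and a pathology-status classifier)
task_probs = [0.8, 0.65]
task_feats = [[0.2, 0.9, 0.4], [0.7, 0.1, 0.5]]
weights = {"probs": [2.0, 1.5], "feats": [0.5, 0.5]}
patient_score = fuse_patient(task_probs, task_feats, weights, bias=-2.0)
```

The contrast with plain ensembling is visible even in this sketch: the fusion input mixes outputs of *different* tasks (probabilities plus features), not repeated predictions of the same task.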
33
Bhalla D, Ramachandran A, Rangarajan K, Dhanakshirur R, Banerjee S, Arora C. Basic Principles AI Simplified For A Medical Practitioner: Pearls And Pitfalls In Evaluating AI Algorithms. Curr Probl Diagn Radiol 2022; 52:47-55. [DOI: 10.1067/j.cpradiol.2022.04.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2022] [Revised: 03/14/2022] [Accepted: 04/18/2022] [Indexed: 11/22/2022]
34
Ueda D, Yamamoto A, Onoda N, Takashima T, Noda S, Kashiwagi S, Morisaki T, Fukumoto S, Shiba M, Morimura M, Shimono T, Kageyama K, Tatekawa H, Murai K, Honjo T, Shimazaki A, Kabata D, Miki Y. Development and validation of a deep learning model for detection of breast cancers in mammography from multi-institutional datasets. PLoS One 2022; 17:e0265751. [PMID: 35324962 PMCID: PMC8947392 DOI: 10.1371/journal.pone.0265751] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2021] [Accepted: 03/07/2022] [Indexed: 12/24/2022] Open
Abstract
Objectives The objective of this study was to develop and validate a state-of-the-art, deep learning (DL)-based model for detecting breast cancers on mammography. Methods Mammograms in a hospital development dataset, a hospital test dataset, and a clinic test dataset were retrospectively collected from January 2006 through December 2017 at Osaka City University Hospital and Medcity21 Clinic. The hospital development dataset and the publicly available Digital Database for Screening Mammography (DDSM) dataset were used to train and validate RetinaNet, one type of DL-based model, with five-fold cross-validation. The model's sensitivity, mean false positive indications per image (mFPI), and partial area under the curve (AUC) at 1.0 mFPI were externally assessed on both test datasets. Results The hospital development dataset, hospital test dataset, clinic test dataset, and DDSM development dataset included a total of 3179 images (1448 malignant), 491 images (225 malignant), 2821 images (37 malignant), and 1457 malignant images, respectively. The proposed model detected all cancers with 0.45–0.47 mFPI and had partial AUCs of 0.93 in both test datasets. Conclusions The DL-based model developed for this study was able to detect all breast cancers with a very low mFPI. Our DL-based model achieved the highest performance to date, which might lead to improved diagnosis of breast cancer.
Affiliation(s)
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, Osaka, Japan
- * E-mail:
- Akira Yamamoto
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Naoyoshi Onoda
- Department of Breast and Endocrine Surgery, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Tsutomu Takashima
- Department of Breast and Endocrine Surgery, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Satoru Noda
- Department of Breast and Endocrine Surgery, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Shinichiro Kashiwagi
- Department of Breast and Endocrine Surgery, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Tamami Morisaki
- Department of Breast and Endocrine Surgery, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Department of Premier Preventive Medicine, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Shinya Fukumoto
- Department of Premier Preventive Medicine, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Masatsugu Shiba
- Department of Gastroenterology, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Mina Morimura
- Department of General Practice, Osaka City University Hospital, Osaka, Japan
- Taro Shimono
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Ken Kageyama
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Hiroyuki Tatekawa
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Kazuki Murai
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Takashi Honjo
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Akitoshi Shimazaki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Daijiro Kabata
- Department of Medical Statistics, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Yukio Miki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, Osaka, Japan
35
TTCNN: A Breast Cancer Detection and Classification towards Computer-Aided Diagnosis Using Digital Mammography in Early Stages. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12073273] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
Breast cancer is a major research area in the medical image analysis field; it is a dangerous disease and a major cause of death among women. Early and accurate diagnosis of breast cancer based on digital mammograms can enhance disease detection accuracy. For computer-aided diagnosis (CAD) systems to help radiologists diagnose breast lesions accurately, the lesions must be detected, segmented, and classified in the medical imagery. Therefore, an accurate breast cancer detection and classification approach is proposed for the screening of mammograms. In this paper, we present a deep learning system that can identify breast cancer in mammogram screening images using an end-to-end training strategy that efficiently uses mammography images for computer-aided breast cancer recognition in the early stages. First, the proposed approach implements a modified contrast enhancement method to refine edge detail in the source mammogram images. Next, the transferable texture convolutional neural network (TTCNN) is presented to enhance classification performance; an energy layer is integrated to extract texture features from the convolutional layer. The proposed architecture consists of only three convolution layers and one energy layer, rather than a pooling layer. In the third stage, we analyzed the performance of TTCNN based on deep features of convolutional neural network models (InceptionResNet-V2, Inception-V3, VGG-16, VGG-19, GoogLeNet, ResNet-18, ResNet-50, and ResNet-101), where the deep features are extracted from the layers that most enhance classification accuracy. In the fourth stage, all the extracted feature vectors are fused using the convolutional sparse image decomposition approach and, finally, the best features are selected by the entropy-controlled firefly method. The proposed approach was evaluated on the DDSM, INbreast, and MIAS datasets and attained an average accuracy of 97.49%.
Our proposed transferable texture CNN-based method for classifying screening mammograms has outperformed prior methods. These findings demonstrate that automatic deep learning algorithms can be easily trained to achieve high accuracy in diverse mammography images, and can offer great potential to improve clinical tools to minimize false positive and false negative screening mammography results.
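The energy layer described in the abstract above replaces pooling with a per-channel texture-energy statistic. A minimal sketch, assuming mean absolute activation as the energy statistic (one common choice; the paper's exact formulation may differ):

```python
def energy_layer(feature_maps):
    """Energy layer: instead of pooling, summarise each convolutional
    channel by its mean absolute activation (a texture-energy statistic),
    yielding one scalar texture feature per channel."""
    feats = []
    for fmap in feature_maps:          # fmap: 2-D list (H x W) for one channel
        total = sum(abs(v) for row in fmap for v in row)
        count = sum(len(row) for row in fmap)
        feats.append(total / count)
    return feats

# two hypothetical 2x2 channels from a convolutional layer
maps = [[[1.0, -1.0], [2.0, 0.0]],
        [[0.5, 0.5], [-0.5, -0.5]]]
texture_vector = energy_layer(maps)    # one energy value per channel
```

Unlike max or average pooling, which keep a spatial map, this collapses each channel to a single texture descriptor, which is why the resulting vector can feed a classifier directly.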
36
Ayana G, Park J, Choe SW. Patchless Multi-Stage Transfer Learning for Improved Mammographic Breast Mass Classification. Cancers (Basel) 2022; 14:cancers14051280. [PMID: 35267587 PMCID: PMC8909211 DOI: 10.3390/cancers14051280] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2021] [Revised: 02/22/2022] [Accepted: 02/24/2022] [Indexed: 02/01/2023] Open
Abstract
Simple Summary
In this study, we propose a novel deep-learning method based on multi-stage transfer learning (MSTL) from ImageNet and cancer cell line image pre-trained models to classify mammographic masses as either benign or malignant. The proposed method alleviates the challenge of obtaining large amounts of labeled mammogram training data by utilizing a large number of cancer cell line microscopic images as an intermediate domain of learning between the natural domain (ImageNet) and the medical domain (mammography). Moreover, our method does not utilize patch separation (to segment the region of interest before classification), which renders it computationally simple and fast compared to previous studies. The findings of this study are of crucial importance in the early diagnosis of breast cancer in young women with dense breasts, because mammography does not provide reliable diagnosis in such cases.
Abstract
Despite great achievements in classifying mammographic breast-mass images via deep learning (DL), obtaining large amounts of training data and ensuring generalization across different datasets with robust and well-optimized algorithms remain a challenge. ImageNet-based transfer learning (TL) and patch classifiers have been utilized to address these challenges. However, researchers have been unable to achieve the desired performance for DL to be used as a standalone tool. In this study, we propose a novel multi-stage TL from ImageNet and cancer cell line image pre-trained models to classify mammographic breast masses as either benign or malignant. We trained our model on three public datasets: Digital Database for Screening Mammography (DDSM), INbreast, and Mammographic Image Analysis Society (MIAS). In addition, a mixed dataset of the images from these three datasets was used to train the model. We obtained average five-fold cross-validation AUCs of 1, 0.9994, 0.9993, and 0.9998 for the DDSM, INbreast, MIAS, and mixed datasets, respectively. Moreover, the observed performance improvement using our method against the patch-based method was statistically significant, with a p-value of 0.0029. Furthermore, our patchless approach performed better than patch- and whole-image-based methods, improving test accuracy by 8% (91.41% vs. 99.34%) on the INbreast dataset. The proposed method is of significant importance in addressing the need for a large training dataset as well as reducing the computational burden of training and implementing mammography-based deep-learning models for early diagnosis of breast cancer.
Affiliation(s)
- Gelan Ayana
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Jinhyung Park
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Se-woon Choe
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Correspondence: ; Tel.: +82-54-478-7781; Fax: +82-54-462-1049
37
Cho HH, Kim CK, Park H. Overview of radiomics in prostate imaging and future directions. Br J Radiol 2022; 95:20210539. [PMID: 34797688 PMCID: PMC8978251 DOI: 10.1259/bjr.20210539] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022] Open
Abstract
Recent advancements in imaging technology and analysis methods have led to an analytic framework known as radiomics. This framework extracts comprehensive high-dimensional features from imaging data and performs data mining to build analytical models for improved decision-support. Its features include many categories spanning texture and shape; thus, it can provide abundant information for precision medicine. Many studies of prostate radiomics have shown promising results in the assessment of pathological features, prediction of treatment response, and stratification of risk groups. Herein, we aimed to provide a general overview of radiomics procedures, discuss technical issues, explain various clinical applications, and suggest future research directions, especially for prostate imaging.
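The feature-extraction step at the heart of the radiomics framework described above can be sketched with a few first-order statistics computed over a region of interest. The feature set, the 8-bin histogram, and the intensity values below are illustrative, not any specific radiomics package's defaults.

```python
import math

def first_order_features(roi):
    """First-order radiomics features from a flattened ROI intensity list:
    mean, standard deviation, and histogram entropy."""
    n = len(roi)
    mean = sum(roi) / n
    var = sum((v - mean) ** 2 for v in roi) / n
    std = math.sqrt(var)
    # Shannon entropy over 8 fixed-width intensity bins
    lo, hi = min(roi), max(roi)
    bins = [0] * 8
    for v in roi:
        idx = min(int((v - lo) / (hi - lo + 1e-12) * 8), 7)
        bins[idx] += 1
    probs = [c / n for c in bins if c]
    entropy = -sum(p * math.log2(p) for p in probs)
    return {"mean": mean, "std": std, "entropy": entropy}

roi = [10, 12, 11, 40, 42, 41, 10, 43]  # hypothetical lesion intensities
feats = first_order_features(roi)
```

A full radiomics pipeline would add shape and texture (e.g., co-occurrence) feature families and then feed the resulting high-dimensional vector into the data-mining models the review discusses.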
Affiliation(s)
- Hwan-Ho Cho
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Korea
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Korea
- Chan Kyo Kim
- Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Hyunjin Park
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Korea
- School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon, Korea
38
Bhuyan HK, Chakraborty C, Shelke Y, Pani SK. COVID-19 diagnosis system by deep learning approaches. EXPERT SYSTEMS 2022; 39:e12776. [PMID: 34511691 PMCID: PMC8420221 DOI: 10.1111/exsy.12776] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Revised: 05/26/2021] [Accepted: 07/01/2021] [Indexed: 05/15/2023]
Abstract
The novel coronavirus disease 2019 (COVID-19) is a severe health issue that affects the respiratory system and spreads very fast from one person to another across countries. Only limited diagnostic techniques are available to identify COVID-19 patients, and they are not sufficiently effective for controlling the disease. These complex circumstances call for detecting suspected COVID-19 patients from routine imaging, such as chest X-rays or CT scans, through computerized diagnosis systems that perform mass detection, segmentation, and classification. In this paper, regional deep learning approaches are used to detect lung areas infected by the coronavirus. For mass segmentation of the infected region, a deep convolutional neural network (CNN) is used to identify the specific infected area, and a full-resolution convolutional network (FrCN) classifies it into COVID-19 or non-COVID-19 patients. The proposed model is evaluated for detection, segmentation, and classification on a trained and tested COVID-19 patient dataset. Evaluation results are generated using a fourfold cross-validation test with several metrics, such as sensitivity, specificity, Jaccard index, Dice (F1-score), Matthews correlation coefficient (MCC), and overall accuracy. Comparative classification accuracy is evaluated on the validated test dataset both with and without mass segmentation.
Affiliation(s)
- Hemanta Kumar Bhuyan
- Department of Information Technology, Vignan's Foundation for Science, Technology & Research (VFSTR), Guntur, India
- Chinmay Chakraborty
- Electronics & Communication Engineering, Birla Institute of Technology, Mesra, Jharkhand, India
- Yogesh Shelke
- Medical Professional with Aranca Technology Research & Advisory, Mumbai, India
- Subhendu Kumar Pani
- Department of Computer Science & Engineering, Krupajal Computer Academy, Bhubaneswar, Odisha, India
39
Khanam N, Kumar R. Recent Applications of Artificial Intelligence in Early Cancer Detection. Curr Med Chem 2022; 29:4410-4435. [PMID: 35196970 DOI: 10.2174/0929867329666220222154733] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Revised: 11/30/2021] [Accepted: 12/08/2021] [Indexed: 11/22/2022]
Abstract
Cancer is a deadly disease often caused by the accumulation of various genetic mutations and pathological alterations. The death rate can only be reduced when it is detected in the early stages because treatment of cancer when the tumor has not metastasized in many regions of the body is more effective. However, early cancer detection is fraught with difficulties. Advances in artificial intelligence (AI) have developed a new scope for efficient and early detection of such a fatal disease. AI algorithms have a remarkable ability to perform well on a variety of tasks that are presented or fed to the system. Numerous studies have produced machine learning and deep learning-assisted cancer prediction models to detect cancer from previously accessible data with better accuracy, sensitivity, and specificity. It has been observed that the accuracy of prediction models in classifying fed data as benign, malignant, or normal is improved by implementing efficient image processing techniques and data segmentation augmentation methodologies, along with advanced algorithms. In this review, recent AI-based models for the diagnosis of the most prevalent cancers in the breast, lung, brain, and skin have been analysed. Available AI techniques, data preparation, modeling processes, and performance assessments have been included in the review.
Affiliation(s)
- Nausheen Khanam
- Amity Institute of Biotechnology, Amity University Uttar Pradesh Lucknow Campus, Uttar Pradesh, India
- Rajnish Kumar
- Amity Institute of Biotechnology, Amity University Uttar Pradesh Lucknow Campus, Uttar Pradesh, India
40
Breast Cancer Mammograms Classification Using Deep Neural Network and Entropy-Controlled Whale Optimization Algorithm. Diagnostics (Basel) 2022; 12:557. PMID: 35204646. PMCID: PMC8871265. DOI: 10.3390/diagnostics12020557.
Abstract
Breast cancer has affected many women worldwide. Because inspection of mammogram images by radiologists is a difficult and time-consuming task, many computer-aided diagnosis (CAD) systems have been established to detect and classify breast cancer, diagnose the disease early, and support better treatment. There is still a need to improve existing CAD systems by incorporating new methods and technologies in order to provide more precise results. This paper aims to investigate ways to prevent the disease as well as to provide new classification methods that reduce the risk of breast cancer in women's lives. The best feature optimization is performed to classify the results accurately, and the CAD system's accuracy is improved by reducing false-positive rates. The Modified Entropy Whale Optimization Algorithm (MEWOA), based on fusion, is proposed for deep feature extraction and classification. In the proposed method, fine-tuned MobileNetV2 and NASNet Mobile networks are applied: features are extracted from each, then fused and optimized using MEWOA. Finally, machine learning classifiers are applied to the optimized deep features to classify the breast cancer images. Three publicly available datasets are used for feature extraction and classification: INbreast, MIAS, and CBIS-DDSM. The maximum accuracy achieved is 99.7% on INbreast, 99.8% on MIAS, and 93.8% on CBIS-DDSM. Finally, a comparison with other existing methods demonstrates that the proposed algorithm outperforms those approaches.
41
Kumar Singh K, Kumar S, Antonakakis M, Moirogiorgou K, Deep A, Kashyap KL, Bajpai MK, Zervakis M. Deep Learning Capabilities for the Categorization of Microcalcification. Int J Environ Res Public Health 2022; 19:2159. PMID: 35206347. PMCID: PMC8871762. DOI: 10.3390/ijerph19042159.
Abstract
Breast cancer is the most common cancer in women worldwide. It is the most frequently diagnosed cancer among women in 140 of 184 reporting countries. Lesions of breast cancer are abnormal areas in the breast tissues. Types of breast cancer lesions include (1) microcalcifications, (2) masses, (3) architectural distortion, and (4) bilateral asymmetry. Microcalcification can be classified as benign, malignant, or benign without a callback. In the present manuscript, we propose an automatic pipeline for the detection of the various categories of microcalcification. We applied deep learning with convolutional neural networks (CNNs) for the automatic detection and classification of all three categories of microcalcification, using four different optimizers (ADAM, ADAGrad, ADADelta, and RMSProp). Input images of size 299 × 299 × 3 were used, with fully connected ReLU and SoftMax output activation functions. The feature map was obtained using the pretrained InceptionResNetV2 model. The performance of our classification scheme was evaluated on the curated breast imaging subset of the DDSM mammogram dataset (CBIS-DDSM), and the results are expressed in terms of sensitivity, specificity, accuracy, and area under the curve (AUC). Our proposed classification scheme outperforms previously used deep learning approaches and classical machine learning schemes.
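The metrics this entry reports (sensitivity, specificity, accuracy, AUC) can be computed directly from a confusion matrix and ranked scores. The sketch below is an illustrative stdlib-only implementation, not code from the cited paper; the variable names and the threshold of 0.5 are assumptions for the example.

```python
# Sensitivity, specificity, accuracy from confusion counts, and AUC via the
# Mann-Whitney U statistic (probability that a random positive outranks a
# random negative). Labels: 1 = malignant, 0 = benign/normal.

def confusion_counts(y_true, y_pred):
    """Return (TP, TN, FP, FN) for flat binary label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def sensitivity(tp, fn):
    return tp / (tp + fn)          # true positive rate (recall)

def specificity(tn, fp):
    return tn / (tn + fp)          # true negative rate

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def auc(y_true, scores):
    """AUC: tied scores count as half a win."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy usage: thresholding classifier scores at 0.5.
y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.6, 0.3, 0.1]
y_pred = [1 if s >= 0.5 else 0 for s in scores]
tp, tn, fp, fn = confusion_counts(y_true, y_pred)   # (2, 2, 1, 1)
```

On this toy example sensitivity, specificity, and accuracy all come out to 2/3, while the AUC of 8/9 is threshold-free and therefore higher — the usual reason AUC is reported alongside thresholded metrics.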
Affiliation(s)
- Koushlendra Kumar Singh
- Machine Vision and Intelligence Lab, Department of Computer Science and Engineering, National Institute of Technology, Jamshedpur 831014, India
- Suraj Kumar
- Machine Vision and Intelligence Lab, Department of Computer Science and Engineering, National Institute of Technology, Jamshedpur 831014, India
- Marios Antonakakis
- Digital Image and Signal Processing Laboratory, School of Electrical and Computer Engineering, Technical University of Crete, 73100 Crete, Greece
- Correspondence:
- Konstantina Moirogiorgou
- Digital Image and Signal Processing Laboratory, School of Electrical and Computer Engineering, Technical University of Crete, 73100 Crete, Greece
- Anirudh Deep
- Machine Vision and Intelligence Lab, Department of Computer Science and Engineering, National Institute of Technology, Jamshedpur 831014, India
- Kanchan Lata Kashyap
- Department of Computer Science and Engineering, Vellore Institute of Technology University, Bhopal 466114, India
- Manish Kumar Bajpai
- Computer Science and Engineering Discipline, PDPM Indian Institute of Information Technology Design Manufacturing, Jabalpur 482005, India
- Michalis Zervakis
- Digital Image and Signal Processing Laboratory, School of Electrical and Computer Engineering, Technical University of Crete, 73100 Crete, Greece
42
Deep convolutional neural networks for computer-aided breast cancer diagnostic: a survey. Neural Comput Appl 2022. DOI: 10.1007/s00521-021-06804-y.
43
Shah SM, Khan RA, Arif S, Sajid U. Artificial intelligence for breast cancer analysis: Trends & directions. Comput Biol Med 2022; 142:105221. PMID: 35016100. DOI: 10.1016/j.compbiomed.2022.105221.
Abstract
Breast cancer is one of the leading causes of death among women. Early detection of breast cancer can significantly improve the lives of millions of women across the globe. Given the importance of finding a solution/framework for early detection and diagnosis, many AI researchers have recently focused on automating this task. Other reasons for the surge in research activity in this direction are the advent of robust AI algorithms (deep learning), the availability of hardware that can run and train those robust and complex algorithms, and the accessibility of datasets large enough to train them. The imaging modalities that researchers have exploited to automate breast cancer detection are mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. This article analyzes these imaging modalities, presents their strengths and limitations, and lists resources from which their datasets can be accessed for research purposes. It then summarizes AI and computer vision based state-of-the-art methods proposed in the last decade to detect breast cancer using these modalities. We have focused primarily on reviewing frameworks that report results on mammograms, as mammography is the most widely used breast imaging modality and serves as the first test that medical practitioners usually prescribe for the detection of breast cancer. Another reason for this focus is the availability of labelled mammogram datasets; dataset availability is one of the most important aspects of developing AI-based frameworks, as such algorithms are data hungry and the quality of the dataset generally affects their performance. In a nutshell, this article will act as a primary resource for the research community working in the field of automated breast imaging analysis.
Affiliation(s)
- Shahid Munir Shah
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
- Rizwan Ahmed Khan
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
- Sheeraz Arif
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
- Unaiza Sajid
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
44
Li H, Chen D, Nailon WH, Davies ME, Laurenson DI. Dual Convolutional Neural Networks for Breast Mass Segmentation and Diagnosis in Mammography. IEEE Trans Med Imaging 2022; 41:3-13. PMID: 34351855. DOI: 10.1109/tmi.2021.3102622.
Abstract
Deep convolutional neural networks (CNNs) have emerged as a new paradigm for mammogram diagnosis. Contemporary CNN-based computer-aided diagnosis (CAD) systems for breast cancer directly extract latent features from the input mammogram image and ignore the importance of morphological features. In this paper, we introduce a novel end-to-end deep learning framework for mammogram image processing that computes mass segmentation and simultaneously predicts diagnosis results. Specifically, our method is constructed in a dual-path architecture that solves the mapping in a dual-problem manner, with additional consideration of important shape and boundary knowledge. One path, called the Locality Preserving Learner (LPL), is devoted to hierarchically extracting and exploiting intrinsic features of the input, whereas the other path, called the Conditional Graph Learner (CGL), focuses on generating geometrical features via modeling pixel-wise image-to-mask correlations. By integrating the two learners, both the cancer semantics and cancer representations are well learned, and the component learning paths in turn complement each other, improving mass segmentation and cancer classification at the same time. In addition, by integrating an automatic detection set-up, the resulting DualCoreNet achieves fully automatic breast cancer diagnosis in practice. Experimental results show that on the benchmark DDSM dataset, DualCoreNet outperforms related works in both segmentation and classification tasks, achieving a 92.27% DI coefficient and a 0.85 AUC score. On the benchmark INbreast dataset, DualCoreNet achieves the best mammography segmentation (93.69% DI coefficient) and competitive classification performance (0.93 AUC score).
45
Muduli D, Dash R, Majhi B. Automated diagnosis of breast cancer using multi-modal datasets: A deep convolution neural network based approach. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2021.102825.
46
Moitra D, Mandal RK. Classification of malignant tumors by a non-sequential recurrent ensemble of deep neural network model. Multimed Tools Appl 2022; 81:10279-10297. PMID: 35194379. PMCID: PMC8852869. DOI: 10.1007/s11042-022-12229-z.
Abstract
Many significant efforts have so far been made to classify malignant tumors using various machine learning methods. Most studies have considered a particular tumor genre categorized according to its originating organ. While this has enriched the domain-specific knowledge of malignant tumor prediction, we are devoid of an efficient model that can predict the stages of tumors irrespective of their origin. Thus, there is ample opportunity to study whether a heterogeneous collection of tumor images can be classified according to their respective stages. The present research work prepared a heterogeneous tumor dataset comprising eight different datasets from The Cancer Imaging Archive and classified them according to their respective stages, as suggested by the American Joint Committee on Cancer. The proposed model was used to classify 717 subjects spanning different imaging modalities and varied Tumor-Node-Metastasis stages. A new non-sequential deep hybrid model ensemble was developed by exploiting branched and re-injected layers, followed by bidirectional recurrent layers, to classify tumor images. Results were compared with standard sequential deep learning models and notable recent studies. The training and validation accuracy, along with the ROC-AUC scores, were found satisfactory relative to existing models. No model or method in the literature has previously classified such a diversified mix of tumor images with such high accuracy. The proposed model may help radiologists by acting as an auxiliary decision support system and speeding up the tumor diagnosis process.
47
Agarwal R, Yap MH, Hasan MK, Zwiggelaar R, Martí R. Deep Learning in Mammography Breast Cancer Detection. Artif Intell Med 2022. DOI: 10.1007/978-3-030-64573-1_157.
48
Mridha MF, Hamid MA, Monowar MM, Keya AJ, Ohi AQ, Islam MR, Kim JM. A Comprehensive Survey on Deep-Learning-Based Breast Cancer Diagnosis. Cancers (Basel) 2021; 13:6116. PMID: 34885225. PMCID: PMC8656730. DOI: 10.3390/cancers13236116.
Abstract
Breast cancer is now the most frequently diagnosed cancer in women, and its incidence is gradually increasing. Optimistically, there is a good chance of recovery from breast cancer if it is identified and treated at an early stage. Therefore, several researchers have established deep-learning-based automated methods, valued for their efficiency and accuracy in predicting the growth of cancer cells from medical imaging modalities. As yet, few review studies on breast cancer diagnosis are available that summarize existing work, and those studies do not address emerging architectures and modalities in breast cancer diagnosis. This review focuses on the evolving architectures of deep learning for breast cancer detection. In what follows, this survey presents existing deep-learning-based architectures, analyzes the strengths and limitations of existing studies, examines the datasets used, and reviews image pre-processing techniques. Furthermore, a concrete review of diverse imaging modalities, performance metrics and results, challenges, and research directions for future researchers is presented.
Affiliation(s)
- Muhammad Firoz Mridha
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Abdul Hamid
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Muhammad Mostafa Monowar
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Ashfia Jannat Keya
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Abu Quwsar Ohi
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Rashedul Islam
- Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh
- Jong-Myon Kim
- Department of Electrical, Electronics, and Computer Engineering, University of Ulsan, Ulsan 680-749, Korea
49
Classification of Breast Cancer in Mammograms with Deep Learning Adding a Fifth Class. Appl Sci (Basel) 2021. DOI: 10.3390/app112311398.
Abstract
Breast cancer is one of the diseases of most profound concern and among the most prevalent worldwide; early detection and diagnosis, achieved through imaging techniques such as mammography, play the leading role against this disease. Radiologists tend to have a high false-positive rate in mammography diagnoses, with an accuracy of around 82%. Currently, deep learning (DL) techniques have shown promising results in the early detection of breast cancer through computer-aided diagnosis (CAD) systems that implement convolutional neural networks (CNNs). This work focuses on applying, evaluating, and comparing the architectures AlexNet, GoogLeNet, ResNet50, and VGG19 to classify breast lesions, using transfer learning with fine-tuning and training the CNNs on regions extracted from the MIAS and INbreast databases. We analyzed 14 classifiers involving 4 classes, as several previous studies have done, corresponding to benign and malignant microcalcifications and masses; as our main contribution, we also added a fifth class for the normal tissue of the mammary parenchyma, increasing correct detection. The architectures were evaluated with a statistical analysis based on the receiver operating characteristic (ROC) curve, the area under the curve (AUC), F1 score, accuracy, precision, sensitivity, and specificity. We generated the best results with the CNN GoogLeNet trained with five classes on a balanced database, with an AUC of 99.29%, F1 score of 91.92%, accuracy of 91.92%, precision of 92.15%, sensitivity of 91.70%, and specificity of 97.66%, concluding that GoogLeNet is optimal as a classifier in a CAD system to deal with breast cancer.
50
Connected-UNets: a deep learning architecture for breast mass segmentation. NPJ Breast Cancer 2021; 7:151. PMID: 34857755. PMCID: PMC8640011. DOI: 10.1038/s41523-021-00358-x.
Abstract
Breast cancer analysis requires radiologists to inspect mammograms, detect suspicious breast lesions, and identify mass tumors. Artificial intelligence techniques offer automatic breast mass segmentation systems to assist radiologists in their diagnosis. With the rapid development of deep learning and its application to medical imaging challenges, UNet and its variants are among the state-of-the-art models for medical image segmentation and have shown promising performance on mammography. In this paper, we propose an architecture, called Connected-UNets, which connects two UNets using additional modified skip connections. We integrate Atrous Spatial Pyramid Pooling (ASPP) into the two standard UNets to emphasize the contextual information within the encoder-decoder network architecture. We also apply the proposed architecture to the Attention UNet (AUNet) and the Residual UNet (ResUNet). We evaluated the proposed architectures on two publicly available datasets, the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) and INbreast, and additionally on a private dataset. Experiments were also conducted with synthetic data generated by a cycle-consistent Generative Adversarial Network (CycleGAN) model between two unpaired datasets to augment and enhance the images. Qualitative and quantitative results show that the proposed architecture achieves better automatic mass segmentation, with high Dice scores of 89.52%, 95.28%, and 95.88% and Intersection over Union (IoU) scores of 80.02%, 91.03%, and 92.27% on CBIS-DDSM, INbreast, and the private dataset, respectively.
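The Dice and IoU scores this entry reports are the two standard overlap metrics for segmentation masks. The sketch below is an illustrative stdlib-only version over flat binary masks (lists of 0/1), not the paper's code; it also encodes the identity IoU = D / (2 - D) relating the two metrics.

```python
# Dice and Intersection-over-Union (IoU) for binary segmentation masks.
# Masks are flattened lists of 0/1; a real pipeline would flatten 2-D arrays.

def dice(pred, target):
    """Dice = 2|A ∩ B| / (|A| + |B|). Returns 1.0 if both masks are empty."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2 * inter / total if total else 1.0

def iou(pred, target):
    """IoU = |A ∩ B| / |A ∪ B|; related to Dice by IoU = D / (2 - D)."""
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(1 for p, t in zip(pred, target) if p or t)
    return inter / union if union else 1.0

# Toy usage: two 4-pixel masks overlapping in one pixel.
pred, target = [1, 1, 0, 0], [1, 0, 1, 0]
d, j = dice(pred, target), iou(pred, target)   # 0.5 and 1/3
```

Because IoU = D / (2 - D) is monotonic, the two metrics always rank predictions the same way; papers report both mainly for comparability with prior work.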