1
Yamamuro M, Asai Y, Yamada T, Kimura Y, Ishii K, Kondo Y. Development and validation of the surmising model for volumetric breast density using X-ray exposure conditions in digital mammography. Med Biol Eng Comput 2024. PMID: 39218994; DOI: 10.1007/s11517-024-03186-w.
Abstract
The use of breast density as a biomarker for breast cancer treatment has not been well established owing to the difficulty of measuring time-series changes in breast density. In this study, we developed a surmising model for breast density from prior mammograms through multiple regression analysis, enabling a time-series analysis of breast density. We acquired 1320 mediolateral oblique view mammograms to construct the model. The dependent variable was the breast density of the mammary gland region segmented by certified radiological technologists, and the independent variables were the compressed breast thickness (CBT), the tube current-time product (mAs), the tube voltage (kV), and the patient's age. The coefficient of determination of the surmising model was 0.868. After applying the model, the correlation coefficients of three groups based on CBT (thin group, 18-36 mm; standard group, 38-46 mm; thick group, 48-78 mm) were 0.913, 0.945, and 0.867, respectively, with the thick group showing a significantly lower correlation coefficient (p = 0.00231). In conclusion, breast density can be accurately surmised from the CBT, mAs, tube voltage, and patient's age, even in the absence of a mammogram image.
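The fitting procedure described above can be sketched as ordinary least squares over the four predictors. All data, ranges, and coefficients below are synthetic and purely illustrative (the study's data are not reproduced here); only the general method matches the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1320
# Hypothetical predictor ranges, loosely inspired by the abstract
cbt = rng.uniform(18, 78, n)    # compressed breast thickness [mm]
mas = rng.uniform(40, 200, n)   # tube current-time product [mAs]
kv = rng.uniform(26, 34, n)     # tube voltage [kV]
age = rng.uniform(30, 80, n)    # patient age [years]

# Synthetic ground-truth density with noise, just to exercise the fit
density = 90 - 0.6 * cbt - 0.05 * mas - 0.5 * kv - 0.2 * age + rng.normal(0, 3, n)

X = np.column_stack([np.ones(n), cbt, mas, kv, age])   # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, density, rcond=None)     # fitted regression coefficients

pred = X @ beta
ss_res = ((density - pred) ** 2).sum()
ss_tot = ((density - density.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot   # coefficient of determination (the paper reports 0.868)
```

With predictors like these in hand, the model can estimate density for prior mammograms whose images are unavailable but whose exposure conditions were logged, which is the paper's point.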
Affiliation(s)
- Mika Yamamuro
- Radiology Center, Kindai University Hospital, 377-2, Osaka-Sayama, Osaka, 589-8511, Japan
- Yoshiyuki Asai
- Radiology Center, Kindai University Hospital, 377-2, Osaka-Sayama, Osaka, 589-8511, Japan
- Takahiro Yamada
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University, 377-2, Ono-Higashi, Osaka-Sayama, Osaka, 589-8511, Japan
- Yuichi Kimura
- Faculty of Informatics, Kindai University, 3-4-1, Kowakae, Higashi-Osaka, Osaka, 577-8502, Japan
- Kazunari Ishii
- Department of Radiology, Faculty of Medicine, Kindai University, 377-2, Osaka-Sayama, Osaka, 589-8511, Japan
- Yohan Kondo
- Graduate School of Health Sciences, Niigata University, Asahimachi-Dori, Chuo-Ku, Niigata, 951-8518, Japan
2
Kim E, Lewin AA. Breast Density: Where Are We Now? Radiol Clin North Am 2024;62:593-605. PMID: 38777536; DOI: 10.1016/j.rcl.2023.12.007.
Abstract
Breast density refers to the amount of fibroglandular tissue relative to fat on mammography and is determined either qualitatively through visual assessment or quantitatively. It is a heritable and dynamic trait associated with age, race/ethnicity, body mass index, and hormonal factors. Increased breast density has important clinical implications including the potential to mask malignancy and as an independent risk factor for the development of breast cancer. Breast density has been incorporated into breast cancer risk models. Given the impact of dense breasts on the interpretation of mammography, supplemental screening may be indicated.
Affiliation(s)
- Eric Kim
- Department of Radiology, New York University Grossman School of Medicine, New York, NY, USA
- Alana A Lewin
- Department of Radiology, New York University Grossman School of Medicine, New York, NY, USA; New York University Grossman School of Medicine, New York University Langone Health, Laura and Isaac Perlmutter Cancer Center, 160 East 34th Street, 3rd Floor, New York, NY 10016, USA
3
Wang L. Mammography with deep learning for breast cancer detection. Front Oncol 2024;14:1281922. PMID: 38410114; PMCID: PMC10894909; DOI: 10.3389/fonc.2024.1281922.
Abstract
X-ray mammography is currently considered the gold-standard method for breast cancer screening; however, it has limitations in sensitivity and specificity. With rapid advances in deep learning techniques, it is possible to customize mammography for each patient, providing more accurate information for risk assessment, prognosis, and treatment planning. This paper reviews recent achievements of deep learning-based mammography for breast cancer detection and classification, and highlights the potential of deep learning-assisted X-ray mammography to improve the accuracy of breast cancer screening. While the potential benefits are clear, the challenges of implementing this technology in clinical settings must be addressed. Future research should focus on refining deep learning algorithms, ensuring data privacy, improving model interpretability, and establishing generalizability to successfully integrate deep learning-assisted mammography into routine breast cancer screening programs. It is hoped that these findings will assist investigators, engineers, and clinicians in developing more effective breast imaging tools that provide accurate diagnosis with high sensitivity and specificity for breast cancer.
Affiliation(s)
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen, China
4
Shankari N, Kudva V, Hegde RB. Breast Mass Detection and Classification Using Machine Learning Approaches on Two-Dimensional Mammogram: A Review. Crit Rev Biomed Eng 2024;52:41-60. PMID: 38780105; DOI: 10.1615/critrevbiomedeng.2024051166.
Abstract
Breast cancer is a leading cause of mortality among women, both in India and globally. Breast masses are notably common in women aged 20 to 60. These masses are classified, according to the Breast Imaging-Reporting and Data System (BI-RADS) standard, into categories such as fibroadenoma, breast cysts, and benign and malignant masses. Imaging plays a vital role in the diagnosis of breast disorders, with mammography being the most widely used modality for detecting breast abnormalities over the years. However, identifying breast diseases from mammograms is time-consuming, requiring experienced radiologists to review a significant volume of images. Early detection of breast masses is crucial for effective disease management and ultimately reduces mortality rates. To address this challenge, advancements in image processing, specifically those using artificial intelligence (AI) and machine learning (ML), have paved the way for decision support systems that assist radiologists in the accurate identification and classification of breast disorders. This paper presents a review of studies in which diverse machine learning approaches have been applied to digital mammograms to identify breast masses and classify them into subclasses such as normal, benign, and malignant. Additionally, the paper highlights the advantages and limitations of existing techniques, offering valuable insights for future research in this critical area of medical imaging and breast health.
Affiliation(s)
- N Shankari
- NITTE (Deemed to be University), Department of Electronics and Communication Engineering, NMAM Institute of Technology, Nitte 574110, Karnataka, India
- Vidya Kudva
- School of Information Sciences, Manipal Academy of Higher Education, Manipal 576104, India; Nitte Mahalinga Adyanthaya Memorial Institute of Technology, Nitte 574110, India
- Roopa B Hegde
- NITTE (Deemed to be University), Department of Electronics and Communication Engineering, NMAM Institute of Technology, Nitte 574110, Karnataka, India
5
Verboom SD, Caballo M, Peters J, Gommers J, van den Oever D, Broeders MJM, Teuwen J, Sechopoulos I. Deep learning-based breast region segmentation in raw and processed digital mammograms: generalization across views and vendors. J Med Imaging (Bellingham) 2024;11:014001. PMID: 38162417; PMCID: PMC10753125; DOI: 10.1117/1.jmi.11.1.014001.
Abstract
Purpose: We developed a segmentation method suited for both raw (for processing) and processed (for presentation) digital mammograms (DMs), designed to generalize across images acquired with systems from different vendors and across the two standard screening views. Approach: A U-Net was trained to segment mammograms into background, breast, and pectoral muscle. Eight datasets were used, including two previously published public sets and six sets of DMs from as many different vendors, totaling 322 screen-film mammograms (SFMs) and 4251 DMs (2821 raw/processed pairs and 1430 processed only) from 1077 different women. Three experiments were done: first training on all SFM and processed images, second also including all raw images in training, and finally testing vendor generalization by leaving one dataset out at a time. Results: The model trained on SFM and processed mammograms achieved good overall performance regardless of projection and vendor, with a mean (± std. dev.) Dice score of 0.96 ± 0.06 for all datasets combined. When raw images were included in training, the mean Dice score was 0.95 ± 0.05 for the raw images and 0.96 ± 0.04 for the processed images. Testing on a dataset with processed DMs from a vendor excluded from training resulted in a difference in mean Dice ranging from -0.23 to +0.02 relative to the fully trained model. Conclusions: The proposed segmentation method yields accurate overall segmentation results for both raw and processed mammograms, independent of view and vendor. The code and model weights are made available.
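The Dice score used above to evaluate segmentation overlap can be computed as follows; the toy masks are invented for illustration and stand in for breast/pectoral segmentation masks.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0  # two empty masks agree

# Toy 4x4 masks: predicted region overlaps the target in 3 of its 4 pixels
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
# overlap 3 pixels, mask sizes 4 and 3 -> Dice = 2*3 / (4+3) = 6/7
```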
Affiliation(s)
- Sarah D. Verboom
- Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands
- Marco Caballo
- Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands
- Jim Peters
- Radboud University Medical Center, Department for Health Evidence, Nijmegen, The Netherlands
- Jessie Gommers
- Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands
- Daan van den Oever
- Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands
- Mireille J. M. Broeders
- Radboud University Medical Center, Department for Health Evidence, Nijmegen, The Netherlands
- Dutch Expert Centre for Screening (LRCB), Nijmegen, The Netherlands
- Jonas Teuwen
- Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands
- Netherlands Cancer Institute, Department of Radiation Oncology, Amsterdam, The Netherlands
- Memorial Sloan Kettering Cancer Center, Department of Radiology, New York, New York, United States
- Ioannis Sechopoulos
- Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands
- Dutch Expert Centre for Screening (LRCB), Nijmegen, The Netherlands
- University of Twente, Multi-Modality Medical Imaging, Enschede, The Netherlands
6
Best R, Wilkinson LS, Oliver-Williams C, Tolani F, Yates J. Should we share breast density information during breast cancer screening in the United Kingdom? An integrative review. Br J Radiol 2023;96:20230122. PMID: 37751169; PMCID: PMC10646652; DOI: 10.1259/bjr.20230122.
Abstract
OBJECTIVE: Dense breasts are an established risk factor for breast cancer and also reduce the sensitivity of mammograms. There is increasing public concern around breast density in the UK, with calls for this information to be shared at breast cancer screening. METHODS: We searched the PubMed database, Cochrane Library, and grey literature using broad search terms in October 2022. Two reviewers extracted data and assessed the risk of bias of each included study. The results were narratively synthesised under five research questions: desire for information, communication formats, psychological impact, knowledge impact, and behaviour change. RESULTS: We identified 19 studies: three randomised controlled trials (RCTs), three cohort studies, nine cross-sectional studies, one qualitative interview study, one mixed-methods study, and two 2021 systematic reviews. Nine studies were based in the United States of America (USA), five in Australia, two in the UK, and one in Croatia. One systematic review included 14 USA studies; the other included 27 USA studies, 1 Australian study, and 1 Canadian study. The overall GRADE evidence quality rating for each research question was very low to low. Generally, participants wanted to receive breast density information. Conversations with healthcare professionals were more valued and effective than letters. Breast density awareness after notification varied greatly between studies. Breast density information either did not affect the frequency of mammography screening or increased participants' intention to return for routine screening, as well as their intention to access, and uptake of, supplementary screening. People from ethnic minority groups or of lower socioeconomic status (SES) experienced greater confusion following notification and, along with those without healthcare insurance, were less likely to access supplementary screening.
CONCLUSION: Breast density-specific research in the UK, including different communities, is needed before the UK considers sharing breast density information at screening. There are also practical considerations around implementation and recording that need to be addressed. ADVANCES IN KNOWLEDGE: Currently, sharing breast density information at breast cancer screening in the UK may not be beneficial to participants and could widen inequalities. UK-specific research is needed, and measurement, communication, and future testing implications need to be carefully considered.
Affiliation(s)
- Rebecca Best
- NHS England Screening Quality Assurance Service, Health Education England, England, United Kingdom
- Clare Oliver-Williams
- NHS England Screening Quality Assurance Service, Health Education England, England, United Kingdom
- Foyeke Tolani
- Public Health Department, Bedford Borough Council, Bedford, United Kingdom
- Jan Yates
- NHS England Screening Quality Assurance Service, Health Education England, England, United Kingdom
7
Jiang X, Hu Z, Wang S, Zhang Y. Deep Learning for Medical Image-Based Cancer Diagnosis. Cancers (Basel) 2023;15:3608. PMID: 37509272; PMCID: PMC10377683; DOI: 10.3390/cancers15143608.
Abstract
(1) Background: Applying deep learning to cancer diagnosis from medical images is one of the research hotspots in artificial intelligence and computer vision. Deep learning methods are developing rapidly, cancer diagnosis requires high accuracy and timeliness, and medical imaging has inherent particularity and complexity, so a comprehensive review of relevant studies is necessary to help readers understand the current research status and ideas. (2) Methods: Five radiological imaging modalities (X-ray, ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET)), together with histopathological images, are reviewed in this paper. The basic architecture of deep learning and classical pretrained models are comprehensively reviewed. In particular, advanced neural network techniques emerging in recent years, including transfer learning, ensemble learning (EL), graph neural networks, and vision transformers (ViT), are introduced. Overfitting prevention methods are summarized, including batch normalization, dropout, weight initialization, and data augmentation. The application of deep learning in medical image-based cancer analysis is then sorted out. (3) Results: Deep learning has achieved great success in medical image-based cancer diagnosis, with good results in image classification, reconstruction, detection, segmentation, registration, and synthesis. However, the lack of high-quality labeled datasets limits its role, and challenges remain in rare cancer diagnosis, multi-modal image fusion, model explainability, and generalization. (4) Conclusions: More public standard cancer databases are needed. Pretrained deep neural network models have the potential to be improved, and special attention should be paid to multimodal data fusion and the supervised paradigm. Technologies such as ViT, ensemble learning, and few-shot learning are promising for medical image-based cancer diagnosis.
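As a small illustration of one of the overfitting-prevention methods named in this abstract, here is a minimal sketch of inverted dropout; the array shapes and rates are arbitrary, not taken from any reviewed paper.

```python
import numpy as np

def dropout(x, p=0.5, training=True, rng=None):
    """Inverted dropout: zero each activation with probability p and rescale
    survivors by 1/(1-p), so the expected activation is unchanged and no
    rescaling is needed at inference time."""
    if not training or p == 0.0:
        return x                      # identity at inference
    rng = rng if rng is not None else np.random.default_rng()
    mask = rng.random(x.shape) >= p   # keep with probability 1-p
    return x * mask / (1.0 - p)

activations = np.ones((1000, 100))
out = dropout(activations, p=0.5, rng=np.random.default_rng(0))
# Roughly half the entries are zeroed and the rest doubled; the mean stays near 1.
```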
Grants
- RM32G0178B8 BBSRC
- MC_PC_17171 MRC, UK
- RP202G0230 Royal Society, UK
- AA/18/3/34220 BHF, UK
- RM60G0680 Hope Foundation for Cancer Research, UK
- P202PF11 GCRF, UK
- RP202G0289 Sino-UK Industrial Fund, UK
- P202ED10, P202RE969 LIAS, UK
- P202RE237 Data Science Enhancement Fund, UK
- 24NN201 Fight for Sight, UK
- OP202006 Sino-UK Education Fund, UK
- 2023SJZD125 Major project of philosophy and social science research in colleges and universities in Jiangsu Province, China
Affiliation(s)
- Xiaoyan Jiang
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Zuojin Hu
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
8
Sexauer R, Hejduk P, Borkowski K, Ruppert C, Weikert T, Dellas S, Schmidt N. Diagnostic accuracy of automated ACR BI-RADS breast density classification using deep convolutional neural networks. Eur Radiol 2023;33:4589-4596. PMID: 36856841; PMCID: PMC10289992; DOI: 10.1007/s00330-023-09474-7.
Abstract
OBJECTIVES: High breast density is a well-known risk factor for breast cancer. This study aimed to develop and adapt two (MLO, CC) deep convolutional neural networks (DCNN) for automatic breast density classification on synthetic 2D tomosynthesis reconstructions. METHODS: In total, 4605 synthetic 2D images (1665 patients, age: 57 ± 37 years) were labeled according to the ACR (American College of Radiology) density categories (A-D). Two DCNNs, each with 11 convolutional layers and 3 fully connected layers, were trained with 70% of the data, and 20% was used for validation. The remaining 10% served as a separate test dataset of 460 images (380 patients). All mammograms in the test dataset were read blinded by two radiologists (reader 1 with two and reader 2 with 11 years of dedicated experience in breast imaging), and their consensus formed the reference standard. Inter- and intra-reader reliabilities were assessed with Cohen's kappa coefficients, and diagnostic accuracy measures of the automated classification were evaluated. RESULTS: The two models for the MLO and CC projections had a mean sensitivity of 80.4% (95% CI 72.2-86.9), a specificity of 89.3% (95% CI 85.4-92.3), and an accuracy of 89.6% (95% CI 88.1-90.9) in differentiating between ACR A/B and ACR C/D. DCNN-versus-human and inter-reader agreement were both "substantial" (Cohen's kappa: 0.61 versus 0.63). CONCLUSION: The DCNN allows accurate, standardized, and observer-independent classification of breast density based on the ACR BI-RADS system. KEY POINTS: • A DCNN performs on par with human experts in breast density assessment for synthetic 2D tomosynthesis reconstructions. • The proposed technique may be useful for accurate, standardized, and observer-independent breast density evaluation of tomosynthesis.
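The agreement statistic used above, Cohen's kappa, corrects observed agreement for chance agreement. A minimal sketch follows; the reader labels are invented toy data, not the study's.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' categorical labels (e.g. ACR A-D):
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(a) == len(b) and a
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # Chance agreement from each rater's marginal category frequencies
    expected = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))
    return (observed - expected) / (1 - expected)

reader1 = ["A", "B", "C", "D", "A", "B"]
reader2 = ["A", "B", "C", "A", "A", "B"]
# observed agreement 5/6, chance agreement 11/36 -> kappa = 19/25 = 0.76
```

By the conventional Landis-Koch scale, values of 0.61-0.80 are read as "substantial" agreement, which is how the abstract characterizes its kappas of 0.61 and 0.63.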
Affiliation(s)
- Raphael Sexauer
- Department of Radiology and Nuclear Medicine, University Hospital Basel, Petersgraben 4, CH-4031, Basel, Switzerland
- Patryk Hejduk
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistrasse 100, CH-8091, Zurich, Switzerland
- Karol Borkowski
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistrasse 100, CH-8091, Zurich, Switzerland
- Carlotta Ruppert
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistrasse 100, CH-8091, Zurich, Switzerland
- Thomas Weikert
- Department of Radiology and Nuclear Medicine, University Hospital Basel, Petersgraben 4, CH-4031, Basel, Switzerland
- Sophie Dellas
- Department of Radiology and Nuclear Medicine, University Hospital Basel, Petersgraben 4, CH-4031, Basel, Switzerland
- Noemi Schmidt
- Department of Radiology and Nuclear Medicine, University Hospital Basel, Petersgraben 4, CH-4031, Basel, Switzerland
9
Classification of Breast Lesions on DCE-MRI Data Using a Fine-Tuned MobileNet. Diagnostics (Basel) 2023;13:1067. PMID: 36980377; PMCID: PMC10047403; DOI: 10.3390/diagnostics13061067.
Abstract
It is crucial to diagnose breast cancer early and accurately to optimize treatment. At present, most deep learning models used for breast cancer detection cannot run on mobile phones or low-power devices. This study aimed to evaluate the capabilities of MobileNetV1 and MobileNetV2, and their fine-tuned variants, to differentiate malignant from benign lesions on breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI).
10
Das HS, Das A, Neog A, Mallik S, Bora K, Zhao Z. Breast cancer detection: Shallow convolutional neural network against deep convolutional neural networks based approach. Front Genet 2023;13:1097207. PMID: 36685963; PMCID: PMC9846574; DOI: 10.3389/fgene.2022.1097207.
Abstract
Introduction: Breast cancer (BC) is the most common cancer affecting women globally and, of all the cancers that afflict women, has the second-highest mortality rate. Breast tumors are of two types: benign (less harmful and unlikely to become breast cancer) and malignant (dangerous masses of aberrant cells that can result in cancer). Methods: To find breast abnormalities such as masses and micro-calcifications, experienced radiologists routinely examine mammographic images. This study focuses on computer-aided diagnosis to help radiologists make more precise diagnoses of breast cancer. It aims to compare the performance of proposed shallow convolutional neural network architectures with different specifications against pre-trained deep convolutional neural network architectures on mammography images. In a first approach, mammogram images are pre-processed and then fed to three shallow convolutional neural networks with representational differences. In a second approach, transfer learning via fine-tuning is used to feed the same collection of images into the pre-trained convolutional neural networks VGG19, ResNet50, MobileNet-v2, Inception-v3, Xception, and Inception-ResNet-v2. Results: In our experiments on two datasets, the accuracies for the CBIS-DDSM dataset were 80.4% and 89.2%, and for the INbreast dataset 87.8% and 95.1%. Discussion: The experimental findings indicate that the fine-tuned deep network-based approach outperforms all other state-of-the-art techniques on both datasets.
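The transfer-learning setup described above can be sketched schematically: freeze a pretrained feature extractor and train only a new classification head on the target task. Everything below is a stand-in (a random projection plays the frozen backbone, and the data and benign/malignant labels are synthetic); the actual study fine-tunes networks such as VGG19 and ResNet50 on mammograms.

```python
import numpy as np

rng = np.random.default_rng(1)

W_frozen = rng.normal(size=(64, 16))          # stand-in "pretrained backbone" weights
def backbone(x):
    return np.maximum(x @ W_frozen, 0.0)      # ReLU features; never updated

X = rng.normal(size=(200, 64))                     # synthetic "images"
y = (X @ rng.normal(size=64) > 0).astype(float)    # synthetic benign/malignant labels

F = backbone(X)                                    # extract frozen features once
F = (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-9)  # standardize for stable training

w, b = np.zeros(16), 0.0                      # new trainable classifier head
for _ in range(500):                          # logistic regression by gradient descent
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    grad = p - y
    w -= 0.1 * F.T @ grad / len(y)
    b -= 0.1 * grad.mean()

p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12)).mean()
acc = ((p > 0.5) == y).mean()
```

Full fine-tuning, as in the paper, additionally unfreezes some or all backbone layers at a small learning rate; the frozen-backbone variant here is the simplest case of the same idea.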
Affiliation(s)
- Himanish Shekhar Das
- Department of Computer Science and Information Technology, Cotton University, Guwahati, India
- Akalpita Das
- Department of Computer Science and Engineering, GIMT Guwahati, Guwahati, India
- Anupal Neog
- Department of AI and Machine Learning COE, IQVIA, Bengaluru, Karnataka, India
- Saurav Mallik
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Department of Environmental Health, Harvard T. H. Chan School of Public Health, Boston, MA, United States
- Department of Pharmacology and Toxicology, University of Arizona, Tucson, AZ, United States
- Kangkana Bora
- Department of Computer Science and Information Technology, Cotton University, Guwahati, India
- Zhongming Zhao
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Department of Pathology and Laboratory Medicine, McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX, United States
11
Ali KAS, Fateh SM. Mammographic breast density status in women aged more than 40 years in Sulaimaniyah, Iraq: a cross-sectional study. J Int Med Res 2022;50:3000605221139712. PMID: 36453636; PMCID: PMC9720814; DOI: 10.1177/03000605221139712.
Abstract
OBJECTIVE: Mammography is the gold-standard screening procedure for the early diagnosis of breast cancer. This study aimed to determine the distribution of breast density among women older than 40 years in Sulaimaniyah, Iraq, and to examine the correlations between breast density and various risk factors. METHODS: This cross-sectional study included 750 women who received routine mammographic breast screening at the Sulaimaniyah Breast Center. Bilateral standard two-view mammographic images (craniocaudal and mediolateral oblique projections) were acquired and reported using a picture archiving and communication system. American College of Radiology (ACR) Breast Imaging-Reporting and Data System (BI-RADS) categories C and D were considered dense. RESULTS: A total of 54.3% of breasts were classified as dense (ACR BI-RADS category C or D). Breast density was significantly associated with age, body mass index, a family history of breast cancer, and pre-menopausal status, and women with no history of breastfeeding were more likely to have dense breasts than those with partial or complete breastfeeding. CONCLUSIONS: This study revealed that women from Sulaimaniyah have a distinct breast-density profile at mammographic screening and may have a significantly increased risk of breast cancer.
Affiliation(s)
- Kalthum Abdullah Sofi Ali
- Department of Radiology/Surgery, College of Medicine, University of Sulaimani, 46001 Sulaimaniyah, Iraq
- Salah Muhammed Fateh
- Department of Radiology/Surgery, College of Medicine, University of Sulaimani, 46001 Sulaimaniyah, Iraq
12
Madani M, Behzadi MM, Nabavi S. The Role of Deep Learning in Advancing Breast Cancer Detection Using Different Imaging Modalities: A Systematic Review. Cancers (Basel) 2022;14:5334. PMID: 36358753; PMCID: PMC9655692; DOI: 10.3390/cancers14215334.
Abstract
Breast cancer is among the most common and fatal diseases for women, and no permanent treatment has been discovered. Thus, early detection is a crucial step in controlling and curing breast cancer and can save the lives of millions of women. For example, in 2020, more than 65% of breast cancer patients were diagnosed at an early stage of cancer, and all of them survived. Although early detection is the most effective approach for cancer treatment, breast cancer screening conducted by radiologists is very expensive and time-consuming. More importantly, conventional methods of analyzing breast cancer images suffer from high false-detection rates. Different breast cancer imaging modalities are used to extract and analyze the key features affecting the diagnosis and treatment of breast cancer. These imaging modalities can be divided into subgroups such as mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. Radiologists or pathologists analyze the images produced by these methods manually, which increases the risk of wrong decisions in cancer detection. Thus, new automatic methods to analyze all kinds of breast screening images and assist radiologists in interpreting them are required. Recently, artificial intelligence (AI) has been widely utilized to automatically improve the early detection and treatment of different types of cancer, specifically breast cancer, thereby enhancing patients' chance of survival. Advances in AI algorithms, such as deep learning, and the availability of datasets obtained from various imaging modalities have opened an opportunity to surpass the limitations of current breast cancer analysis methods. In this article, we first review breast cancer imaging modalities and their strengths and limitations. Then, we explore and summarize the most recent studies that employed AI in breast cancer detection using various breast imaging modalities. In addition, we report available datasets of breast-cancer imaging modalities, which are important for developing AI-based algorithms and training deep learning models. In conclusion, this review aims to provide a comprehensive resource for researchers working in breast cancer imaging analysis.
Affiliation(s)
- Mohammad Madani
- Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Mohammad Mahdi Behzadi
- Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Sheida Nabavi
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
13
Chalfant JS, Hoyt AC. Breast Density: Current Knowledge, Assessment Methods, and Clinical Implications. Journal of Breast Imaging 2022; 4:357-370. [PMID: 38416979] [DOI: 10.1093/jbi/wbac028]
Abstract
Breast density is an accepted independent risk factor for the future development of breast cancer, and greater breast density has the potential to mask malignancies on mammography, thus lowering the sensitivity of screening mammography. The risk associated with dense breast tissue has been shown to be modifiable with changes in breast density. Numerous studies have sought to identify factors that influence breast density, including age, genetic, racial/ethnic, prepubertal, adolescent, lifestyle, environmental, hormonal, and reproductive history factors. Qualitative, semiquantitative, and quantitative methods of breast density assessment have been developed, but to date there is no consensus assessment method or reference standard for breast density. Breast density has been incorporated into breast cancer risk models, and there is growing awareness of the clinical implications of dense breast tissue in both the medical community and the public arena. Efforts to improve breast cancer screening sensitivity for women with dense breasts have led to increased attention to supplemental screening methods in recent years, prompting the American College of Radiology to publish Appropriateness Criteria for supplemental screening based on breast density.
Affiliation(s)
- James S Chalfant
- David Geffen School of Medicine at University of California, Los Angeles, Department of Radiological Sciences, Santa Monica, CA, USA
- Anne C Hoyt
- David Geffen School of Medicine at University of California, Los Angeles, Department of Radiological Sciences, Santa Monica, CA, USA
14
Larroza A, Pérez-Benito FJ, Perez-Cortes JC, Román M, Pollán M, Pérez-Gómez B, Salas-Trejo D, Casals M, Llobet R. Breast Dense Tissue Segmentation with Noisy Labels: A Hybrid Threshold-Based and Mask-Based Approach. Diagnostics (Basel) 2022; 12:1822. [PMID: 36010173] [PMCID: PMC9406546] [DOI: 10.3390/diagnostics12081822]
Abstract
Breast density assessed from digital mammograms is a known biomarker related to a higher risk of developing breast cancer. Supervised learning algorithms have been implemented to determine this. However, the performance of these algorithms depends on the quality of the ground-truth information, which expert readers usually provide. These expert labels are noisy approximations of the ground truth, as there is both intra- and inter-observer variability among them. Thus, it is crucial to provide a reliable method to measure breast density from mammograms. This paper presents a fully automated method based on deep learning to estimate breast density, including breast detection, pectoral muscle exclusion, and dense tissue segmentation. We propose a novel Confusion Matrix YNet (CM-YNet) model for the segmentation step. This architecture includes networks to model each radiologist's noisy label and outputs the estimated ground-truth segmentation as well as two parameters that allow interaction with a threshold-based labeling tool. A multi-center study involving 1785 women whose "for presentation" mammograms were obtained from 11 different medical facilities was performed. A total of 2496 mammograms were used as the training corpus, and 844 formed the testing corpus. Additionally, we included a totally independent dataset from a different center, composed of 381 women with one image per patient. Each mammogram was labeled independently by two expert radiologists using a threshold-based tool. The implemented CM-YNet model achieved the highest DICE score averaged over both test datasets (0.82±0.14) when compared with the closest dense-tissue segmentation from either radiologist. The level of concordance between the two radiologists showed a DICE score of 0.76±0.17. An automatic breast density estimator based on deep learning exhibited higher performance than two experienced radiologists.
This suggests that modeling each radiologist’s label allows for better estimation of the unknown ground-truth segmentation. The advantage of the proposed model is that it also provides the threshold parameters that enable user interaction with a threshold-based tool.
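The DICE score reported above measures overlap between two segmentations. As a minimal, dependency-free illustration (not the paper's code; the mask values below are invented), the coefficient can be computed for flat binary masks as:

```python
# Hedged sketch: Dice similarity coefficient (DSC) between two binary
# segmentation masks, as used above to compare dense-tissue segmentations.
# Flat 0/1 lists stand in for real mask arrays.

def dice(mask_a, mask_b):
    """DSC = 2*|A intersect B| / (|A| + |B|) for flat binary masks."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 1.0 if total == 0 else 2.0 * inter / total

a = [1, 1, 0, 0, 1]
b = [1, 0, 0, 1, 1]
print(round(dice(a, b), 3))  # 2*2 / (3+3) = 0.667
```

A score of 1.0 means the masks agree pixel-for-pixel; the paper's 0.82±0.14 therefore indicates overlap closer to the readers than the 0.76±0.17 inter-reader agreement.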
Affiliation(s)
- Andrés Larroza
- Instituto Tecnológico de la Informática, Universitat Politècnica de València, Camino de Vera, 46022 València, Spain
- Francisco Javier Pérez-Benito
- Instituto Tecnológico de la Informática, Universitat Politècnica de València, Camino de Vera, 46022 València, Spain
- Juan-Carlos Perez-Cortes
- Instituto Tecnológico de la Informática, Universitat Politècnica de València, Camino de Vera, 46022 València, Spain
- Marta Román
- Department of Epidemiology and Evaluation, IMIM (Hospital del Mar Medical Research Institute), Passeig Marítim 25–29, 08003 Barcelona, Spain
- Marina Pollán
- National Center for Epidemiology, Carlos III Institute of Health, Monforte de Lemos, 5, 28029 Madrid, Spain
- Consortium for Biomedical Research in Epidemiology and Public Health (CIBER en Epidemiología y Salud Pública - CIBERESP), Carlos III Institute of Health, Monforte de Lemos, 5, 28029 Madrid, Spain
- Beatriz Pérez-Gómez
- National Center for Epidemiology, Carlos III Institute of Health, Monforte de Lemos, 5, 28029 Madrid, Spain
- Consortium for Biomedical Research in Epidemiology and Public Health (CIBER en Epidemiología y Salud Pública - CIBERESP), Carlos III Institute of Health, Monforte de Lemos, 5, 28029 Madrid, Spain
- Dolores Salas-Trejo
- Valencian Breast Cancer Screening Program, General Directorate of Public Health, 46022 València, Spain
- Centro Superior de Investigación en Salud Pública, CSISP, FISABIO, 46020 València, Spain
- María Casals
- Valencian Breast Cancer Screening Program, General Directorate of Public Health, 46022 València, Spain
- Centro Superior de Investigación en Salud Pública, CSISP, FISABIO, 46020 València, Spain
- Rafael Llobet
- Instituto Tecnológico de la Informática, Universitat Politècnica de València, Camino de Vera, 46022 València, Spain
15
AlEisa HN, Touiti W, ALHussan AA, Ben Aoun N, Ejbali R, Zaied M, Saadia A. Breast Cancer Classification Using FCN and Beta Wavelet Autoencoder. Computational Intelligence and Neuroscience 2022; 2022:8044887. [PMID: 35785059] [PMCID: PMC9246636] [DOI: 10.1155/2022/8044887]
Abstract
In this paper, a new classification approach for breast cancer based on Fully Convolutional Networks (FCNs) and a Beta Wavelet Autoencoder (BWAE) is presented. The FCN, a powerful image segmentation model, is used to extract the relevant information from mammography images and identify the zones to model, while the BWAE is used to model the extracted information for these zones. BWAE has proven its superiority to the majority of feature extraction approaches. The fusion of these two techniques improved the feature extraction phase by keeping and modeling only the features that are relevant and useful for the identification and description of breast masses. The experimental results showed the effectiveness of the proposed method, which gave very encouraging results in comparison with state-of-the-art approaches on the same mammographic image database. A precision rate of 94% for benign and 93% for malignant cases was achieved, with a recall rate of 92% for benign and 95% for malignant cases. For normal cases, a rate of 100% was reached.
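The precision and recall rates quoted above follow the standard definitions over true/false positives and false negatives. A minimal sketch with invented counts (not the study's actual confusion matrix):

```python
# Hedged sketch: per-class precision and recall from prediction counts.
# The counts below are placeholders for illustration only.

def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# e.g. a malignant class with 95 true positives, 7 false positives,
# and 5 false negatives:
p, r = precision_recall(tp=95, fp=7, fn=5)
print(round(p, 3), round(r, 3))  # 0.931 0.95
```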
Affiliation(s)
- Hussah Nasser AlEisa
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Wajdi Touiti
- Research Team in Intelligent Machines, National School of Engineers of Gabes, B. P. W 6072, Gabes, Tunisia
- Amel Ali ALHussan
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Najib Ben Aoun
- College of Computer Science and Information Technology, Al Baha University, Al Baha, Saudi Arabia
- REGIM-Lab, Research Groups in Intelligent Machines, National School of Engineers of Sfax (ENIS), University of Sfax, Sfax, Tunisia
- Ridha Ejbali
- Research Team in Intelligent Machines, National School of Engineers of Gabes, B. P. W 6072, Gabes, Tunisia
- Mourad Zaied
- Research Team in Intelligent Machines, National School of Engineers of Gabes, B. P. W 6072, Gabes, Tunisia
- Ayesha Saadia
- Department of Computer Science, Faculty of Computing and Artificial Intelligence, Air University, PAF Complex, Islamabad, Pakistan
16
Shape-Based Breast Lesion Classification Using Digital Tomosynthesis Images: The Role of Explainable Artificial Intelligence. Applied Sciences 2022. [DOI: 10.3390/app12126230]
Abstract
Computer-aided diagnosis (CAD) systems can help radiologists in numerous medical tasks, including classification and staging of various diseases. The 3D tomosynthesis imaging technique adds value to CAD systems in the diagnosis and classification of breast lesions. Several convolutional neural network (CNN) architectures have been proposed to classify lesion shapes into their respective classes using similar imaging methods. However, both the black-box nature of these CNN models and morphology-based cancer classification are of concern to clinicians in the healthcare domain. As a result, this study proposes a mathematically and visually explainable deep-learning-driven multiclass shape-based classification framework for tomosynthesis breast lesion images. The authors exploit eight pretrained CNN architectures for the classification task on previously extracted region-of-interest images containing the lesions. Additionally, the study opens up the black-box nature of the deep learning models using two well-known perceptive explainable artificial intelligence (XAI) algorithms, Grad-CAM and LIME. Moreover, two mathematical-structure-based interpretability techniques, t-SNE and UMAP, are employed to investigate the pretrained models' behavior in multiclass feature clustering. The experimental results of the classification task validate the applicability of the proposed framework by yielding a mean area under the curve of 98.2%. The explainability study validates the applicability of all employed methods, mainly emphasizing the pros and cons of the Grad-CAM and LIME methods, which can provide useful insights towards explainable CAD systems.
17
Lopez-Almazan H, Pérez-Benito FJ, Larroza A, Perez-Cortes JC, Pollan M, Perez-Gomez B, Salas Trejo D, Casals M, Llobet R. A deep learning framework to classify breast density with noisy labels regularization. Computer Methods and Programs in Biomedicine 2022; 221:106885. [PMID: 35594581] [DOI: 10.1016/j.cmpb.2022.106885]
Abstract
BACKGROUND AND OBJECTIVE Breast density assessed from digital mammograms is a biomarker for a higher risk of developing breast cancer. Experienced radiologists assess breast density using the Breast Imaging Reporting and Data System (BI-RADS) categories. Supervised learning algorithms have been developed with this objective in mind; however, their performance depends on the quality of the ground-truth information, which is usually labeled by expert readers. These labels are noisy approximations of the ground truth, as there is often intra- and inter-reader variability among labels. Thus, it is crucial to provide a reliable method for matching digital mammograms to BI-RADS categories. This paper presents RegL (Labels Regularizer), a methodology that includes different image pre-processes to allow both correct breast segmentation and enhancement of image quality through an intensity adjustment, thus allowing the use of deep learning to classify the mammograms into BI-RADS categories. The Confusion Matrix CNN (CM-CNN) network used implements an architecture that models each radiologist's noisy label. The final methodology pipeline was determined after comparing the performance of image pre-processes combined with different deep learning architectures. METHODS A multi-center study composed of 1395 women whose mammograms were classified into the four BI-RADS categories by three experienced radiologists is presented. A total of 892 mammograms were used as the training corpus, 224 formed the validation corpus, and 279 the test corpus. RESULTS The combination of five networks implementing the RegL methodology achieved the best results among all the models in the test set. The ensemble model obtained an accuracy of 0.85 and a kappa index of 0.71. CONCLUSIONS The proposed methodology performs similarly to experienced radiologists in the classification of digital mammograms into BI-RADS categories.
This suggests that the pre-processing steps and modelling of each radiologist's label allows for a better estimation of the unknown ground truth labels.
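The kappa index reported above is Cohen's kappa, agreement between two label sets corrected for chance. A minimal, dependency-free sketch with invented BI-RADS labels (not data from the study):

```python
# Hedged sketch: Cohen's kappa between model-assigned and reader-assigned
# BI-RADS density categories. The label lists are illustrative placeholders.
from collections import Counter

def cohens_kappa(y1, y2):
    """kappa = (p_o - p_e) / (1 - p_e), with p_e from marginal label counts."""
    n = len(y1)
    po = sum(a == b for a, b in zip(y1, y2)) / n          # observed agreement
    c1, c2 = Counter(y1), Counter(y2)
    pe = sum(c1[k] * c2[k] for k in set(y1) | set(y2)) / (n * n)  # chance
    return (po - pe) / (1 - pe)

model  = ["A", "B", "B", "C", "D", "C"]
reader = ["A", "B", "C", "C", "D", "C"]
print(round(cohens_kappa(model, reader), 3))  # 0.769
```

Values above roughly 0.6 are conventionally read as substantial agreement, which is how the study's 0.71 supports the "similar performance to radiologists" conclusion.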
Affiliation(s)
- Hector Lopez-Almazan
- Instituto Tecnológico de la Informática, Universitat Politècnica de València, Camino de Vera, s/n, 46022 València, Spain
- Francisco Javier Pérez-Benito
- Instituto Tecnológico de la Informática, Universitat Politècnica de València, Camino de Vera, s/n, 46022 València, Spain
- Andrés Larroza
- Instituto Tecnológico de la Informática, Universitat Politècnica de València, Camino de Vera, s/n, 46022 València, Spain
- Juan-Carlos Perez-Cortes
- Instituto Tecnológico de la Informática, Universitat Politècnica de València, Camino de Vera, s/n, 46022 València, Spain
- Marina Pollan
- National Center for Epidemiology, Carlos III Institute of Health, Monforte de Lemos, 5, 28029 Madrid, Spain; Consortium for Biomedical Research in Epidemiology and Public Health (CIBER en Epidemiología y Salud Pública - CIBERESP), Carlos III Institute of Health, Monforte de Lemos, 5, 28029 Madrid, Spain
- Beatriz Perez-Gomez
- National Center for Epidemiology, Carlos III Institute of Health, Monforte de Lemos, 5, 28029 Madrid, Spain; Consortium for Biomedical Research in Epidemiology and Public Health (CIBER en Epidemiología y Salud Pública - CIBERESP), Carlos III Institute of Health, Monforte de Lemos, 5, 28029 Madrid, Spain
- Dolores Salas Trejo
- Valencian Breast Cancer Screening Program, General Directorate of Public Health, València, Spain; Centro Superior de Investigación en Salud Pública, CSISP, FISABIO, València, Spain
- María Casals
- Valencian Breast Cancer Screening Program, General Directorate of Public Health, València, Spain; Centro Superior de Investigación en Salud Pública, CSISP, FISABIO, València, Spain
- Rafael Llobet
- Instituto Tecnológico de la Informática, Universitat Politècnica de València, Camino de Vera, s/n, 46022 València, Spain
18
Classifying Breast Density from Mammogram with Pretrained CNNs and Weighted Average Ensembles. Applied Sciences 2022. [DOI: 10.3390/app12115599]
Abstract
We are currently experiencing a revolution in data production and artificial intelligence (AI) applications. Data are produced much faster than they can be consumed, so there is an urgent need to develop AI algorithms for all aspects of modern life, and the medical field is a fertile one in which to apply AI techniques. Breast cancer is one of the most common cancers and a leading cause of death around the world; early detection is critical to treating the disease effectively. Breast density, the amount of fibrous and glandular tissue compared with the amount of fatty tissue in the breast, plays a significant role in determining the likelihood and risk of breast cancer. Breast density is categorized using the ACR BI-RADS system, which assigns it to one of four classes: in class A, breasts are almost entirely fatty; in class B, scattered areas of fibroglandular density appear; in class C, the breasts are heterogeneously dense; and in class D, the breasts are extremely dense. This paper applies pre-trained Convolutional Neural Networks (CNNs) to a local mammogram dataset to classify breast density. Several transfer learning models were tested on a dataset consisting of more than 800 mammogram screenings from King Abdulaziz Medical City (KAMC). Inception V3, EfficientNet 2B0, and Xception gave the highest accuracy for both four- and two-class classification. To enhance the accuracy of density classification, we applied weighted average ensembles, and performance visibly improved: the overall accuracy of ACR classification with weighted average ensembles was 78.11%.
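A weighted average ensemble, as described above, combines each model's class probabilities with fixed weights before taking the argmax. A hedged sketch with made-up probabilities and weights (the study does not publish its weights):

```python
# Hedged sketch: weighted averaging of per-class probabilities from several
# pretrained CNNs. Model outputs and weights below are invented placeholders.

def weighted_ensemble(prob_lists, weights):
    """Combine per-model class-probability vectors with fixed weights."""
    total_w = sum(weights)
    n_classes = len(prob_lists[0])
    return [
        sum(w * p[c] for w, p in zip(weights, prob_lists)) / total_w
        for c in range(n_classes)
    ]

# Probabilities over the four BI-RADS density classes A-D from three models.
p1 = [0.10, 0.60, 0.20, 0.10]
p2 = [0.05, 0.70, 0.15, 0.10]
p3 = [0.20, 0.40, 0.30, 0.10]
fused = weighted_ensemble([p1, p2, p3], weights=[0.5, 0.3, 0.2])
print(max(range(4), key=fused.__getitem__))  # predicted class index: 1 (class B)
```

Weighting lets a stronger model dominate the vote while weaker models still correct its occasional mistakes, which is why the fused accuracy can exceed any single model's.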
19
Gastounioti A, Desai S, Ahluwalia VS, Conant EF, Kontos D. Artificial intelligence in mammographic phenotyping of breast cancer risk: a narrative review. Breast Cancer Res 2022; 24:14. [PMID: 35184757] [PMCID: PMC8859891] [DOI: 10.1186/s13058-022-01509-z]
Abstract
BACKGROUND Improved breast cancer risk assessment models are needed to enable personalized screening strategies that achieve better harm-to-benefit ratio based on earlier detection and better breast cancer outcomes than existing screening guidelines. Computational mammographic phenotypes have demonstrated a promising role in breast cancer risk prediction. With the recent exponential growth of computational efficiency, the artificial intelligence (AI) revolution, driven by the introduction of deep learning, has expanded the utility of imaging in predictive models. Consequently, AI-based imaging-derived data has led to some of the most promising tools for precision breast cancer screening. MAIN BODY This review aims to synthesize the current state-of-the-art applications of AI in mammographic phenotyping of breast cancer risk. We discuss the fundamentals of AI and explore the computing advancements that have made AI-based image analysis essential in refining breast cancer risk assessment. Specifically, we discuss the use of data derived from digital mammography as well as digital breast tomosynthesis. Different aspects of breast cancer risk assessment are targeted including (a) robust and reproducible evaluations of breast density, a well-established breast cancer risk factor, (b) assessment of a woman's inherent breast cancer risk, and (c) identification of women who are likely to be diagnosed with breast cancers after a negative or routine screen due to masking or the rapid and aggressive growth of a tumor. Lastly, we discuss AI challenges unique to the computational analysis of mammographic imaging as well as future directions for this promising research field. 
CONCLUSIONS We provide a useful reference for AI researchers investigating image-based breast cancer risk assessment, while indicating key priorities and challenges that, if properly addressed, could accelerate the implementation of AI-assisted risk stratification to further refine and individualize breast cancer screening strategies.
Affiliation(s)
- Aimilia Gastounioti
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA; Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Shyam Desai
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA
- Vinayak S Ahluwalia
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA; Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Emily F Conant
- Department of Radiology, Hospital of the University of Pennsylvania, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Despina Kontos
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA
20
Landsmann A, Wieler J, Hejduk P, Ciritsis A, Borkowski K, Rossi C, Boss A. Applied Machine Learning in Spiral Breast-CT: Can We Train a Deep Convolutional Neural Network for Automatic, Standardized and Observer Independent Classification of Breast Density? Diagnostics (Basel) 2022; 12:181. [PMID: 35054348] [PMCID: PMC8775263] [DOI: 10.3390/diagnostics12010181]
Abstract
The aim of this study was to investigate the potential of a machine learning algorithm to accurately classify parenchymal density in spiral breast-CT (BCT), using a deep convolutional neural network (dCNN). In this retrospectively designed study, 634 examinations of 317 patients were included. After image selection and preparation, 5589 images from 634 different BCT examinations were sorted by a four-level density scale, ranging from A to D, using ACR BI-RADS-like criteria. Subsequently four different dCNN models (differences in optimizer and spatial resolution) were trained (70% of data), validated (20%) and tested on a “real-world” dataset (10%). Moreover, dCNN accuracy was compared to a human readout. The overall performance of the model with lowest resolution of input data was highest, reaching an accuracy on the “real-world” dataset of 85.8%. The intra-class correlation of the dCNN and the two readers was almost perfect (0.92) and kappa values between both readers and the dCNN were substantial (0.71–0.76). Moreover, the diagnostic performance between the readers and the dCNN showed very good correspondence with an AUC of 0.89. Artificial Intelligence in the form of a dCNN can be used for standardized, observer-independent and reliable classification of parenchymal density in a BCT examination.
21
Convolutional Neural Networks for Breast Density Classification: Performance and Explanation Insights. Applied Sciences 2021. [DOI: 10.3390/app12010148]
Abstract
We propose and evaluate a procedure for the explainability of a deep-learning-based breast density classifier. A total of 1662 mammography exams labeled according to the BI-RADS categories of breast density was used. We built a residual Convolutional Neural Network, trained it, and studied the responses of the model to input changes, such as different distributions of class labels in training and test sets and suitable image pre-processing. The aim was to identify the steps of the analysis with a relevant impact on classifier performance and on model explainability. We used the grad-CAM algorithm to produce saliency maps for the CNN and computed the Spearman's rank correlation between input images and saliency maps as a measure of explanation accuracy. We found that pre-processing is critical not only for the accuracy, precision, and recall of a model but also for a reasonable explanation of the model itself. Our CNN reaches good performance compared with the state of the art, and it considers the dense pattern to make the classification; saliency maps strongly correlate with the dense pattern. This work is a starting point towards the implementation of a standard framework to evaluate both CNN performance and the explainability of CNN predictions in medical image classification problems.
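The explanation-accuracy measure described above is a Spearman rank correlation between a flattened input image and its saliency map. A dependency-free sketch with invented pixel values (tied values receive average ranks, as in the standard definition):

```python
# Hedged sketch: Spearman rank correlation between an image and its
# grad-CAM saliency map, both flattened. Pixel values are illustrative.

def _ranks(xs):
    """1-based ranks with average ranks assigned to tie groups."""
    order = sorted(range(len(xs)), key=xs.__getitem__)
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Pearson correlation of the rank vectors of x and y."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

image    = [0.1, 0.4, 0.9, 0.3]
saliency = [0.0, 0.5, 0.8, 0.2]
print(round(spearman(image, saliency), 3))  # concordant ranks: 1.0
```

Because it depends only on ranks, the measure is insensitive to the different intensity scales of images and saliency maps, which is presumably why the authors chose it over a plain Pearson correlation.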
22
Ryan F, Román KLL, Gerbolés BZ, Rebescher KM, Txurio MS, Ugarte RC, González MJG, Oliver IM. Unsupervised domain adaptation for the segmentation of breast tissue in mammography images. Computer Methods and Programs in Biomedicine 2021; 211:106368. [PMID: 34537490] [DOI: 10.1016/j.cmpb.2021.106368]
Abstract
BACKGROUND AND OBJECTIVE Breast density refers to the proportion of glandular and fatty tissue in the breast and is recognized as a useful factor in assessing breast cancer risk. Moreover, segmentation of the high-density glandular tissue from mammograms can assist medical professionals in visualizing and localizing areas that may require additional attention. Developing robust methods to segment breast tissues is challenging due to variations in mammographic acquisition systems and protocols. Deep learning methods are effective in medical image segmentation, but they often require large quantities of labelled data. Unsupervised domain adaptation is an area of research that employs unlabelled data to improve model performance on variations of samples derived from different sources. METHODS First, a U-Net architecture was used to perform segmentation of the fatty and glandular tissues with labelled data from a single acquisition device. Then, adversarial-based unsupervised domain adaptation methods were used to incorporate single unlabelled target domains, consisting of images from a different machine, into the training. Finally, the domain adaptation model was extended to include multiple unlabelled target domains by combining a reconstruction task with adversarial training. RESULTS The adversarial training was found to improve the generalization of the initial model on new domain data, demonstrating clearly improved segmentation of the breast tissues. For training with multiple unlabelled domains, combining a reconstruction task with adversarial training improved the stability of the training and yielded adequate segmentation results across all domains with a single model. CONCLUSIONS The results demonstrated the potential of adversarial-based domain adaptation with U-Net architectures for segmentation of breast tissue in mammograms coming from several devices, and showed that domain-adapted models can achieve agreement similar to that of manual segmentations.
It has also been found that combining adversarial and reconstruction-based methods can provide a simple and effective solution for training with multiple unlabelled target domains.
23
Yu X, Zhou Q, Wang S, Zhang Y. A systematic survey of deep learning in breast cancer. Int J Intell Syst 2021. [DOI: 10.1002/int.22622]
Affiliation(s)
- Xiang Yu
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, Leicestershire, UK
- Qinghua Zhou
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, Leicestershire, UK
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, Leicestershire, UK
- Yu-Dong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, Leicestershire, UK
24
Breast Cancer Segmentation Methods: Current Status and Future Potentials. BioMed Research International 2021; 2021:9962109. [PMID: 34337066] [PMCID: PMC8321730] [DOI: 10.1155/2021/9962109]
Abstract
Early breast cancer detection is one of the most important issues to address worldwide, as it can increase patient survival rates. Mammograms have long been used to detect breast cancer in its early stages, and detection at an early stage can drastically reduce treatment costs. Detecting tumours in the breast depends on segmentation techniques. Segmentation plays a significant role in image analysis, underpinning detection, feature extraction, classification, and treatment; it also helps physicians quantify the volume of breast tissue for treatment planning. In this work, we group segmentation methods into three categories: classical segmentation, which includes region-, threshold-, and edge-based methods; machine learning segmentation, both supervised and unsupervised; and deep learning segmentation. Our findings revealed that region-based segmentation is the most common classical approach, with region growing the most frequently used technique, and that the median filter is a robust tool for noise removal. The MIAS database is the one most often used with classical segmentation methods. Among machine learning approaches, unsupervised methods are used more frequently. For deep learning, U-Net is the model most often applied to mammogram segmentation, both because it requires fewer annotated images than other deep learning models and because high-performance GPU computing makes it practical to train networks with more layers. The reviewed papers also showed that a deep learning model can be trained without any preprocessing or postprocessing.
Additionally, we identified the mammogram databases in widest use: 3 public and 28 private.
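Since the abstract singles out region growing as the most frequently used classical technique, a minimal sketch may help illustrate the idea; the function name, tolerance parameter, and toy image below are illustrative assumptions, not taken from the reviewed papers.

```python
from collections import deque
import numpy as np

def region_grow(image, seed, tol=10):
    """Classical region growing: starting from a seed pixel, absorb
    4-connected neighbours whose intensity lies within `tol` of the
    seed's intensity, and return the resulting boolean mask."""
    h, w = image.shape
    seed_val = int(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(int(image[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

# Toy "mammogram": a bright 3x3 lesion on a dark background.
img = np.zeros((6, 6), dtype=np.uint8)
img[1:4, 1:4] = 200
lesion = region_grow(img, seed=(2, 2), tol=10)
```

The tolerance here plays the role of the homogeneity criterion that region-based methods use to decide whether a neighbouring pixel belongs to the growing region.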
Collapse
|
25
|
A Review of Explainable Deep Learning Cancer Detection Models in Medical Imaging. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11104573] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
In recent years, deep learning has demonstrated remarkable accuracy in analyzing images for cancer detection tasks. The accuracy achieved rivals that of radiologists and is suitable for implementation as a clinical tool. A significant problem, however, is that these models are black-box algorithms and therefore intrinsically unexplainable. The lack of trust and transparency characteristic of black-box algorithms creates a barrier to clinical implementation. Additionally, recent regulations prevent the deployment of unexplainable models in clinical settings, which further demonstrates the need for explainability. To mitigate these concerns, recent studies have attempted to overcome these issues by modifying deep learning architectures or providing after-the-fact explanations. Here we present a review of the deep learning explanation literature focused on cancer detection using MR images, discuss the gap between what clinicians deem explainable and what current methods provide, and offer suggestions for closing this gap.
Collapse
|
26
|
Laoveeravat P, Abhyankar PR, Brenner AR, Gabr MM, Habr FG, Atsawarungruangkit A. Artificial intelligence for pancreatic cancer detection: Recent development and future direction. Artif Intell Gastroenterol 2021; 2:56-68. [DOI: 10.35712/aig.v2.i2.56] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Revised: 03/31/2021] [Accepted: 04/20/2021] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence (AI) has been increasingly utilized in medical applications, especially in gastroenterology. AI can assist gastroenterologists in imaging-based testing and in predicting clinical diagnoses; examples include detecting polyps during colonoscopy, identifying small bowel lesions in capsule endoscopy images, and predicting liver diseases from clinical parameters. Given its high mortality rate, pancreatic cancer stands to benefit greatly from AI, since early detection of small lesions is difficult with conventional imaging techniques and current biomarkers. Endoscopic ultrasound (EUS) is the main diagnostic tool, with high sensitivity for pancreatic adenocarcinoma and pancreatic cystic lesions, but the standard tumor markers have not been effective for diagnosis. Recent research has applied AI to EUS and to novel biomarkers for the early detection and differentiation of malignant pancreatic lesions, with impressive findings compared with the available traditional methods. Herein, we aim to explore the utility of AI in EUS and of novel serum and cyst-fluid biomarkers for pancreatic cancer detection.
Collapse
Affiliation(s)
- Passisd Laoveeravat
- Division of Digestive Diseases and Nutrition, University of Kentucky College of Medicine, Lexington, KY 40536, United States
| | - Priya R Abhyankar
- Department of Internal Medicine, University of Kentucky College of Medicine, Lexington, KY 40536, United States
| | - Aaron R Brenner
- Department of Internal Medicine, University of Kentucky College of Medicine, Lexington, KY 40536, United States
| | - Moamen M Gabr
- Division of Digestive Diseases and Nutrition, University of Kentucky College of Medicine, Lexington, KY 40536, United States
| | - Fadlallah G Habr
- Division of Gastroenterology, Warren Alpert Medical School of Brown University, Providence, RI 02903, United States
| | - Amporn Atsawarungruangkit
- Division of Gastroenterology, Warren Alpert Medical School of Brown University, Providence, RI 02903, United States
| |
Collapse
|