1
Faur IF, Dobrescu A, Clim IA, Pasca P, Prodan-Barbulescu C, Tarta C, Neamtu C, Isaic A, Brebu D, Braicu V, Feier CVI, Duta C, Totolici B. Sentinel Lymph Node Biopsy in Breast Cancer Using Different Types of Tracers According to Molecular Subtypes and Breast Density-A Randomized Clinical Study. Diagnostics (Basel) 2024;14:2439. PMID: 39518406; PMCID: PMC11545725; DOI: 10.3390/diagnostics14212439.
Abstract
Background: Sentinel lymph node biopsy (SLNB) has become an increasingly common procedure in early-stage, loco-regional breast cancer. Since the first reports on the technical feasibility of the sentinel node method in breast cancer, published by Krag (1993) and Giuliano (1994), the method has undergone numerous refinements and achieved worldwide adoption. Methods: This prospective study took place at the SJUPBT Surgery Clinic Timisoara over a period of 1 year, between July 2023 and July 2024, during which 137 patients underwent SLNB based on current guidelines. For the identification of sentinel lymph nodes, we used various methods, including single, dual, and triple tracers. Results: Breast density represents a predictive biomarker for the sentinel node identification rate (IR) and is directly correlated with BMI (above 30 kg/m2) and age above 50 years. Classifying patients according to breast density is an important criterion, given that adipose breast density (Tabar-Gram I-II) is associated with a lower IR of SLN than fibro-nodular density (Tabar-Gram III-V). We did not obtain statistically significant results for the linear correlations between IR and molecular profile, whether for the luminal subtypes (Luminal A and Luminal B) or the non-luminal ones (HER2+ and TNBC): p > 0.05, 0.201 [0.88, 0.167]; z = 1.82.
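The density-group comparison of identification rates lends itself to a standard test of proportions. The sketch below uses scipy's chi-squared test on hypothetical counts; the study's per-group tallies are not given in the abstract, so the numbers are purely illustrative.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: SLN identified (yes/no) by density group
# (Tabar-Gram I-II = adipose vs. III-V = fibro-nodular). Counts are
# illustrative only and do not come from the study.
table = np.array([
    [52, 11],  # adipose: identified, not identified
    [70,  4],  # fibro-nodular: identified, not identified
])

chi2, p, dof, expected = chi2_contingency(table)
ir_adipose = table[0, 0] / table[0].sum()
ir_fibro = table[1, 0] / table[1].sum()
print(f"IR adipose: {ir_adipose:.2%}, IR fibro-nodular: {ir_fibro:.2%}")
print(f"chi2 = {chi2:.2f}, p = {p:.4f} (dof = {dof})")
```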
Affiliation(s)
- Ionut Flaviu Faur: II Surgery Clinic, Timisoara Emergency County Hospital, 300723 Timisoara, Romania; X Department of General Surgery, “Victor Babes” University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania; Multidisciplinary Doctoral School “Vasile Goldiș”, Western University of Arad, 310025 Arad, Romania
- Amadeus Dobrescu, Paul Pasca, Cristi Tarta, Alexandru Isaic, Dan Brebu, Vlad Braicu, and Ciprian Duta: II Surgery Clinic, Timisoara Emergency County Hospital, 300723 Timisoara, Romania; X Department of General Surgery, “Victor Babes” University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania
- Ioana Adelina Clim: II Obstetrics and Gynecology Clinic “Dominic Stanca”, 400124 Cluj-Napoca, Romania
- Catalin Prodan-Barbulescu: II Surgery Clinic, Timisoara Emergency County Hospital, 300723 Timisoara, Romania; Department I, Discipline of Anatomy and Embryology, “Victor Babes” University of Medicine and Pharmacy, 300041 Timisoara, Romania; Doctoral School, “Victor Babes” University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square 2, 300041 Timisoara, Romania
- Carmen Neamtu: Faculty of Dentistry, “Vasile Goldiș” Western University of Arad, 310025 Arad, Romania; I Clinic of General Surgery, Arad County Emergency Clinical Hospital, 310158 Arad, Romania
- Catalin Vladut Ionut Feier: X Department of General Surgery, “Victor Babes” University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania; First Surgery Clinic, “Pius Brinzeu” Clinical Emergency Hospital, 300723 Timisoara, Romania
- Bogdan Totolici: I Clinic of General Surgery, Arad County Emergency Clinical Hospital, 310158 Arad, Romania; Department of General Surgery, Faculty of Medicine, “Vasile Goldiș” Western University of Arad, 310025 Arad, Romania
2
Ripaud E, Jailin C, Quintana GI, Milioni de Carvalho P, Sanchez de la Rosa R, Vancamberg L. Deep-learning model for background parenchymal enhancement classification in contrast-enhanced mammography. Phys Med Biol 2024;69:115013. PMID: 38657641; DOI: 10.1088/1361-6560/ad42ff.
Abstract
Background. Breast background parenchymal enhancement (BPE) is correlated with the risk of breast cancer. BPE level is currently assessed by radiologists in contrast-enhanced mammography (CEM) using four classes: minimal, mild, moderate, and marked, as described in the Breast Imaging Reporting and Data System (BI-RADS). However, BPE classification remains subject to intra- and inter-reader variability. Fully automated methods to assess BPE level have already been developed for breast contrast-enhanced MRI (CE-MRI) and have been shown to provide accurate and repeatable BPE level classification. However, to our knowledge, no BPE level classification tool is available in the literature for CEM. Materials and methods. A BPE level classification tool based on deep learning was trained and optimized on 7012 CEM image pairs (low-energy and recombined images) and evaluated on a dataset of 1013 image pairs. The impact of image resolution, backbone architecture, and loss function was analyzed, as well as the influence of lesion presence and type on BPE assessment. Model performance was evaluated using several metrics, including 4-class balanced accuracy and mean absolute error. Results for the optimized model on a binary classification task (minimal/mild versus moderate/marked) were also investigated. Results. The optimized model achieved a 4-class balanced accuracy of 71.5% (95% CI: 71.2-71.9), with 98.8% of classification errors occurring between adjacent classes. For binary classification, the accuracy reached 93.0%. A slight decrease in model accuracy was observed in the presence of lesions, but it was not statistically significant, suggesting that the model is robust to the presence of lesions for this classification task. Visual assessment also confirmed that the model is more affected by non-mass enhancements than by mass-like enhancements. Conclusion. The proposed BPE classification tool for CEM achieves results similar to those published in the literature for CE-MRI.
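For reference, the two headline metrics are straightforward to reproduce. The sketch below computes 4-class balanced accuracy and the share of errors falling in adjacent BPE classes, assuming labels coded 0-3 (minimal to marked); the arrays are illustrative stand-ins, not the paper's data.

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score

# Illustrative labels: 0=minimal, 1=mild, 2=moderate, 3=marked
y_true = np.array([0, 1, 1, 2, 3, 2, 0, 1, 3, 2])
y_pred = np.array([0, 1, 2, 2, 3, 1, 0, 0, 3, 2])

# Balanced accuracy = mean of per-class recalls (robust to class imbalance)
bal_acc = balanced_accuracy_score(y_true, y_pred)

# Share of misclassifications that land in an adjacent class
errors = y_true != y_pred
adjacent = np.abs(y_true - y_pred) == 1
adjacent_error_rate = adjacent[errors].mean() if errors.any() else 0.0

print(f"balanced accuracy: {bal_acc:.3f}")
print(f"errors in adjacent classes: {adjacent_error_rate:.1%}")

# Collapsing to the binary task: minimal/mild (0,1) vs moderate/marked (2,3)
binary_acc = ((y_true >= 2) == (y_pred >= 2)).mean()
print(f"binary accuracy: {binary_acc:.3f}")
```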
3
Zimmermann C, Michelmann A, Daniel Y, Enderle MD, Salkic N, Linzenbold W. Application of Deep Learning for Real-Time Ablation Zone Measurement in Ultrasound Imaging. Cancers (Basel) 2024;16:1700. PMID: 38730652; PMCID: PMC11083655; DOI: 10.3390/cancers16091700.
Abstract
BACKGROUND The accurate delineation of ablation zones (AZs) is crucial for assessing the efficacy of radiofrequency ablation (RFA) therapy. Manual measurement, the current standard, is subject to variability and potential inaccuracies. AIM This study aims to assess the effectiveness of artificial intelligence (AI) in automating AZ measurements in ultrasound images and to compare its accuracy with manual measurements. METHODS An in vitro study was conducted using chicken breast and liver samples subjected to bipolar RFA. Ultrasound images were captured every 15 s, and the AI model Mask2Former was trained for AZ segmentation. Measurements were compared across all methods, focusing on short-axis (SA) metrics. RESULTS We performed 308 RFA procedures, generating 7275 ultrasound images across liver and chicken breast tissues. Comparison of manual and AI measurements of ablation zone diameters revealed no significant differences, with correlation coefficients exceeding 0.96 in both tissues (p < 0.001). Bland-Altman plots and a Deming regression analysis demonstrated very close alignment between AI predictions and manual measurements, with average differences of -0.259 mm and -0.243 mm for bovine liver and chicken breast tissue, respectively. CONCLUSION The study validates the Mask2Former model as a promising tool for automating AZ measurement in RFA research, offering a significant step towards reducing manual measurement variability.
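The Bland-Altman statistics quoted (mean difference, i.e. bias, plus limits of agreement) reduce to a few lines of numpy; the paired diameters below are made up for illustration.

```python
import numpy as np

# Paired short-axis diameters in mm (illustrative values, not study data)
manual = np.array([12.1, 14.8, 9.7, 11.3, 15.2, 10.9])
ai     = np.array([11.8, 14.6, 9.5, 11.1, 15.0, 10.6])

diff = ai - manual
mean_diff = diff.mean()          # bias (e.g., -0.259 mm reported in the study)
loa = 1.96 * diff.std(ddof=1)    # 95% limits-of-agreement half-width

print(f"bias: {mean_diff:.3f} mm")
print(f"95% limits of agreement: [{mean_diff - loa:.3f}, {mean_diff + loa:.3f}] mm")
```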
Affiliation(s)
- Nermin Salkic: Erbe Elektromedizin GmbH, 72072 Tübingen, Germany; Faculty of Medicine, University of Tuzla, 75000 Tuzla, Bosnia and Herzegovina
4
Koseoglu M, Ramachandran RA, Ozdemir H, Ariani MD, Bayindir F, Sukotjo C. Automated facial landmark measurement using machine learning: A feasibility study. J Prosthet Dent 2024:S0022-3913(24)00282-8. PMID: 38670909; DOI: 10.1016/j.prosdent.2024.04.007.
Abstract
STATEMENT OF PROBLEM Information regarding facial landmark measurement using machine learning (ML) techniques in prosthodontics is lacking. PURPOSE The objective of this study was to evaluate and compare the reliability, validity, and accuracy of facial anthropological measurements using both manual and ML landmark detection techniques. MATERIAL AND METHODS Two-dimensional (2D) frontal full-face photographs of 50 men and 50 women were made. The interpupillary width (IPW), interlateral canthus width (LCW), intermedial canthus width (MCW), interalar width (IAW), and intercommissural width (ICW) were measured on 2D digital images using manual and ML methods. The automated measurements were recorded using a programming language (Python), and a convolutional neural network (CNN) model was trained to detect human facial landmarks. The data obtained from the manual and ML methods were analyzed using intraclass correlation coefficients (ICCs), the paired sample t test, Bland-Altman plots, and the Pearson correlation analysis (α=.05). RESULTS Intrarater and interrater reliability values were greater than 0.90, indicating excellent reliability. The mean difference between the manual and ML measurements of IPW, MCW, IAW, and ICW was 0.02 mm, while it was 0.01 mm for LCW. No statistically significant differences were found between the measurements obtained by the manual and ML methods (P>.05). Highly significant positive correlations (P<.001) were obtained between the results of the manual and ML methods: r=0.996 (IPW), r=0.977 (LCW), r=0.944 (MCW), r=0.965 (IAW), and r=0.997 (ICW). CONCLUSIONS In the field of prosthodontics, the use of ML methods provides a reliable alternative to manual digital techniques for carrying out facial anthropometric measurements.
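The statistical comparison described (paired t test plus Pearson correlation between the manual and ML measurements) maps directly onto scipy; the values below are placeholders.

```python
import numpy as np
from scipy.stats import ttest_rel, pearsonr

# Interpupillary width (IPW) in mm by both methods (illustrative values)
manual_ipw = np.array([61.2, 63.5, 59.8, 64.1, 62.0, 60.7])
ml_ipw     = np.array([61.3, 63.4, 59.9, 64.0, 62.1, 60.8])

t_stat, p_paired = ttest_rel(manual_ipw, ml_ipw)   # H0: equal means
r, p_corr = pearsonr(manual_ipw, ml_ipw)           # linear agreement

print(f"mean difference: {np.mean(manual_ipw - ml_ipw):.3f} mm")
print(f"paired t test: t = {t_stat:.2f}, p = {p_paired:.3f}")
print(f"Pearson r = {r:.3f} (p = {p_corr:.2e})")
```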
Affiliation(s)
- Merve Koseoglu: Associate Professor, Department of Prosthodontics, Faculty of Dentistry, University of Sakarya, Sakarya, Turkey; PhD student, Department of Prosthodontics, Faculty of Dentistry, University of Ataturk, Erzurum, Turkey
- Remya Ampadi Ramachandran: Postdoctoral Fellow, 1DATA Consortium, Computational Comparative Medicine, Department of Mathematics, K-State Olathe, Olathe, Kansas
- Hatice Ozdemir: Associate Professor, Department of Prosthodontics, Faculty of Dentistry, University of Ataturk, Erzurum, Turkey
- Maretaningtias Dwi Ariani: Lecturer, Department of Prosthodontics, Faculty of Dental Medicine, Universitas Airlangga, Surabaya, Indonesia
- Funda Bayindir: Professor, Department of Prosthodontics, Faculty of Dentistry, University of Ataturk, Erzurum, Turkey
- Cortino Sukotjo: Professor, Department of Restorative Dentistry, College of Dentistry, University of Illinois, Chicago, IL; Adjunct Professor, Department of Prosthodontics, Faculty of Dental Medicine, Universitas Airlangga, Surabaya, Indonesia
5
Zhong Y, Piao Y, Zhang G. Multi-view fusion-based local-global dynamic pyramid convolutional cross-transformer network for density classification in mammography. Phys Med Biol 2023;68:225012. PMID: 37827166; DOI: 10.1088/1361-6560/ad02d7.
Abstract
Object. Breast density is an important indicator of breast cancer risk. However, existing methods for breast density classification do not fully utilise the multi-view information produced by mammography and thus have limited classification accuracy. Method. In this paper, we propose a multi-view fusion network, denoted the local-global dynamic pyramidal-convolution transformer network (LG-DPTNet), for breast density classification in mammography. First, for single-view feature extraction, we develop a dynamic pyramid convolutional network that enables the network to adaptively learn global and local features. Second, we address a shortcoming of traditional multi-view fusion methods with a cross-transformer that integrates fine-grained information and global contextual information from different views, thereby providing accurate predictions. Finally, we use an asymmetric focal loss function instead of the traditional cross-entropy loss during network training to address class imbalance in public datasets, further improving the performance of the model. Results. We evaluated the effectiveness of our method on two publicly available mammography datasets, CBIS-DDSM and INbreast, and achieved areas under the curve (AUC) of 96.73% and 91.12%, respectively. Conclusion. Our experiments demonstrated that the devised fusion model utilises the information contained in multiple views more effectively than existing models and exhibits classification performance superior to that of baseline and state-of-the-art methods.
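The asymmetric focal loss mentioned here replaces cross-entropy to down-weight easy examples, applying different focusing strengths to the positive and negative terms. The paper's exact formulation is not given in the abstract; the PyTorch sketch below is one common asymmetric variant, with gamma_pos and gamma_neg as assumed hyperparameters.

```python
import torch
import torch.nn.functional as F

def asymmetric_focal_loss(logits, targets, gamma_pos=0.0, gamma_neg=2.0):
    """One-vs-rest asymmetric focal loss for one-hot targets.

    logits: (N, C) raw scores; targets: (N, C) in {0, 1}.
    Negatives are down-weighted more aggressively (gamma_neg > gamma_pos),
    counteracting the abundance of easy negatives in imbalanced data.
    """
    p = torch.sigmoid(logits)
    # Focal modulation: (1-p)^gamma_pos on positives, p^gamma_neg on negatives
    loss_pos = targets * (1 - p).pow(gamma_pos) * torch.log(p.clamp(min=1e-8))
    loss_neg = (1 - targets) * p.pow(gamma_neg) * torch.log((1 - p).clamp(min=1e-8))
    return -(loss_pos + loss_neg).mean()

# Toy usage: 4 density classes, batch of 2
logits = torch.randn(2, 4)
targets = F.one_hot(torch.tensor([1, 3]), num_classes=4).float()
print(asymmetric_focal_loss(logits, targets))
```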
Affiliation(s)
- Yutong Zhong and Yan Piao: Electronic Information Engineering School, Changchun University of Science and Technology, Changchun, People's Republic of China
- Guohui Zhang: Pneumoconiosis Diagnosis and Treatment Center, Occupational Preventive and Treatment Hospital in Jilin Province, Changchun, People's Republic of China
6
Seth I, Bulloch G, Joseph K, Hunter-Smith DJ, Rozen WM. Use of Artificial Intelligence in the Advancement of Breast Surgery and Implications for Breast Reconstruction: A Narrative Review. J Clin Med 2023;12:5143. PMID: 37568545; PMCID: PMC10419723; DOI: 10.3390/jcm12155143.
Abstract
BACKGROUND Breast reconstruction is a pivotal part of the recuperation process following a mastectomy and aims to restore both the physical aesthetic and the emotional well-being of breast cancer survivors. In recent years, artificial intelligence (AI) has emerged as a revolutionary technology across numerous medical disciplines. This narrative review of the current literature and evidence explores the role of AI in the domain of breast reconstruction, outlining its potential to refine surgical procedures, enhance outcomes, and streamline decision making. METHODS A systematic search of the Medline (via PubMed), Cochrane Library, Web of Science, Google Scholar, Clinical Trials, and Embase databases from January 1901 to June 2023 was conducted. RESULTS By evaluating a selection of recent studies and engaging with inherent challenges and prospective trajectories, this review spotlights the promising role AI plays in advancing the techniques of breast reconstruction. However, issues concerning data quality, privacy, and ethics pose hurdles to the seamless integration of AI in the medical field. CONCLUSION The future research agenda comprises dataset standardization, AI algorithm refinement, the implementation of prospective clinical trials, and the fostering of cross-disciplinary partnerships. The fusion of AI with other emergent technologies, such as augmented reality and 3D printing, could further propel progress in breast surgery.
Affiliation(s)
- Ishith Seth and Warren Matthew Rozen: Department of Plastic Surgery, Peninsula Health, Melbourne, VIC 3199, Australia; Faculty of Medicine, The University of Melbourne, Melbourne, VIC 3053, Australia
- Gabriella Bulloch: Faculty of Medicine, The University of Melbourne, Melbourne, VIC 3053, Australia
- Konrad Joseph: Faculty of Medicine, The University of Wollongong, Wollongong, NSW 2500, Australia
7
Acciavatti RJ, Lee SH, Reig B, Moy L, Conant EF, Kontos D, Moon WK. Beyond Breast Density: Risk Measures for Breast Cancer in Multiple Imaging Modalities. Radiology 2023;306:e222575. PMID: 36749212; PMCID: PMC9968778; DOI: 10.1148/radiol.222575.
Abstract
Breast density is an independent risk factor for breast cancer. In digital mammography and digital breast tomosynthesis, breast density is assessed visually using the four-category scale developed by the American College of Radiology Breast Imaging Reporting and Data System (5th edition as of November 2022). Epidemiologically based risk models, such as the Tyrer-Cuzick model (version 8), demonstrate superior modeling performance when mammographic density is incorporated. Beyond just density, a separate mammographic measure of breast cancer risk is parenchymal textural complexity. With advancements in radiomics and deep learning, mammographic textural patterns can be assessed quantitatively and incorporated into risk models. Other supplemental screening modalities, such as breast US and MRI, offer independent risk measures complementary to those derived from mammography. Breast US allows the two components of fibroglandular tissue (stromal and glandular) to be visualized separately in a manner that is not possible with mammography. A higher glandular component at screening breast US is associated with higher risk. With MRI, a higher background parenchymal enhancement of the fibroglandular tissue has also emerged as an imaging marker for risk assessment. Imaging markers observed at mammography, US, and MRI are powerful tools in refining breast cancer risk prediction, beyond mammographic density alone.
Affiliation(s)
- From the Department of Radiology, University of Pennsylvania, 3400 Spruce St, Philadelphia, PA 19104 (R.J.A., E.F.C., D.K.); the Department of Radiology, Seoul National University Hospital, Seoul, South Korea (S.H.L., W.K.M.); and the Department of Radiology, NYU Langone Health, New York, NY (B.R., L.M.)
8
Razali NF, Isa IS, Sulaiman SN, Abdul Karim NK, Osman MK, Che Soh ZH. Enhancement Technique Based on the Breast Density Level for Mammogram for Computer-Aided Diagnosis. Bioengineering (Basel) 2023;10:153. PMID: 36829647; PMCID: PMC9952042; DOI: 10.3390/bioengineering10020153.
Abstract
Mass detection in mammograms is limited when masses overlap with denser fibroglandular breast regions. In addition, varying breast density levels can reduce a learning system's ability to extract sufficient feature descriptors and may lower accuracy. This study therefore proposes a textural-based image enhancement technique, Spatial-based Breast Density Enhancement for Mass Detection (SbBDEM), to boost the textural features of the overlapped mass region according to the breast density level. The approach determines the optimal exposure threshold of the images' lower contrast limit and optimizes the parameters by selecting the best intensity factor guided by the best Blind/Reference-less Image Spatial Quality Evaluator (BRISQUE) scores, separately for the dense and non-dense breast classes, prior to training. Meanwhile, a modified You Only Look Once v3 (YOLOv3) architecture is employed for mass detection, specifically assigning an extra number of higher-valued anchor boxes to the shallower detection head using the enhanced image. The experimental results show that using SbBDEM prior to training promotes superior performance, with an increase in mean Average Precision (mAP) of 17.24% over the non-enhanced trained images for mass detection, 94.41% accuracy for mass segmentation, and 96% accuracy for benign and malignant mass classification. Enhancing mammogram images based on breast density is thus shown to increase the overall system's performance and can aid in an improved clinical diagnosis process.
Affiliation(s)
- Noor Fadzilah Razali, Iza Sazanita Isa, Muhammad Khusairi Osman, and Zainal Hisham Che Soh: Centre for Electrical Engineering Studies, Universiti Teknologi MARA, Cawangan Pulau Pinang, Permatang Pauh Campus, Bukit Mertajam 13500, Pulau Pinang, Malaysia
- Siti Noraini Sulaiman: Centre for Electrical Engineering Studies, Universiti Teknologi MARA, Cawangan Pulau Pinang, Permatang Pauh Campus, Bukit Mertajam 13500, Pulau Pinang, Malaysia; Integrative Pharmacogenomics Institute (iPROMISE), Universiti Teknologi MARA Cawangan Selangor, Puncak Alam Campus, Puncak Alam 42300, Selangor, Malaysia
- Noor Khairiah Abdul Karim: Department of Biomedical Imaging, Advanced Medical and Dental Institute, Universiti Sains Malaysia Bertam, Kepala Batas 13200, Pulau Pinang, Malaysia; Breast Cancer Translational Research Programme (BCTRP), Advanced Medical and Dental Institute, Universiti Sains Malaysia Bertam, Kepala Batas 13200, Pulau Pinang, Malaysia
9
Yamamuro M, Asai Y, Hashimoto N, Yasuda N, Kimura H, Yamada T, Nemoto M, Kimura Y, Handa H, Yoshida H, Abe K, Tada M, Habe H, Nagaoka T, Nin S, Ishii K, Kondo Y. Utility of U-Net for the objective segmentation of the fibroglandular tissue region on clinical digital mammograms. Biomed Phys Eng Express 2022;8. PMID: 35728581; DOI: 10.1088/2057-1976/ac7ada.
Abstract
This study investigates the equivalence or compatibility between U-Net and visual segmentations of fibroglandular tissue regions by mammography experts for calculating the breast density and mean glandular dose (MGD). A total of 703 mediolateral oblique-view mammograms were used for segmentation. Two region types were set as the ground truth (determined visually): (1) one type included only the region where fibroglandular tissue was identifiable (called the 'dense region'); (2) the other type included the region where the fibroglandular tissue may have existed in the past, provided that apparent adipose-only parts, such as the retromammary space, are excluded (the 'diffuse region'). U-Net was trained to segment the fibroglandular tissue region with an adaptive moment estimation optimiser, five-fold cross-validated with 400 training and 100 validation mammograms, and tested with 203 mammograms. The breast density and MGD were calculated using the van Engeland and Dance formulas, respectively, and compared between U-Net and the ground truth with the Dice similarity coefficient and Bland-Altman analysis. Dice similarity coefficients between U-Net and the ground truth were 0.895 and 0.939 for the dense and diffuse regions, respectively. In the Bland-Altman analysis, no proportional or fixed errors were discovered in either the dense or diffuse region for breast density, whereas a slight proportional error was discovered in both regions for the MGD (the slopes of the regression lines were -0.0299 and -0.0443 for the dense and diffuse regions, respectively). Consequently, the U-Net and ground truth were deemed equivalent (interchangeable) for breast density and compatible (interchangeable following four simple arithmetic operations) for MGD. U-Net-based segmentation of the fibroglandular tissue region was satisfactory for both regions, providing reliable segmentation for breast density and MGD calculations. U-Net will be useful in developing a reliable individualised screening-mammography programme, instead of relying on the visual judgement of mammography experts.
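The Dice similarity coefficient used to compare U-Net output with the visually determined ground truth is simple to compute on binary masks; a minimal numpy sketch:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2 * |A and B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 4x4 masks standing in for fibroglandular-region segmentations
pred  = np.array([[0,1,1,0],[0,1,1,0],[0,0,1,0],[0,0,0,0]])
truth = np.array([[0,1,1,0],[0,1,1,1],[0,0,1,0],[0,0,0,0]])
print(f"Dice: {dice_coefficient(pred, truth):.3f}")
```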
Affiliation(s)
- Mika Yamamuro: Radiology Center, Kindai University Hospital, 377-2, Ono-higashi, Osaka-sayama, Osaka 589-8511, Japan; Graduate School of Health Sciences, Niigata University, 2-746, Asahimachidori, Chuo-ku, Niigata 951-8518, Japan
- Yoshiyuki Asai, Naomi Hashimoto, Nao Yasuda, and Hiroto Kimura: Radiology Center, Kindai University Hospital, 377-2, Ono-higashi, Osaka-sayama, Osaka 589-8511, Japan
- Takahiro Yamada: Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University, 377-2, Ono-higashi, Osaka-sayama, Osaka 589-8511, Japan
- Mitsutaka Nemoto, Yuichi Kimura, Hisashi Yoshida, and Takashi Nagaoka: Department of Computational Systems Biology, Kindai University Faculty of Biology-Oriented Science and Technology, 930, Nishimitani, Kinokawa, Wakayama 649-6433, Japan
- Hisashi Handa, Koji Abe, Masahiro Tada, and Hitoshi Habe: Department of Informatics, Kindai University Faculty of Science and Engineering, 3-4-1, Kowakae, Higashi-osaka, Osaka 577-8502, Japan
- Seiun Nin and Kazunari Ishii: Department of Radiology, Kindai University Faculty of Medicine, 377-2, Ono-higashi, Osaka-sayama, Osaka 589-8511, Japan
- Yohan Kondo: Graduate School of Health Sciences, Niigata University, 2-746, Asahimachidori, Chuo-ku, Niigata 951-8518, Japan
10
Li H, Mukundan R, Boyd S. Spatial Distribution Analysis of Novel Texture Feature Descriptors for Accurate Breast Density Classification. Sensors (Basel) 2022;22:2672. PMID: 35408286; PMCID: PMC9002800; DOI: 10.3390/s22072672.
Abstract
Breast density has been recognised as an important biomarker that indicates the risk of developing breast cancer. Accurate classification of breast density plays a crucial role in developing a computer-aided detection (CADe) system for mammogram interpretation. This paper proposes a novel texture descriptor, namely, rotation invariant uniform local quinary patterns (RIU4-LQP), to describe texture patterns in mammograms and to improve the robustness of image features. In conventional processing schemes, image features are obtained by computing histograms from texture patterns. However, such processes ignore very important spatial information related to the texture features. This study designs a new feature vector, namely, K-spectrum, by using Baddeley's K-inhom function to characterise the spatial distribution information of feature point sets. Texture features extracted by RIU4-LQP and K-spectrum are utilised to classify mammograms into BI-RADS density categories. Three feature selection methods are employed to optimise the feature set. In our experiment, two mammogram datasets, INbreast and MIAS, are used to test the proposed methods, and comparative analyses and statistical tests between different schemes are conducted. Experimental results show that our proposed method outperforms other approaches described in the literature, with the best classification accuracy of 92.76% (INbreast) and 86.96% (MIAS).
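RIU4-LQP extends rotation-invariant uniform local binary patterns from binary to quinary thresholding. Quinary patterns are not implemented in common libraries, but the rotation-invariant uniform LBP baseline that the descriptor builds on can be extracted with scikit-image, as sketched below.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(image: np.ndarray, radius: int = 2, n_points: int = 16):
    """Uniform LBP histogram for a grayscale ROI.

    'uniform' yields n_points + 2 distinct pattern codes; the normalized
    histogram over these codes serves as the texture feature vector.
    """
    codes = local_binary_pattern(image, n_points, radius, method="uniform")
    n_bins = n_points + 2
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Toy usage on a random 64x64 patch standing in for a mammogram ROI
patch = (np.random.rand(64, 64) * 255).astype(np.uint8)
features = lbp_histogram(patch)
print(features.shape)  # (18,) for n_points=16
```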
Affiliation(s)
- Haipeng Li (corresponding author) and Ramakrishnan Mukundan: Department of Computer Science and Software Engineering, University of Canterbury, Christchurch 8140, New Zealand
- Shelley Boyd: Canterbury Breastcare, St. George’s Medical Centre, Christchurch 8014, New Zealand
11
Li S, Xie Y, Wang G, Zhang L, Zhou W. Adaptive multimodal fusion with attention guided deep supervision net for grading hepatocellular carcinoma. IEEE J Biomed Health Inform 2022;26:4123-4131. PMID: 35344499; DOI: 10.1109/jbhi.2022.3161466.
Abstract
Multimodal medical imaging plays a crucial role in the diagnosis and characterization of lesions. However, challenges remain in lesion characterization based on multimodal feature fusion. First, current fusion methods have not thoroughly studied the relative importance of the characterization modalities. In addition, multimodal feature fusion cannot indicate the contribution of each modality's information to critical decision-making. In this study, we propose an adaptive multimodal fusion method with an attention-guided deep supervision net for grading hepatocellular carcinoma (HCC). Specifically, our proposed framework comprises two modules: attention-based adaptive feature fusion and an attention-guided deep supervision net. The former uses the attention mechanism at the feature fusion level to generate weights for adaptive feature concatenation and to balance the importance of features among the various modalities. The latter uses the weights generated by the attention mechanism as loss coefficients to balance the contribution of the corresponding modalities to the total loss function. The experimental results on grading clinical HCC with contrast-enhanced MR demonstrated the effectiveness of the proposed method. A significant performance improvement was achieved compared with existing fusion methods. In addition, the attention weights in multimodal fusion have demonstrated great significance in clinical interpretation.
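The fusion module described (attention weights used both for adaptive feature concatenation and as per-modality loss coefficients) can be sketched compactly. The PyTorch snippet below is a schematic reconstruction under assumed dimensions and a shared scoring layer, not the authors' code.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Weights per-modality features, then concatenates them (schematic)."""
    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # shared scoring layer for brevity

    def forward(self, feats):  # feats: list of (N, dim) tensors, one per modality
        scores = torch.cat([self.score(f) for f in feats], dim=1)  # (N, M)
        w = torch.softmax(scores, dim=1)                           # (N, M)
        fused = torch.cat([w[:, i:i+1] * f for i, f in enumerate(feats)], dim=1)
        return fused, w  # w can also reweight per-modality auxiliary losses

fusion = AttentionFusion()
f1, f2 = torch.randn(4, 256), torch.randn(4, 256)
fused, w = fusion([f1, f2])
# Deep supervision per the abstract: total = main_loss + sum_i w_i * aux_loss_i
print(fused.shape, w.mean(dim=0))
```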
12
Rousseau M, Retrouvey JM. Machine learning in orthodontics: Automated facial analysis of vertical dimension for increased precision and efficiency. Am J Orthod Dentofacial Orthop 2022;161:445-450. DOI: 10.1016/j.ajodo.2021.03.017.
13
Gastounioti A, Desai S, Ahluwalia VS, Conant EF, Kontos D. Artificial intelligence in mammographic phenotyping of breast cancer risk: a narrative review. Breast Cancer Res 2022;24:14. PMID: 35184757; PMCID: PMC8859891; DOI: 10.1186/s13058-022-01509-z.
Abstract
BACKGROUND Improved breast cancer risk assessment models are needed to enable personalized screening strategies that achieve a better harm-to-benefit ratio, based on earlier detection and better breast cancer outcomes, than existing screening guidelines. Computational mammographic phenotypes have demonstrated a promising role in breast cancer risk prediction. With the recent exponential growth of computational efficiency, the artificial intelligence (AI) revolution, driven by the introduction of deep learning, has expanded the utility of imaging in predictive models. Consequently, AI-based imaging-derived data have led to some of the most promising tools for precision breast cancer screening. MAIN BODY This review aims to synthesize the current state-of-the-art applications of AI in mammographic phenotyping of breast cancer risk. We discuss the fundamentals of AI and explore the computing advancements that have made AI-based image analysis essential in refining breast cancer risk assessment. Specifically, we discuss the use of data derived from digital mammography as well as digital breast tomosynthesis. Different aspects of breast cancer risk assessment are targeted, including (a) robust and reproducible evaluations of breast density, a well-established breast cancer risk factor; (b) assessment of a woman's inherent breast cancer risk; and (c) identification of women who are likely to be diagnosed with breast cancer after a negative or routine screen due to masking or the rapid and aggressive growth of a tumor. Lastly, we discuss AI challenges unique to the computational analysis of mammographic imaging as well as future directions for this promising research field. CONCLUSIONS We provide a useful reference for AI researchers investigating image-based breast cancer risk assessment, while indicating key priorities and challenges that, if properly addressed, could accelerate the implementation of AI-assisted risk stratification to further refine and individualize breast cancer screening strategies.
Affiliation(s)
- Aimilia Gastounioti: Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA 19104, USA; Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Shyam Desai and Despina Kontos: Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA 19104, USA
- Vinayak S Ahluwalia: Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA 19104, USA; Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Emily F Conant: Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA 19104, USA
14
Luca AR, Ursuleanu TF, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Grigorovici A. Impact of quality, type and volume of data used by deep learning models in the analysis of medical images. Informatics in Medicine Unlocked 2022. DOI: 10.1016/j.imu.2022.100911.
15
Abstract
This article gives a brief overview of the development of artificial intelligence in clinical breast imaging. For multiple decades, artificial intelligence (AI) methods have been developed and translated for breast imaging tasks such as detection, diagnosis, and assessing response to therapy. As imaging modalities arise to support breast cancer screening programs and diagnostic examinations, including full-field digital mammography, breast tomosynthesis, ultrasound, and MRI, AI techniques parallel the efforts with more complex algorithms, faster computers, and larger data sets. AI methods include human-engineered radiomics algorithms and deep learning methods. Examples of these AI-supported clinical tasks are given along with commentary on the future.
Affiliation(s)
- Qiyuan Hu and Maryellen L Giger: Committee on Medical Physics, Department of Radiology, The University of Chicago, 5841 S Maryland Avenue, MC2026, Chicago, IL 60637, USA
16
Inoue K, Kawasaki A, Koshimizu K, Ariizumi C, Unno K, Nagashima M, Mizuno K, Misumi M, Tsutsumi C, Sasaki T, Doi T. [Automatic Quantification of Breast Density from Mammography Using Deep Learning]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2021;77:1165-1172. PMID: 34670923; DOI: 10.6009/jjrt.2021_jsrt_77.10.1165.
Abstract
BACKGROUND In breast screening with mammography in Japan, examinees are generally not informed of whether their breasts are dense. One reason is the lack of an objective method for estimating dense breasts. Our aim was to build a system using a deep learning algorithm to calculate and quantify breast density automatically and objectively. MATERIAL AND METHOD Mammography images taken in our institute and diagnosed as category 1 were collected. Each processed image was transformed into eight-bit grayscale at a size of 2294 by 1914 pixels. The "base pixel value" was calculated from the fatty area within the breast for each image. The "relative density" was calculated by dividing each pixel value by the base pixel value. A semantic segmentation algorithm was used to automatically segment the breast tissue area within the mammography image, which was resized to 144 by 120 pixels. By aggregating the relative density within the breast tissue area, the "breast density" was obtained automatically. RESULT Breast density was successfully calculated automatically from every mammography image but one. Defining a dense breast as a breast density greater than or equal to 30%, the computer evaluation of dense breasts was consistent with the human evaluation in 76.6% of cases. CONCLUSION Deep learning provides an excellent estimation of quantified breast density. This system could contribute to improving the efficiency of the mammography screening system.
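The density arithmetic this abstract describes (a base pixel value from the fatty area, per-pixel relative density, aggregation over the segmented breast region) can be paraphrased in a few lines of numpy. This is a sketch of the described steps only: the masks are assumed to come from the segmentation stage, and the 1.10 "dense pixel" cutoff is an illustrative choice, not a value from the paper.

```python
import numpy as np

def breast_density(image: np.ndarray, breast_mask: np.ndarray,
                   fat_mask: np.ndarray, threshold: float = 1.10) -> float:
    """Percent of breast-region pixels whose relative density is high.

    image: 8-bit grayscale mammogram; breast_mask / fat_mask: boolean masks
    (the paper derives these via semantic segmentation).
    """
    base = image[fat_mask].mean()            # 'base pixel value' from fat
    relative = image.astype(float) / base    # per-pixel relative density
    dense = (relative >= threshold) & breast_mask
    return 100.0 * dense.sum() / breast_mask.sum()

# A breast would be flagged 'dense' when breast_density(...) >= 30.0,
# matching the 30% operating point reported in the abstract.
```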
Affiliation(s)
- Keiko Unno, Kayo Mizuno, and Takako Doi: Shonan Memorial Hospital, Breast Cancer Center
- Takeshi Sasaki: Department of Next-Generation Pathology Information Networking, Faculty of Medicine, The University of Tokyo
17
Li H, Mukundan R, Boyd S. Novel Texture Feature Descriptors Based on Multi-Fractal Analysis and LBP for Classifying Breast Density in Mammograms. J Imaging 2021;7:205. PMID: 34677291; PMCID: PMC8540831; DOI: 10.3390/jimaging7100205.
Abstract
This paper investigates the usefulness of multi-fractal analysis and local binary patterns (LBP) as texture descriptors for classifying mammogram images into different breast density categories. Multi-fractal analysis is also used in the pre-processing step to segment the region of interest (ROI). We use four multi-fractal measures and the LBP method to extract texture features, and we compare their classification performance in experiments. In addition, a feature descriptor combining multi-fractal features and multi-resolution LBP (MLBP) features is proposed and evaluated in this study to improve classification accuracy. An autoencoder network and principal component analysis (PCA) are used to reduce feature redundancy in the classification model. A full-field digital mammogram (FFDM) dataset, INbreast, which contains 409 mammogram images, is used in our experiment. BI-RADS density labels given by radiologists are used as the ground truth to evaluate the classification results. Experimental results show that the proposed feature descriptor based on multi-fractal features and LBP results in higher classification accuracy than individual texture feature sets.
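Feature-redundancy reduction with PCA, mentioned alongside the autoencoder, looks like this in scikit-learn; the feature dimensions below are placeholders, not the paper's.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder features: 409 images, multi-fractal (32-dim) + MLBP (54-dim)
rng = np.random.default_rng(0)
multifractal = rng.normal(size=(409, 32))
mlbp = rng.normal(size=(409, 54))
combined = np.hstack([multifractal, mlbp])   # concatenated descriptor

# Keep enough components to explain 95% of the variance
pca = PCA(n_components=0.95)
reduced = pca.fit_transform(combined)
print(combined.shape, "->", reduced.shape)
```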
Affiliation(s)
- Haipeng Li (corresponding author) and Ramakrishnan Mukundan: Department of Computer Science and Software Engineering, University of Canterbury, Christchurch 8140, New Zealand
- Shelley Boyd: Canterbury Breastcare, St. George’s Medical Centre, Christchurch 8014, New Zealand
18
Zhao W, Wang R, Qi Y, Lou M, Wang Y, Yang Y, Deng X, Ma Y. BASCNet: Bilateral adaptive spatial and channel attention network for breast density classification in the mammogram. Biomed Signal Process Control 2021. DOI: 10.1016/j.bspc.2021.103073.
19
Ursuleanu TF, Luca AR, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Preda C, Grigorovici A. Deep Learning Application for Analyzing of Constituents and Their Correlations in the Interpretations of Medical Images. Diagnostics (Basel) 2021;11:1373. PMID: 34441307; PMCID: PMC8393354; DOI: 10.3390/diagnostics11081373.
Abstract
The increasing volume of medical data that must be interpreted and filtered for diagnostic and therapeutic purposes, and the corresponding demands on physicians' time and attention, have encouraged the development of supporting deep learning (DL) models. DL has experienced exponential development in recent years, with a major impact on the interpretation of medical images. This has influenced the development, diversification and quality of scientific data, the development of knowledge construction methods and the improvement of DL models used in medical applications. Most research papers focus on describing, highlighting or classifying a single constituent element of the DL models used in the interpretation of medical images and do not provide a unified picture of the importance and impact of each constituent on model performance. The novelty of our paper consists primarily in the unitary approach to the constituent elements of DL models, namely the data, the tools used by DL architectures and the specifically constructed DL architecture combinations, highlighting their "key" features for completing tasks in current applications in the interpretation of medical images. The use of "key" characteristics specific to each constituent of DL models and the correct determination of their correlations may be the subject of future research, with the aim of increasing the performance of DL models in the interpretation of medical images.
Affiliation(s)
- Tudor Florin Ursuleanu: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania; Department of Surgery I, Regional Institute of Oncology, 700483 Iasi, Romania
- Andreea Roxana Luca: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Obstetrics and Gynecology, Integrated Ambulatory of Hospital “Sf. Spiridon”, 700106 Iasi, Romania
- Liliana Gheorghe: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Radiology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Roxana Grigorovici, Stefan Iancu, and Maria Hlusneac: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Cristina Preda: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Endocrinology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Alexandru Grigorovici: Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania; Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
20
Li C, Xu J, Liu Q, Zhou Y, Mou L, Pu Z, Xia Y, Zheng H, Wang S. Multi-View Mammographic Density Classification by Dilated and Attention-Guided Residual Learning. IEEE/ACM Trans Comput Biol Bioinform 2021;18:1003-1013. PMID: 32012021; DOI: 10.1109/tcbb.2020.2970713.
Abstract
Breast density is widely used to reflect the likelihood of early breast cancer development. Existing methods of mammographic density classification either require manual operations or achieve only moderate classification accuracy due to limited model capacity. In this study, we present a radiomics approach based on dilated and attention-guided residual learning for the task of mammographic density classification. The proposed method was instantiated on two datasets, one clinical and one publicly available, and classification accuracies of 88.7 and 70.0 percent were obtained, respectively. Although the classification accuracy on the public dataset was lower than on the clinical dataset, very likely because of the dataset size, our proposed model still achieved better performance than naive residual networks and several recently published deep learning-based approaches. Furthermore, we designed a multi-stream network architecture specifically targeted at analyzing multi-view mammograms. Using the clinical dataset, we validated that multi-view inputs benefit the breast density classification task, with an increase of at least 2.0 percent in accuracy, and that different views lead to different model classification capacities. Our method has great potential to be further developed and applied in computer-aided diagnosis systems. Our code is available at https://github.com/lich0031/Mammographic_Density_Classification.
21
Li Y, He Z, Lu Y, Ma X, Guo Y, Xie Z, Qin G, Xu W, Xu Z, Chen W, Chen H. Deep learning of mammary gland distribution for architectural distortion detection in digital breast tomosynthesis. Phys Med Biol 2021;66:035028. PMID: 32485700; DOI: 10.1088/1361-6560/ab98d0.
Abstract
Computer-aided detection (CADe) of breast lesions can provide an important reference for radiologists in breast cancer screening. Architectural distortion (AD) is a type of breast lesion that is difficult to detect. Most CADe methods focus on detecting the radial pattern, which is a main characteristic of typical ADs; however, atypical ADs do not exhibit such a pattern. To improve the performance of CADe for both typical and atypical ADs, we propose a deep-learning-based model that uses mammary gland distribution as prior information to detect ADs in digital breast tomosynthesis (DBT). First, information about gland distribution, including the Gabor magnitude, the Gabor orientation field, and a convergence map, was produced using a bank of Gabor filters and convergence measures. Then, this prior information and an original slice were input into a Faster R-CNN detection network to obtain the 2-D candidates for each slice. Finally, a 3-D aggregation scheme was employed to fuse these 2-D candidates into 3-D candidates for each DBT volume. Retrospectively, 64 typical AD volumes, 74 atypical AD volumes, and 127 normal volumes were collected. Six-fold cross-validation and the mean true positive fraction (MTPF) were used to evaluate the model. Compared to an existing convergence-based model, our proposed model achieved an MTPF of 0.53 ± 0.04, 0.61 ± 0.05, and 0.45 ± 0.04 for all DBT volumes, typical + normal volumes, and atypical + normal volumes, respectively. These results were significantly better than those of 0.36 ± 0.03, 0.46 ± 0.04, and 0.28 ± 0.04 for the convergence-based model (p ≪ 0.01). These results indicate that employing prior information on gland distribution in a deep learning method can improve the performance of CADe for AD.
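The Gabor-based priors (magnitude and orientation field) can be approximated with a small filter bank. Below is a sketch using scikit-image, with the frequency and orientation sampling as assumed settings rather than the paper's.

```python
import numpy as np
from skimage.filters import gabor

def gabor_prior(image: np.ndarray, frequency: float = 0.1, n_orient: int = 8):
    """Per-pixel Gabor magnitude and dominant orientation over a filter bank."""
    thetas = np.linspace(0, np.pi, n_orient, endpoint=False)
    responses = []
    for theta in thetas:
        real, imag = gabor(image, frequency=frequency, theta=theta)
        responses.append(np.hypot(real, imag))      # magnitude per orientation
    stack = np.stack(responses)                     # (n_orient, H, W)
    magnitude = stack.max(axis=0)                   # strongest response
    orientation = thetas[stack.argmax(axis=0)]      # orientation field
    return magnitude, orientation

# Toy usage on a random slice standing in for a DBT plane
mag, orient = gabor_prior(np.random.rand(128, 128))
print(mag.shape, orient.shape)
```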
Affiliation(s)
- Yue Li: School of Data and Computer Science, Sun Yat-sen University, Guangzhou, People's Republic of China (authors contributed equally to this work)
22
Chugh G, Kumar S, Singh N. Survey on Machine Learning and Deep Learning Applications in Breast Cancer Diagnosis. Cognit Comput 2021. DOI: 10.1007/s12559-020-09813-6.
23
Nogales A, García-Tejedor ÁJ, Monge D, Vara JS, Antón C. A survey of deep learning models in medical therapeutic areas. Artif Intell Med 2021;112:102020. PMID: 33581832; DOI: 10.1016/j.artmed.2021.102020.
Abstract
Artificial intelligence is a broad field comprising a wide range of techniques, of which deep learning is presently the one with the most impact. The medical field, where data are both complex and massive and where the decisions made by doctors carry great weight, is one of the areas in which deep learning techniques can have the greatest impact. A systematic review following the Cochrane recommendations was conducted by a multidisciplinary team of physicians, research methodologists, and computer scientists. This survey aims to identify the main therapeutic areas and the deep learning models used for diagnosis and treatment tasks. The most relevant databases included MedLine, Embase, Cochrane Central, Astrophysics Data System, Europe PubMed Central, Web of Science, and Science Direct. Inclusion and exclusion criteria were defined and applied in the first and second peer-review screenings. A set of quality criteria was developed to select the papers obtained after the second screening. Finally, 126 studies from the initial 3493 papers were selected, and 64 were described. The results show that the number of publications on deep learning in medicine is increasing every year. Convolutional neural networks are the most widely used models, and the most developed area is oncology, where they are used mainly for image analysis.
Affiliation(s)
- Alberto Nogales: CEIEC, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223, Pozuelo de Alarcón, Spain.
- Álvaro J García-Tejedor: CEIEC, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223, Pozuelo de Alarcón, Spain.
- Diana Monge: Faculty of Medicine, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223, Pozuelo de Alarcón, Spain.
- Juan Serrano Vara: CEIEC, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223, Pozuelo de Alarcón, Spain.
- Cristina Antón: Faculty of Medicine, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223, Pozuelo de Alarcón, Spain.
24.
25. Drukker K, Yan P, Sibley A, Wang G. Biomedical imaging and analysis through deep learning. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00004-1] [Citation(s) in RCA: 0]
26. Fully Automated Breast Density Segmentation and Classification Using Deep Learning. Diagnostics (Basel) 2020; 10:988. [PMID: 33238512] [PMCID: PMC7700286] [DOI: 10.3390/diagnostics10110988] [Citation(s) in RCA: 26]
Abstract
Breast density estimation by visual evaluation remains challenging because of low contrast and significant fluctuations in the fatty-tissue background of mammograms. The key to breast density classification is to detect the dense tissues in the mammographic images correctly. Many methods have been proposed for breast density estimation; nevertheless, most are not fully automated, and they are adversely affected by a low signal-to-noise ratio and by the variability of density in appearance and texture. This study develops a fully automated and digitalized breast tissue segmentation and classification pipeline using advanced deep learning techniques. A conditional generative adversarial network (cGAN) is applied to segment the dense tissues in mammograms. To complete the system for breast density classification, we propose a convolutional neural network (CNN) that classifies mammograms according to the Breast Imaging Reporting and Data System (BI-RADS) standard. The classification network is fed the segmented masks of dense tissue generated by the cGAN. For screening mammography, 410 images of 115 patients from the INbreast dataset were used. The proposed framework segments the dense regions with an accuracy, Dice coefficient, and Jaccard index of 98%, 88%, and 78%, respectively. Furthermore, we obtained a precision, sensitivity, and specificity of 97.85%, 97.85%, and 99.28%, respectively, for breast density classification. These findings are promising and show that the proposed deep-learning-based techniques can produce a clinically useful computer-aided tool for breast density analysis in digital mammography.
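The overlap metrics reported above have simple closed forms. A minimal NumPy sketch for binary masks (illustrative only, not the study's evaluation code):

```python
import numpy as np

def dice_and_jaccard(pred, target):
    """Dice coefficient and Jaccard index for binary segmentation masks.
    Dice = 2|A∩B| / (|A| + |B|);  Jaccard = |A∩B| / |A∪B|."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum() + 1e-8)
    jaccard = inter / (union + 1e-8)
    return dice, jaccard
```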
27. Borkowski K, Rossi C, Ciritsis A, Marcon M, Hejduk P, Stieb S, Boss A, Berger N. Fully automatic classification of breast MRI background parenchymal enhancement using a transfer learning approach. Medicine (Baltimore) 2020; 99:e21243. [PMID: 32702902] [PMCID: PMC7373599] [DOI: 10.1097/md.0000000000021243] [Citation(s) in RCA: 9]
Abstract
Marked enhancement of the fibroglandular tissue on contrast-enhanced breast magnetic resonance imaging (MRI) may affect lesion detection and classification and is suggested to be associated with a higher risk of developing breast cancer. Background parenchymal enhancement (BPE) is qualitatively classified according to the BI-RADS atlas into the categories "minimal," "mild," "moderate," and "marked." The purpose of this study was to train a deep convolutional neural network (dCNN) for standardized and automatic classification of BPE categories. This IRB-approved retrospective study included 11,769 single MR images from 149 patients. The MR images were derived from the subtraction between the first post-contrast volume and the native T1-weighted images. A hierarchical approach was implemented, relying on two dCNN models: one for detecting MR slices that image breast tissue and one for BPE classification. Data annotation was performed by two board-certified radiologists, whose consensus was chosen as the reference for BPE classification. The clinical performances of the single readers and of the dCNN were statistically compared using the quadratic Cohen's kappa. Slices depicting the breast were classified with training, validation, and real-world (test) accuracies of 98%, 96%, and 97%, respectively. Over the four classes, BPE classification reached mean accuracies of 74% for training, 75% for validation, and 75% for the real-world dataset. Compared with the reference, the inter-reader reliabilities for the radiologists were 0.780 (reader 1) and 0.679 (reader 2), while the reliability for the dCNN model was 0.815. Automatic classification of BPE can thus be performed with high accuracy and can support the standardization of tissue classification in MRI.
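The quadratic Cohen's kappa used for the reader comparison penalizes disagreements by the squared distance between ordinal categories. A minimal scikit-learn sketch with hypothetical BPE labels (not the study's data):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical BPE labels: 0=minimal, 1=mild, 2=moderate, 3=marked
reference = [0, 1, 2, 3, 1, 2, 0, 3]   # e.g., the radiologists' consensus
model     = [0, 1, 2, 2, 1, 3, 0, 3]   # e.g., dCNN predictions

# weights="quadratic" penalizes a minimal-vs-marked disagreement far more
# heavily than a minimal-vs-mild one, respecting the ordinal scale
kappa = cohen_kappa_score(reference, model, weights="quadratic")
print(f"quadratic kappa = {kappa:.3f}")
```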
28. Rampun A, Morrow PJ, Scotney BW, Wang H. Breast density classification in mammograms: An investigation of encoding techniques in binary-based local patterns. Comput Biol Med 2020; 122:103842. [DOI: 10.1016/j.compbiomed.2020.103842] [Citation(s) in RCA: 13]
29. Chan HP, Samala RK, Hadjiiski LM. CAD and AI for breast cancer-recent development and challenges. Br J Radiol 2020; 93:20190580. [PMID: 31742424] [PMCID: PMC7362917] [DOI: 10.1259/bjr.20190580] [Citation(s) in RCA: 80]
Abstract
Computer-aided diagnosis (CAD) has been a popular area of research and development in the past few decades. In CAD, machine learning methods and multidisciplinary knowledge and techniques are used to analyze patient information, and the results can be used to assist clinicians in their decision-making process. CAD may analyze imaging information alone or in combination with other clinical data. It may provide the analyzed information directly to the clinician or correlate the analyzed results with the likelihood of certain diseases based on statistical modeling of past cases in the population. CAD systems can be developed to provide decision support for many applications in patient care, such as lesion detection, characterization, cancer staging, treatment planning and response assessment, and recurrence and prognosis prediction. The state-of-the-art machine learning technique known as deep learning (DL) has revolutionized speech and text recognition as well as computer vision. The potential for major breakthroughs by DL in medical image analysis and other CAD applications has generated unprecedented excitement about applying CAD, or artificial intelligence (AI), to medicine in general and to radiology in particular. In this paper, we provide an overview of recent developments in CAD using DL in breast imaging and discuss some challenges and practical issues that may affect the advancement of artificial intelligence and its integration into the clinical workflow.
Affiliation(s)
- Heang-Ping Chan: Department of Radiology, University of Michigan, Ann Arbor, MI, United States
- Ravi K. Samala: Department of Radiology, University of Michigan, Ann Arbor, MI, United States
30. Poortmans PMP, Takanen S, Marta GN, Meattini I, Kaidar-Person O. Winter is over: The use of Artificial Intelligence to individualise radiation therapy for breast cancer. Breast 2020; 49:194-200. [PMID: 31931265] [PMCID: PMC7375562] [DOI: 10.1016/j.breast.2019.11.011] [Citation(s) in RCA: 20]
Abstract
Artificial intelligence has demonstrated its value for the automated contouring of organs at risk and target volumes, as well as for the auto-planning of radiation dose distributions, in terms of saving time, increasing consistency, and improving dose-volume parameters. Future developments include incorporating dose and outcome data to optimise dose distributions, with optimal coverage of high-risk areas and, at the same time, limited doses to low-risk areas. A practically infinite gradient of volumes and doses for delivering spatially adjusted radiation can be generated, making it possible to avoid unnecessary radiation to organs at risk. To this end, data about patient-, tumour-, and treatment-related factors have to be combined with dose distributions and outcome-containing databases.
Affiliation(s)
- Silvia Takanen: Institut Curie, Department of Radiation Oncology, Paris, France
- Gustavo Nader Marta: Department of Radiation Oncology, Hospital Sírio-Libanês, Brazil; Department of Radiology and Oncology, Radiation Oncology, Instituto Do Câncer Do Estado de São Paulo (ICESP), Faculdade de Medicina da Universidade de São Paulo, Brazil
- Icro Meattini: Department of Experimental and Clinical Biomedical Sciences "M. Serio", University of Florence, Florence, Italy; Radiation Oncology Unit, Oncology Department, Azienda Ospedaliero-Universitaria Careggi, Florence, Italy
- Orit Kaidar-Person: Radiation Oncology Unit, Breast Radiation Unit, Sheba Tel Ha'shomer, Ramat Gan, Israel
31. Geras KJ, Mann RM, Moy L. Artificial Intelligence for Mammography and Digital Breast Tomosynthesis: Current Concepts and Future Perspectives. Radiology 2019; 293:246-259. [PMID: 31549948] [DOI: 10.1148/radiol.2019182627] [Citation(s) in RCA: 151]
Abstract
Although computer-aided diagnosis (CAD) is widely used in mammography, conventional CAD programs that use prompts to indicate potential cancers on mammograms have not led to an improvement in diagnostic accuracy. Because of advances in machine learning, especially the use of deep (multilayered) convolutional neural networks, artificial intelligence has undergone a transformation that has improved the quality of model predictions. Recently, such deep learning algorithms have been applied to mammography and digital breast tomosynthesis (DBT). In this review, the authors explain how deep learning works in the context of mammography and DBT and define the important technical challenges. Subsequently, they discuss the current status and future perspectives of artificial intelligence-based clinical applications for mammography, DBT, and radiomics. Available algorithms are advanced and approach the performance of radiologists, especially for cancer detection and risk prediction at mammography. However, clinical validation is largely lacking, and it is not clear how the power of deep learning should be used to optimize practice. Further development of deep learning models is necessary for DBT, and this requires the collection of larger databases. It is expected that deep learning will eventually have an important role in DBT, including the generation of synthetic images.
Affiliation(s)
- Krzysztof J Geras, Ritse M Mann, Linda Moy: From the Center for Biomedical Imaging (K.J.G., L.M.), Center for Data Science (K.J.G.), Center for Advanced Imaging Innovation and Research (L.M.), and Laura and Isaac Perlmutter Cancer Center (L.M.), New York University School of Medicine, 160 E 34th St, 3rd Floor, New York, NY 10016; Department of Radiology and Nuclear Medicine, Radboud University Medical Centre, Nijmegen, the Netherlands (R.M.M.); and Department of Radiology, the Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, the Netherlands (R.M.M.)
32. Jiang J, Zhang Y, Lu Y, Guo Y, Chen H. A Radiomic feature-based Nipple Detection Algorithm on Digital Mammography. Med Phys 2019; 46:4381-4391. [PMID: 31242321] [DOI: 10.1002/mp.13684] [Citation(s) in RCA: 1]
Abstract
PURPOSE: In the diagnosis and detection of breast lesions, the nipple is an important anatomical landmark that can be used for registration across multiview mammograms. In this study, we propose a new nipple detection algorithm for digital mammography (DM) that applies pixel classification based on geometric and radiomic features extracted from breast boundary regions. METHODS: The imaging characteristics of nipples are closely related to their visibility on mammograms. To locate the nipple, a searching area is first determined from the breast boundary and the chest wall orientation. Two different approaches are developed, for obvious and subtle nipples, respectively. For obvious nipples, a top-hat transformation is employed to detect the nipple region, whose geometric center is taken as the nipple position. For subtle nipples, the curved searching area near the breast boundary is mapped onto a Cartesian plane through a revised rubber-band straightening transformation. On the straightened searching area, geometric and radiomic features are calculated along the normal direction of the breast boundary, and a random forest classifier is trained for subtle nipple localization. RESULTS: Seven hundred and twenty-one DMs were collected for the evaluation of the proposed algorithm, with nipple locations manually identified by an experienced radiologist as the reference standard. The average Euclidean distance between the computed nipple position and the reference standard was 2.69 mm (obvious) and 7.81 mm (subtle), respectively. A total of 97.61% of the obvious nipples (613/628) and 88.17% of the subtle nipples (82/93) were detected within a 10-mm radius centered on the reference standard. CONCLUSIONS: The evaluation results show that the proposed method is effective for nipple detection on DM, especially for subtle nipples.
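The white top-hat transformation used here for obvious nipples isolates bright structures smaller than the structuring element. A minimal OpenCV sketch (the function name, kernel size, and threshold are illustrative assumptions, not the paper's parameters):

```python
import cv2
import numpy as np

def tophat_candidates(roi, kernel_px=41, thresh_quantile=0.99):
    """White top-hat: original minus morphological opening. This keeps
    bright protrusions (such as a nipple) smaller than the kernel."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_px, kernel_px))
    tophat = cv2.morphologyEx(roi, cv2.MORPH_TOPHAT, kernel)
    # Keep the brightest top-hat responses as candidate nipple pixels;
    # the geometric center of the surviving region would give the position
    mask = tophat > np.quantile(tophat, thresh_quantile)
    return tophat, mask
```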
Affiliation(s)
- Jiayu Jiang: School of Data and Computer Science, Sun Yat-sen University, Guangzhou, 510006, China; Guangdong Province Key Laboratory of Computational Science, Guangzhou, 510006, China
- Yaqin Zhang: Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, 519000, China
- Yao Lu: School of Data and Computer Science, Sun Yat-sen University, Guangzhou, 510006, China; Guangdong Province Key Laboratory of Computational Science, Guangzhou, 510006, China
- Yanhui Guo: Department of Computer Science, University of Illinois, Springfield, Illinois, 62703, USA
- Haibin Chen: School of Data and Computer Science, Sun Yat-sen University, Guangzhou, 510006, China
33. Abdelhafiz D, Yang C, Ammar R, Nabavi S. Deep convolutional neural networks for mammography: advances, challenges and applications. BMC Bioinformatics 2019; 20:281. [PMID: 31167642] [PMCID: PMC6551243] [DOI: 10.1186/s12859-019-2823-4] [Citation(s) in RCA: 73]
Abstract
BACKGROUND: The limitations of traditional computer-aided detection (CAD) systems for mammography, the extreme importance of early detection of breast cancer, and the high impact of false diagnoses drive researchers to investigate deep learning (DL) methods for mammograms (MGs). Recent breakthroughs in DL, in particular convolutional neural networks (CNNs), have achieved remarkable advances in the medical field. Specifically, CNNs are used in mammography for lesion localization and detection, risk assessment, image retrieval, and classification tasks. CNNs also help radiologists provide more accurate diagnoses by delivering precise quantitative analysis of suspicious lesions. RESULTS: In this survey, we conducted a detailed review of the strengths, limitations, and performance of the most recent CNN applications in analyzing MG images. It summarizes 83 research studies that apply CNNs to various tasks in mammography and focuses on the best practices used in these studies to improve diagnostic accuracy. The survey also provides deep insight into the CNN architectures used for various tasks, describes the most common publicly available MG repositories, and highlights their main features and strengths. CONCLUSIONS: The mammography research community can use this survey as a basis for current and future studies. The comparison of common publicly available MG repositories guides the community in selecting the most appropriate database for a given application. Moreover, this survey lists the best practices that improve the performance of CNNs, including the pre-processing of images and the use of multi-view images. In addition, other listed techniques, such as transfer learning (TL), data augmentation, batch normalization, and dropout, are appealing solutions to reduce overfitting and increase the generalization of CNN models. Finally, this survey identifies the research challenges and directions that require further investigation by the community.
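Several of the best practices this survey lists (transfer learning, data augmentation, dropout) combine in a few lines of PyTorch. This is a generic sketch under the assumption of a two-class mammogram-patch task, not code from any surveyed study; it requires torchvision ≥ 0.13 for the `weights` argument:

```python
import torch.nn as nn
from torchvision import models, transforms

# Transfer learning: start from ImageNet weights, replace the classifier head
backbone = models.resnet18(weights="DEFAULT")
backbone.fc = nn.Sequential(
    nn.Dropout(p=0.5),                       # dropout to curb overfitting
    nn.Linear(backbone.fc.in_features, 2),   # benign vs. malignant (assumed)
)

# Data augmentation of the kind commonly used for mammography patches
train_tf = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ToTensor(),
])
```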
Affiliation(s)
- Dina Abdelhafiz: Department of Computer Science and Engineering, University of Connecticut, Storrs, 06269 CT USA; The Informatics Research Institute (IRI), City of Scientific Research and Technological Application (SRTA-City), New Borg El-Arab, Egypt
- Clifford Yang: Department of Diagnostic Imaging, University of Connecticut Health Center, Farmington, 06030 CT USA
- Reda Ammar: Department of Computer Science and Engineering, University of Connecticut, Storrs, 06269 CT USA
- Sheida Nabavi: Department of Computer Science and Engineering, University of Connecticut, Storrs, 06269 CT USA
34. Chiao JY, Chen KY, Liao KYK, Hsieh PH, Zhang G, Huang TC. Detection and classification the breast tumors using mask R-CNN on sonograms. Medicine (Baltimore) 2019; 98:e15200. [PMID: 31083152] [PMCID: PMC6531264] [DOI: 10.1097/md.0000000000015200] [Citation(s) in RCA: 72]
Abstract
Breast cancer is among the most harmful diseases for women and has the highest morbidity. An efficient way to decrease its mortality is to diagnose the cancer earlier through screening. Clinically, the best screening approach for Asian women is ultrasound imaging combined with biopsy. However, biopsy is invasive and provides incomplete information about the lesion. The aim of this study was to build a model for the automatic detection, segmentation, and classification of breast lesions in ultrasound images. Based on deep learning, a technique using Mask regions with convolutional neural network (Mask R-CNN) was developed for lesion detection and for differentiation between benign and malignant lesions. The mean average precision was 0.75 for detection and segmentation, and the overall accuracy of benign/malignant classification was 85%. The proposed method provides a comprehensive and noninvasive way to detect and classify breast lesions.
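Mask R-CNN of the kind used here is available off the shelf in torchvision. A minimal inference sketch with generic COCO-pretrained weights (illustrative only; a real system would be fine-tuned on annotated sonograms, and the image tensor below is a stand-in):

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT")  # COCO weights, for illustration
model.eval()

image = torch.rand(3, 448, 448)  # placeholder for a preprocessed sonogram in [0, 1]
with torch.no_grad():
    out = model([image])[0]      # dict with 'boxes', 'labels', 'scores', 'masks'

keep = out["scores"] > 0.5       # keep only confident detections
boxes, masks = out["boxes"][keep], out["masks"][keep]
```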
Affiliation(s)
- Jui-Ying Chiao: Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung
- Kuan-Yung Chen: Department of Radiology, Chang Bing Show Chwan Memorial Hospital, Changhua
- Ken Ying-Kai Liao: Artificial Intelligence Center, China Medical University Hospital, Taichung, Taiwan
- Po-Hsin Hsieh: Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung
- Geoffrey Zhang: Department of Radiation Oncology, Moffitt Cancer Center, Tampa, FL
- Tzung-Chi Huang: Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung; Artificial Intelligence Center, China Medical University Hospital, Taichung, Taiwan; Department of Bioinformatics and Medical Engineering, Asia University, Taichung, Taiwan
35. A Review of the Role of Augmented Intelligence in Breast Imaging: From Automated Breast Density Assessment to Risk Stratification. AJR Am J Roentgenol 2019; 212:259-270. [DOI: 10.2214/ajr.18.20391] [Citation(s) in RCA: 19]
36. Soffer S, Ben-Cohen A, Shimon O, Amitai MM, Greenspan H, Klang E. Convolutional Neural Networks for Radiologic Images: A Radiologist's Guide. Radiology 2019; 290:590-606. [PMID: 30694159] [DOI: 10.1148/radiol.2018180547] [Citation(s) in RCA: 283]
Abstract
Deep learning has rapidly advanced in various fields within the past few years and has recently gained particular attention in the radiology community. This article provides an introduction to deep learning technology and presents the stages entailed in the design process of deep learning radiology research. In addition, the article details the results of a survey of the application of deep learning, specifically convolutional neural networks, to radiologic imaging, focused on five major organ systems: chest, breast, brain, musculoskeletal system, and abdomen and pelvis. The survey of the studies is followed by a discussion of current challenges and future trends and their potential implications for radiology. This article may be used as a guide for radiologists planning research in the field of radiologic image analysis using convolutional neural networks.
Affiliation(s)
- Shelly Soffer, Avi Ben-Cohen, Orit Shimon, Michal Marianne Amitai, Hayit Greenspan, Eyal Klang: From the Department of Diagnostic Imaging, Sheba Medical Center, Emek HaEla St 1, Ramat Gan, Israel (S.S., M.M.A., E.K.); Faculty of Engineering, Department of Biomedical Engineering, Medical Image Processing Laboratory, Tel Aviv University, Tel Aviv, Israel (A.B., H.G.); and Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel (S.S., O.S.)
37. Sahiner B, Pezeshk A, Hadjiiski LM, Wang X, Drukker K, Cha KH, Summers RM, Giger ML. Deep learning in medical imaging and radiation therapy. Med Phys 2019; 46:e1-e36. [PMID: 30367497] [PMCID: PMC9560030] [DOI: 10.1002/mp.13264] [Citation(s) in RCA: 379]
Abstract
The goals of this review paper on deep learning (DL) in medical imaging and radiation therapy are to (a) summarize what has been achieved to date; (b) identify common and unique challenges, and the strategies researchers have taken to address them; and (c) identify some of the promising avenues for the future, in terms of both applications and technical innovations. We introduce the general principles of DL and convolutional neural networks, survey five major areas of application of DL in medical imaging and radiation therapy, identify common themes, discuss methods for dataset expansion, and conclude by summarizing lessons learned, remaining challenges, and future directions.
Affiliation(s)
- Berkman Sahiner: DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Aria Pezeshk: DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Xiaosong Wang: Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD 20892-1182, USA
- Karen Drukker: Department of Radiology, University of Chicago, Chicago, IL 60637, USA
- Kenny H. Cha: DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Ronald M. Summers: Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD 20892-1182, USA