1
Luo L, Wang X, Lin Y, Ma X, Tan A, Chan R, Vardhanabhuti V, Chu WC, Cheng KT, Chen H. Deep Learning in Breast Cancer Imaging: A Decade of Progress and Future Directions. IEEE Rev Biomed Eng 2025;18:130-151. [PMID: 38265911] [DOI: 10.1109/rbme.2024.3357877]
Abstract
Since 2020, breast cancer has had the highest incidence rate worldwide among all malignancies. Breast imaging plays a significant role in early diagnosis and intervention to improve the outcomes of breast cancer patients. In the past decade, deep learning has shown remarkable progress in breast cancer imaging analysis, holding great promise in interpreting the rich information and complex context of breast imaging modalities. Considering the rapid improvement of deep learning technology and the increasing severity of breast cancer, it is critical to summarize past progress and identify the challenges that remain to be addressed. This paper provides an extensive review of deep learning-based breast cancer imaging research, covering studies on mammograms, ultrasound, magnetic resonance imaging, and digital pathology images over the past decade. The major deep learning methods and their applications to imaging-based screening, diagnosis, treatment response prediction, and prognosis are elaborated and discussed. Drawing on the findings of this survey, we present a comprehensive discussion of the challenges and potential avenues for future research in deep learning-based breast cancer imaging.
2
AlJabri M, Alghamdi M, Collado-Mesa F, Abdel-Mottaleb M. Recurrent attention U-Net for segmentation and quantification of breast arterial calcifications on synthesized 2D mammograms. PeerJ Comput Sci 2024;10:e2076. [PMID: 38855260] [PMCID: PMC11157579] [DOI: 10.7717/peerj-cs.2076]
Abstract
Breast arterial calcifications (BAC) are a type of calcification commonly observed on mammograms and are generally considered benign and not associated with breast cancer. However, there is accumulating observational evidence of an association between BAC and cardiovascular disease, the leading cause of death in women. We present a deep learning method that could assist radiologists in detecting and quantifying BAC in synthesized 2D mammograms. The model is a recurrent attention U-Net whose encoder and decoder modules are built from multiple recurrent blocks, with attention modules placed between the corresponding encoder and decoder stages. As in a U-shaped network, skip connections link the encoder to the decoder. The attention modules enhance the capture of long-range dependencies and enable the network to distinguish BAC from the background, whereas the recurrent blocks provide richer feature representations. The model was evaluated on a dataset of 2,000 synthesized 2D mammogram images, achieving 99.8861% overall accuracy, 69.6107% sensitivity, a 66.5758% F1 score, and a 59.5498% Jaccard coefficient. The presented model achieved promising performance compared with related models.
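The attention module in such a recurrent attention U-Net is typically an additive attention gate that reweights skip-connection features using the decoder's gating signal. A minimal NumPy sketch of that gate follows; the shapes, the single 1x1-convolution-style weight matrices, and the ReLU/sigmoid choices are illustrative assumptions in the style of Attention U-Net, not the authors' exact implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, W_x, W_g, psi):
    """Additive attention gate over a U-Net skip connection.

    x:   (C, H, W) encoder (skip-connection) features
    g:   (C, H, W) decoder gating features, already resized to match x
    W_x, W_g: (C_int, C) channel-mixing weights (1x1 convolutions)
    psi: (C_int,) weights mapping joint features to one score per pixel
    Returns the gated skip features and the (H, W) attention map.
    """
    C, H, W = x.shape
    xf = x.reshape(C, -1)                       # flatten spatial dims
    gf = g.reshape(C, -1)
    q = np.maximum(W_x @ xf + W_g @ gf, 0.0)    # ReLU(W_x x + W_g g)
    alpha = sigmoid(psi @ q)                    # per-pixel coefficient in (0, 1)
    return (xf * alpha).reshape(C, H, W), alpha.reshape(H, W)
```

Attention coefficients near 1 let skip features pass, while coefficients near 0 suppress them, which is what helps a segmentation network separate sparse BAC pixels from the surrounding tissue.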
Affiliation(s)
- Manar AlJabri
- Department of Computer Science and Artificial Intelligence, Umm Al-Qura University, Makkah, Makkah, Saudi Arabia
- King Abdul Aziz University, Jeddah, Makkah, Saudi Arabia
- Manal Alghamdi
- Department of Computer Science and Artificial Intelligence, Umm Al-Qura University, Makkah, Makkah, Saudi Arabia
- Fernando Collado-Mesa
- Department of Radiology, Miller School of Medicine, University of Miami, Miami, Florida, United States
- Mohamed Abdel-Mottaleb
- Department of Electrical and Computer Engineering, University of Miami, Miami, Florida, United States
3
Yue P, Li Z, Zhou M, Wang X, Yang P. Wearable-Sensor-Based Weakly Supervised Parkinson's Disease Assessment with Data Augmentation. Sensors (Basel) 2024;24:1196. [PMID: 38400357] [PMCID: PMC10892773] [DOI: 10.3390/s24041196]
Abstract
Parkinson's disease (PD) is the second most prevalent neurodegenerative disorder in the world. Wearable technology has proved useful in the computer-aided diagnosis and long-term monitoring of PD in recent years. The fundamental issue remains how to assess the severity of PD using wearable devices in an efficient and accurate manner. In the real-world free-living environment, however, two difficult issues, poor annotation and class imbalance, can impede the automatic assessment of PD. To address these challenges, we propose a novel framework for assessing PD severity in a free-living environment. Specifically, we use clustering methods to learn latent categories from the same activities, while latent Dirichlet allocation (LDA) topic models are utilized to capture latent features from multiple activities. Then, to mitigate the impact of data imbalance, we augment bag-level data while retaining key instance prototypes. To demonstrate the efficacy of the proposed framework, we collected a dataset of wearable-sensor signals from 83 individuals in real-life free-living conditions. The experimental results show that our framework achieves 73.48% accuracy in the fine-grained (normal, mild, moderate, severe) classification of PD severity based on hand movements. Overall, this study contributes to more accurate PD self-diagnosis in the wild, allowing doctors to provide remote drug intervention guidance.
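The clustering step, learning latent categories from instances of the same activity and then describing each weakly labelled bag through them, can be sketched with a tiny k-means and a bag histogram. Everything here (feature shapes, the number of categories, the histogram representation) is an illustrative assumption; the paper's LDA topic model and prototype-preserving augmentation are omitted:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Tiny k-means: learn `k` latent categories from instance features X (n, d)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def bag_histogram(bag, centers):
    """Represent a bag (e.g. one activity recording) as the normalized
    histogram of its instances' nearest latent categories."""
    labels = np.argmin(((bag[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

A bag-level classifier can then be trained on these fixed-length histograms even though only bag labels (not instance labels) are available, which is the essence of the weakly supervised setup.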
Affiliation(s)
- Peng Yue
- Department of Computer Science, University of Sheffield, Sheffield S10 2TN, UK
- AntData Ltd., Liverpool L16 2AE, UK
- Ziheng Li
- Department of Software, Yunnan University, Kunming 650106, China
- Menghui Zhou
- Department of Computer Science, University of Sheffield, Sheffield S10 2TN, UK
- Xulong Wang
- Department of Computer Science, University of Sheffield, Sheffield S10 2TN, UK
- Po Yang
- Department of Computer Science, University of Sheffield, Sheffield S10 2TN, UK
4
Hussain S, Lafarga-Osuna Y, Ali M, Naseem U, Ahmed M, Tamez-Peña JG. Deep learning, radiomics and radiogenomics applications in the digital breast tomosynthesis: a systematic review. BMC Bioinformatics 2023;24:401. [PMID: 37884877] [PMCID: PMC10605943] [DOI: 10.1186/s12859-023-05515-6]
Abstract
BACKGROUND Recent advancements in computing power and state-of-the-art algorithms have enabled more accessible and accurate diagnosis of numerous diseases. In addition, the development of de novo areas of imaging science, such as radiomics and radiogenomics, has added tools for personalizing healthcare and better stratifying patients. These techniques associate imaging phenotypes with the related disease genes. Various imaging modalities have been used for years to diagnose breast cancer. Nonetheless, digital breast tomosynthesis (DBT), a state-of-the-art technique, has produced comparatively promising results. DBT, a 3D form of mammography, is rapidly replacing conventional 2D mammography. This technological advancement is key for AI algorithms that accurately interpret medical images. OBJECTIVE AND METHODS This paper presents a comprehensive review of deep learning (DL), radiomics and radiogenomics in breast image analysis. The review focuses on DBT, its extracted synthetic mammography (SM), and full-field digital mammography (FFDM). Furthermore, this survey provides systematic knowledge about DL, radiomics, and radiogenomics for beginners and advanced-level researchers. RESULTS A total of 500 articles were identified, of which 30 studies met the inclusion criteria. Parallel benchmarking of radiomics, radiogenomics, and DL models applied to DBT images could give clinicians and researchers alike greater awareness as they consider clinical deployment or development of new models. This review provides a comprehensive guide to understanding the current state of early breast cancer detection using DBT images. CONCLUSION Using this survey, investigators with various backgrounds can readily pursue interdisciplinary science and new DL, radiomics, and radiogenomics directions for DBT.
Affiliation(s)
- Sadam Hussain
- School of Engineering and Sciences, Tecnológico de Monterrey, Ave. Eugenio Garza Sada 2501, 64849, Monterrey, Mexico
- Yareth Lafarga-Osuna
- School of Engineering and Sciences, Tecnológico de Monterrey, Ave. Eugenio Garza Sada 2501, 64849, Monterrey, Mexico
- Mansoor Ali
- School of Engineering and Sciences, Tecnológico de Monterrey, Ave. Eugenio Garza Sada 2501, 64849, Monterrey, Mexico
- Usman Naseem
- College of Science and Engineering, James Cook University, Cairns, Australia
- Masroor Ahmed
- School of Engineering and Sciences, Tecnológico de Monterrey, Ave. Eugenio Garza Sada 2501, 64849, Monterrey, Mexico
- Jose Gerardo Tamez-Peña
- School of Medicine and Health Sciences, Tecnológico de Monterrey, Ave. Eugenio Garza Sada 2501, 64849, Monterrey, Mexico
5
Ren Y, Liu X, Ge J, Liang Z, Xu X, Grimm LJ, Go J, Marks JR, Lo JY. Ipsilateral Lesion Detection Refinement for Tomosynthesis. IEEE Trans Med Imaging 2023;42:3080-3090. [PMID: 37227903] [PMCID: PMC11033619] [DOI: 10.1109/tmi.2023.3280135]
Abstract
Computer-aided detection (CAD) frameworks for breast cancer screening have been researched for several decades. Early adoption of deep-learning models in CAD frameworks showed greatly improved detection performance compared to traditional CAD on single-view images. Recently, studies have improved performance further by merging information from multiple views within each screening exam. Clinically, the integration of lesion correspondence during screening is a complicated decision process that depends on the correct execution of several referencing steps. However, most multi-view CAD frameworks are deep-learning-based black-box techniques. Fully end-to-end designs make it very difficult to analyze model behaviors and fine-tune performance. More importantly, the black-box nature of these techniques discourages clinical adoption due to the lack of explicit reasoning for each multi-view referencing step. Therefore, there is a need for a multi-view detection framework that can not only detect cancers accurately but also provide step-by-step, multi-view reasoning. In this work, we present Ipsilateral-Matching-Refinement Networks (IMR-Net) for digital breast tomosynthesis (DBT) lesion detection across multiple views. The proposed framework adaptively refines single-view detection scores based on explicit ipsilateral lesion matching. IMR-Net is built on a robust single-view detection CAD pipeline, using a commercial development DBT dataset of 24,675 volumetric views from 8,034 exams. Performance is measured using location-based, case-level receiver operating characteristic (ROC) and case-level free-response ROC (FROC) analysis.
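The central refinement idea, adjusting a single-view detection score using the best-matching candidate in the ipsilateral view, can be sketched as below. The matching matrix, the linear blending rule, and the weight `w` are illustrative assumptions, not IMR-Net's learned refinement:

```python
import numpy as np

def refine_scores(scores_cc, scores_mlo, match, w=0.5):
    """Adaptively refine single-view detection scores with ipsilateral evidence.

    scores_cc:  (n,) candidate scores in the CC view
    scores_mlo: (m,) candidate scores in the MLO view
    match:      (n, m) 0/1 matrix, 1 where candidates are judged to correspond
                (e.g. similar distance from the nipple in both views)
    Returns refined CC scores: each matched candidate is blended with the
    score of its best-matching MLO candidate; unmatched candidates keep
    their original score.
    """
    refined = scores_cc.copy()
    for i in range(len(scores_cc)):
        if match[i].any():                        # has an ipsilateral match
            j = np.argmax(match[i] * scores_mlo)  # best-scoring matched candidate
            refined[i] = (1 - w) * scores_cc[i] + w * scores_mlo[j]
    return refined
```

The appeal of this structure is that each refinement is attributable to an explicit match, which is exactly the step-by-step multi-view reasoning the abstract argues black-box designs lack.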
6
Chen X, Wang X, Lv J, Qin G, Zhou Z. An integrated network based on 2D/3D feature correlations for benign-malignant tumor classification and uncertainty estimation in digital breast tomosynthesis. Phys Med Biol 2023;68:175046. [PMID: 37582379] [DOI: 10.1088/1361-6560/acf092]
Abstract
OBJECTIVE Classification of benign and malignant tumors is important for the early diagnosis of breast cancer. Over the last decade, digital breast tomosynthesis (DBT) has gradually become an effective imaging modality for breast cancer diagnosis due to its ability to generate three-dimensional (3D) visualizations. However, computer-aided diagnosis (CAD) systems based on 3D images require high computational costs and time, and 3D images contain considerable redundant information. Most CAD systems are instead designed for 2D images, which may lose the spatial depth information of tumors. In this study, we propose a 2D/3D integrated network for the diagnosis of benign and malignant breast tumors. APPROACH We introduce a correlation strategy to describe feature correlations between slices in 3D volumes, corresponding to the tissue relationship and spatial depth features of tumors. The correlation strategy can extract spatial features at little computational cost. In the prediction stage, both the 3D spatial correlation features and the 2D features are used for classification. MAIN RESULTS Experimental results demonstrate that the proposed framework achieves higher accuracy and reliability than pure 2D or 3D models, with an area under the curve of 0.88 and an accuracy of 0.82. The parameter size of the feature extractor in our framework is only 35% of that of the 3D models. In reliability evaluations, the proposed model is more reliable than pure 2D or 3D models because of its effective and nonredundant features. SIGNIFICANCE This study successfully combines 3D spatial correlation features and 2D features for the diagnosis of benign and malignant breast tumors in DBT. In addition to high accuracy and low computational cost, the model can output an uncertainty value, giving it the potential to be applied in the clinic.
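A correlation strategy of this kind might be sketched as computing Pearson correlations between per-slice feature vectors, which costs one matrix product rather than 3D convolutions. The slice-descriptor layout below is an assumed simplification of the paper's design:

```python
import numpy as np

def slice_correlation_features(F):
    """Cheap 3D-context features from per-slice descriptors.

    F: (S, d) matrix, one d-dimensional feature vector per DBT slice,
       e.g. produced by a shared 2D CNN backbone.
    Returns the (S, S) matrix of Pearson correlations between slices;
    off-diagonal entries encode how tissue appearance persists in depth.
    """
    Fc = F - F.mean(axis=1, keepdims=True)           # center each slice descriptor
    norms = np.linalg.norm(Fc, axis=1, keepdims=True)
    Fn = Fc / np.clip(norms, 1e-12, None)            # unit-normalize rows
    return Fn @ Fn.T                                 # pairwise correlations
```

The S-by-S correlation map is a fixed-size summary regardless of feature dimension, so it can be concatenated with 2D features for the final classifier at a fraction of the parameter cost of a 3D backbone.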
Affiliation(s)
- Xi Chen
- School of Information and Communications Engineering, Xi'an Jiaotong University, Xi'an, 710049, Shaanxi, People's Republic of China
- Xiaoyu Wang
- School of Information and Communications Engineering, Xi'an Jiaotong University, Xi'an, 710049, Shaanxi, People's Republic of China
- Jiahuan Lv
- School of Information and Communications Engineering, Xi'an Jiaotong University, Xi'an, 710049, Shaanxi, People's Republic of China
- Genggeng Qin
- Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, Guangdong, People's Republic of China
- Zhiguo Zhou
- Department of Biostatistics and Data Science, University of Kansas Medical Center, Kansas City, KS-66160, United States of America
7
Kim S, Lee E. A deep attention LSTM embedded aggregation network for multiple histopathological images. PLoS One 2023;18:e0287301. [PMID: 37384648] [PMCID: PMC10310006] [DOI: 10.1371/journal.pone.0287301]
Abstract
Recent advancements in computer vision and neural networks have facilitated medical imaging survival analysis for various medical applications. However, challenges arise when patients have multiple images from multiple lesions, as current deep learning methods produce a separate survival prediction for each image, complicating result interpretation. To address this issue, we developed a deep learning survival model that provides accurate predictions at the patient level. We propose a deep attention long short-term memory embedded aggregation network (DALAN) for histopathology images, designed to simultaneously perform feature extraction and aggregation of lesion images. This design enables the model to efficiently learn imaging features from lesions and aggregate lesion-level information to the patient level. DALAN comprises a weight-shared CNN, attention layers, and LSTM layers. The attention layer calculates the significance of each lesion image, while the LSTM layer combines the weighted information to produce an all-encompassing representation of the patient's lesion data. We evaluated DALAN against several naive aggregation methods and competing models on simulated and real datasets. DALAN outperformed the competing methods in terms of c-index on the MNIST and Cancer dataset simulations, and on the real TCGA dataset it achieved a higher c-index of 0.803±0.006. DALAN thus effectively aggregates multiple histopathology images into a comprehensive survival model using attention and LSTM mechanisms.
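The aggregation stage described above, attention weights over lesion images followed by an LSTM pass that yields one patient-level vector, can be sketched in plain NumPy. The parameter shapes, the softmax attention, and returning the final hidden state are illustrative assumptions rather than DALAN's exact architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def aggregate_patient(X, v, W, U, b, h_dim):
    """Aggregate lesion-level features X (n_lesions, d) into one patient vector.

    v: (d,)          attention parameters scoring each lesion's significance
    W: (4*h, d), U: (4*h, h), b: (4*h,)  LSTM cell parameters
    """
    # 1) attention: softmax over lesion significance scores
    s = X @ v
    a = np.exp(s - s.max())
    a /= a.sum()
    Xw = X * a[:, None]                 # attention-weighted lesion sequence
    # 2) LSTM pass over the weighted sequence
    h = np.zeros(h_dim)
    c = np.zeros(h_dim)
    for x in Xw:
        z = W @ x + U @ h + b
        i, f, o, g = np.split(z, 4)     # input, forget, output gates + candidate
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c = f * c + i * g
        h = o * np.tanh(c)
    return h                            # patient-level representation
```

Because the output is a single fixed-size vector per patient, a downstream survival head (e.g. a Cox-style risk score) produces exactly one prediction per patient, which is the interpretability problem the abstract sets out to solve.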
Affiliation(s)
- Sunghun Kim
- Department of Information and Statistics, Chungnam National University, Daejeon, Republic of Korea
- Department of Artificial Intelligence, Sungkyunkwan University, Suwon, Republic of Korea
- Eunjee Lee
- Department of Information and Statistics, Chungnam National University, Daejeon, Republic of Korea
8
Hu H, Ye R, Thiyagalingam J, Coenen F, Su J. Triple-kernel gated attention-based multiple instance learning with contrastive learning for medical image analysis. Appl Intell 2023;53:1-16. [PMID: 37363384] [PMCID: PMC10072016] [DOI: 10.1007/s10489-023-04458-y]
Abstract
In machine learning, multiple instance learning is a method evolved from supervised learning in which a "bag" is defined as a collection of multiple examples; it has a wide range of applications. In this paper, we propose a novel deep multiple instance learning model for medical image analysis, called triple-kernel gated attention-based multiple instance learning with contrastive learning, which overcomes limitations of existing multiple instance learning approaches to medical image analysis. Our model consists of four steps: i) extracting representations with a simple convolutional neural network trained using contrastive learning; ii) using three different kernel functions to obtain the importance of each instance in the entire image and form an attention map; iii) aggregating the entire image based on the attention map via attention-based MIL pooling; and iv) feeding the result into the classifier for prediction. Results on different datasets demonstrate that the proposed model outperforms state-of-the-art methods on binary and weakly supervised classification tasks, providing more efficient classification results for various disease models along with additional explanatory information.
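The gated attention pooling in steps ii)-iii) follows the familiar tanh-times-sigmoid gating of attention-based MIL. A single-kernel NumPy sketch is below; the paper combines three kernels, which is omitted here, and all shapes are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_attention_pool(H, V, U, w):
    """Gated attention-based MIL pooling over one bag.

    H: (n, d)    instance embeddings of the bag
    V, U: (L, d) projection weights for the tanh and sigmoid branches
    w: (L,)      scoring vector
    Returns (bag embedding of shape (d,), attention weights of shape (n,)).
    """
    A = np.tanh(H @ V.T) * sigmoid(H @ U.T)   # (n, L) gated features
    s = A @ w                                  # (n,) unnormalized scores
    a = np.exp(s - s.max())
    a /= a.sum()                               # softmax over instances
    return a @ H, a                            # weighted sum of instances
```

The attention weights `a` double as the "explanatory information" the abstract mentions: they indicate which instances (image regions) drove the bag-level prediction.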
Affiliation(s)
- Huafeng Hu
- Department of Electrical and Electronic Engineering, University of Liverpool based at Xi’an Jiaotong-Liverpool University, Suzhou, 215123 Jiangsu China
- Ruijie Ye
- Department of Computer Science, University of Liverpool, Liverpool, L69 3BX UK
- Jeyan Thiyagalingam
- Scientific Computing Department, Rutherford Appleton Laboratory, Science and Technology Facilities Council, Harwell Campus, Didcot, OX11 0QX UK
- Frans Coenen
- Department of Computer Science, University of Liverpool, Liverpool, L69 3BX UK
- Jionglong Su
- School of AI and Advanced Computing, XJTLU Entrepreneur College (Taicang), Xi’an Jiaotong-Liverpool University, Suzhou, 215123 Jiangsu China
9
Zizaan A, Idri A. Machine learning based breast cancer screening: trends, challenges, and opportunities. Comput Methods Biomech Biomed Eng Imaging Vis 2023. [DOI: 10.1080/21681163.2023.2172615]
Affiliation(s)
- Asma Zizaan
- Mohammed VI Polytechnic University, Benguerir, Morocco
- Ali Idri
- Mohammed VI Polytechnic University, Benguerir, Morocco
- Software Project Management Research Team, ENSIAS, Mohammed V University, Rabat, Morocco
10
Konz N, Buda M, Gu H, Saha A, Yang J, Chłędowski J, Park J, Witowski J, Geras KJ, Shoshan Y, Gilboa-Solomon F, Khapun D, Ratner V, Barkan E, Ozery-Flato M, Martí R, Omigbodun A, Marasinou C, Nakhaei N, Hsu W, Sahu P, Hossain MB, Lee J, Santos C, Przelaskowski A, Kalpathy-Cramer J, Bearce B, Cha K, Farahani K, Petrick N, Hadjiiski L, Drukker K, Armato SG, Mazurowski MA. A Competition, Benchmark, Code, and Data for Using Artificial Intelligence to Detect Lesions in Digital Breast Tomosynthesis. JAMA Netw Open 2023;6:e230524. [PMID: 36821110] [PMCID: PMC9951043] [DOI: 10.1001/jamanetworkopen.2023.0524]
Abstract
IMPORTANCE An accurate and robust artificial intelligence (AI) algorithm for detecting cancer in digital breast tomosynthesis (DBT) could significantly improve detection accuracy and reduce health care costs worldwide. OBJECTIVES To make training and evaluation data for the development of AI algorithms for DBT analysis available, to develop well-defined benchmarks, and to create publicly available code for existing methods. DESIGN, SETTING, AND PARTICIPANTS This diagnostic study is based on a multi-institutional international grand challenge in which research teams developed algorithms to detect lesions in DBT. A data set of 22 032 reconstructed DBT volumes was made available to research teams. Phase 1, in which teams were provided 700 scans from the training set, 120 from the validation set, and 180 from the test set, took place from December 2020 to January 2021, and phase 2, in which teams were given the full data set, took place from May to July 2021. MAIN OUTCOMES AND MEASURES The overall performance was evaluated by mean sensitivity for biopsied lesions using only DBT volumes with biopsied lesions; ties were broken by including all DBT volumes. RESULTS A total of 8 teams participated in the challenge. The team with the highest mean sensitivity for biopsied lesions was the NYU B-Team, with 0.957 (95% CI, 0.924-0.984), and the second-place team, ZeDuS, had a mean sensitivity of 0.926 (95% CI, 0.881-0.964). When the results were aggregated, the mean sensitivity for all submitted algorithms was 0.879; for only those who participated in phase 2, it was 0.926. CONCLUSIONS AND RELEVANCE In this diagnostic study, an international competition produced algorithms with high sensitivity for using AI to detect lesions on DBT images. A standardized performance benchmark for the detection task using publicly available clinical imaging data was released, with detailed descriptions and analyses of submitted algorithms accompanied by a public release of their predictions and code for selected methods. These resources will serve as a foundation for future research on computer-assisted diagnosis methods for DBT, significantly lowering the barrier of entry for new researchers.
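The headline metric, mean per-case sensitivity over volumes with biopsied lesions, reduces to a few lines; the data layout below is an assumed simplification, not the challenge's official evaluation code:

```python
def mean_sensitivity(cases):
    """Mean per-case sensitivity for biopsied lesions.

    `cases` is a list of (n_lesions, n_detected) pairs, one per DBT volume
    with at least one biopsied lesion: the number of biopsied lesions in
    the volume and how many of them were hit by an accepted detection.
    """
    per_case = [hit / total for total, hit in cases if total > 0]
    return sum(per_case) / len(per_case)
```

Averaging per case (rather than pooling all lesions) prevents volumes with many lesions from dominating the score, which matters when lesion counts vary widely across exams.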
Affiliation(s)
- Nicholas Konz
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina
- Mateusz Buda
- Department of Radiology, Duke University Medical Center, Durham, North Carolina
- Faculty of Mathematics and Information Science, Warsaw University of Technology, Warsaw, Poland
- Hanxue Gu
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina
- Ashirbani Saha
- Department of Radiology, Duke University Medical Center, Durham, North Carolina
- Department of Oncology, McMaster University, Hamilton, Ontario, Canada
- Jakub Chłędowski
- Jagiellonian University, Kraków, Poland
- Department of Radiology, NYU Grossman School of Medicine, New York, New York
- Jungkyu Park
- Department of Radiology, NYU Grossman School of Medicine, New York, New York
- Jan Witowski
- Department of Radiology, NYU Grossman School of Medicine, New York, New York
- Krzysztof J. Geras
- Department of Radiology, NYU Grossman School of Medicine, New York, New York
- Yoel Shoshan
- Medical Image Analytics, IBM Research, Haifa, Israel
- Daniel Khapun
- Medical Image Analytics, IBM Research, Haifa, Israel
- Vadim Ratner
- Medical Image Analytics, IBM Research, Haifa, Israel
- Ella Barkan
- Medical Image Analytics, IBM Research, Haifa, Israel
- Robert Martí
- Institute of Computer Vision and Robotics, University of Girona, Girona, Spain
- Akinyinka Omigbodun
- Medical and Imaging Informatics Group, Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles
- Chrysostomos Marasinou
- Medical and Imaging Informatics Group, Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles
- Noor Nakhaei
- Medical and Imaging Informatics Group, Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles
- William Hsu
- Medical and Imaging Informatics Group, Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles
- Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles
- Department of Bioengineering, University of California Los Angeles Samueli School of Engineering
- Pranjal Sahu
- Department of Computer Science, Stony Brook University, Stony Brook, New York
- Md Belayat Hossain
- Department of Radiology, University of Pittsburgh, Pittsburgh, Pennsylvania
- Juhun Lee
- Department of Radiology, University of Pittsburgh, Pittsburgh, Pennsylvania
- Carlos Santos
- Department of Radiology, Duke University Medical Center, Durham, North Carolina
- Artur Przelaskowski
- Faculty of Mathematics and Information Science, Warsaw University of Technology, Warsaw, Poland
- Jayashree Kalpathy-Cramer
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown
- Benjamin Bearce
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown
- Kenny Cha
- US Food and Drug Administration, Silver Spring, Maryland
- Keyvan Farahani
- Center for Biomedical Informatics and Information Technology, National Cancer Institute, Bethesda, Maryland
- Karen Drukker
- Department of Radiology, University of Chicago, Chicago, Illinois
- Samuel G. Armato
- Department of Radiology, University of Chicago, Chicago, Illinois
- Maciej A. Mazurowski
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina
- Department of Radiology, Duke University Medical Center, Durham, North Carolina
- Department of Computer Science, Duke University, Durham, North Carolina
- Department of Biostatistics and Bioinformatics, Duke University Medical Center, Durham, North Carolina
11
Schmidt HG, Mamede S. Improving diagnostic decision support through deliberate reflection: a proposal. Diagnosis (Berl) 2023;10:38-42. [PMID: 36000188] [DOI: 10.1515/dx-2022-0062]
Abstract
Digital decision support (DDS) is expected to play an important role in improving a physician's diagnostic performance and reducing the burden of diagnostic error. Studies with currently available DDS systems indicate that they lead to modest gains in diagnostic accuracy, and these systems are expected to evolve to become more effective and user-friendly in the future. In this position paper, we propose that a way towards this future is to rethink DDS systems based on deliberate reflection, a strategy by which physicians systematically review the clinical findings observed in a patient in the light of an initial diagnosis. Deliberate reflection has been demonstrated to improve diagnostic accuracy in several contexts. In this paper, we first describe the deliberate reflection strategy, including the crucial element that would make it useful in the interaction with a DDS system. We examine the nature of conventional DDS systems and their shortcomings. Finally, we propose what DDS based on deliberate reflection might look like, and consider why it would overcome downsides of conventional DDS.
Affiliation(s)
- Henk G Schmidt
- Department of Psychology, Education and Child Studies, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Institute of Medical Education Research Rotterdam, Erasmus Medical Center, Rotterdam, The Netherlands
- Sílvia Mamede
- Department of Psychology, Education and Child Studies, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Institute of Medical Education Research Rotterdam, Erasmus Medical Center, Rotterdam, The Netherlands
12
Krishnapriya S, Karuna Y. Pre-trained deep learning models for brain MRI image classification. Front Hum Neurosci 2023;17:1150120. [PMID: 37151901] [PMCID: PMC10157370] [DOI: 10.3389/fnhum.2023.1150120]
Abstract
Brain tumors are serious conditions caused by uncontrolled and abnormal cell division. Tumors can have devastating implications if not accurately and promptly detected. Magnetic resonance imaging (MRI) is one of the methods frequently used to detect brain tumors owing to its excellent resolution. In the past few decades, substantial research has been conducted on classifying brain images, ranging from traditional methods to deep-learning techniques such as convolutional neural networks (CNN). To accomplish classification, traditional machine-learning methods require manually crafted features. In contrast, a CNN achieves classification by extracting visual features from unprocessed images. The size of the training dataset has a significant impact on the features a CNN extracts, and a CNN tends to overfit when the dataset is small. Deep CNNs (DCNN) with transfer learning have therefore been developed. The aim of this work was to investigate the brain MR image categorization potential of pre-trained DCNN VGG-19, VGG-16, ResNet50, and Inception V3 models using data augmentation and transfer learning techniques. Evaluation on the test set using accuracy, recall, precision, and F1 score showed that the pre-trained VGG-19 model with transfer learning exhibited the best performance. In addition, these methods offer end-to-end classification of raw images without the need for manual attribute extraction.
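The transfer-learning recipe evaluated here, freezing a pre-trained backbone and training only a small classification head, reduces to the pattern below. The "backbone" is faked with a fixed projection purely for illustration; in the study it would be a pre-trained VGG-19 (or similar) and the head a dense classifier:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def make_frozen_backbone(in_dim, feat_dim, seed=0):
    """Stand-in for a frozen pre-trained feature extractor (e.g. VGG-19):
    a fixed projection plus ReLU that is never updated during training."""
    R = np.random.default_rng(seed).uniform(0.1, 1.0, size=(feat_dim, in_dim))
    return lambda X: np.maximum(X @ R.T, 0.0)

def train_head(feats, y, lr=0.1, epochs=300):
    """Train only the classification head (logistic regression) on frozen features."""
    w = np.zeros(feats.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(feats @ w + b)
        g = p - y                        # gradient of the log-loss w.r.t. logits
        w -= lr * feats.T @ g / len(y)
        b -= lr * g.mean()
    return w, b
```

Because only the small head is trained, far fewer labelled MR images are needed than when training a deep network from scratch, which is exactly why transfer learning mitigates the overfitting problem the abstract describes.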
Collapse
|
13
|
Khaliliboroujeni S, He X, Jia W, Amirgholipour S. End-to-end metastasis detection of breast cancer from histopathology whole slide images. Comput Med Imaging Graph 2022; 102:102136. [PMID: 36375284 DOI: 10.1016/j.compmedimag.2022.102136] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Revised: 07/16/2022] [Accepted: 10/07/2022] [Indexed: 11/13/2022]
Abstract
Breast cancer is one of the most frequent and deadly diseases among women worldwide. Early, accurate detection of metastatic cancer is a significant factor in raising the survival rate among patients. Diverse computer-aided diagnostic (CAD) systems applying medical imaging modalities have been designed for breast cancer detection, and the impact of deep learning in improving CAD systems' performance is undeniable. Among all medical image modalities, histopathology (HP) images contain the richest phenotypic detail and help track cancer metastasis. Nonetheless, metastasis detection in whole slide images (WSIs) is still problematic because of the enormous size of these images and the massive cost of labelling them. In this paper, we develop a reliable, fast and accurate CAD system for metastasis detection in breast cancer that applies only a small amount of annotated data at lower resolution, saving considerable time and cost. Unlike other works that apply patch classification for tumor detection, we exploit attention modules added to regression and classification branches to extract tumor parts simultaneously. We then use dense prediction for mask generation and identify individual metastases in WSIs. Experimental outcomes demonstrate the efficiency of our method: it provides more accurate results than methods that apply the total dataset, and it is about seven times faster than an expert pathologist while detecting tumors even more accurately.
Collapse
Affiliation(s)
- Sepideh Khaliliboroujeni
- School of Electrical and Data Engineering, University of Technology, Sydney, NSW 2007, Australia.
| | - Xiangjian He
- School of Computer Science, University of Nottingham Ningbo China, Ningbo, China.
| | - Wenjing Jia
- School of Electrical and Data Engineering, University of Technology, Sydney, NSW 2007, Australia.
| | - Saeed Amirgholipour
- Data Science and AI Elite, Client Engineering, Global Sales (WW), IBM, Sydney, Australia.
| |
Collapse
|
14
|
Automatic Classification of Simulated Breast Tomosynthesis Whole Images for the Presence of Microcalcification Clusters Using Deep CNNs. J Imaging 2022; 8:jimaging8090231. [PMID: 36135397 PMCID: PMC9503015 DOI: 10.3390/jimaging8090231] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Revised: 07/26/2022] [Accepted: 08/04/2022] [Indexed: 11/30/2022] Open
Abstract
Microcalcification clusters (MCs) are among the most important biomarkers for breast cancer, especially in cases of nonpalpable lesions. The vast majority of deep learning studies on digital breast tomosynthesis (DBT) focus on detecting and classifying lesions, especially soft-tissue lesions, in small, previously selected regions of interest. Only about 25% of the studies are specific to MCs, and all of them are based on the classification of small preselected regions. Classifying the whole image according to the presence or absence of MCs is a difficult task due to the small size of MCs and the amount of information present in an entire image. A completely automatic and direct classification, which receives the entire image without prior identification of any regions, is crucial for the usefulness of these techniques in a real clinical and screening environment. The main purpose of this work is to implement and evaluate the performance of convolutional neural networks (CNNs) on the automatic classification of a complete DBT image for the presence or absence of MCs (without any prior identification of regions). Four popular deep CNNs are trained and compared with a new architecture proposed by us, with the networks trained to classify DBT cases by the absence or presence of MCs. A public database of realistic simulated data was used, and the whole DBT image was taken as input. DBT data were considered without and with preprocessing (to study the impact of noise reduction and contrast enhancement methods on the evaluation of MCs with CNNs). The area under the receiver operating characteristic curve (AUC) was used to evaluate performance. Very promising results were achieved, with a maximum AUC of 94.19% for GoogLeNet. The second-best AUC value, 91.17%, was obtained with a newly implemented network, CNN-a. This CNN was also the fastest, making it a very interesting model to consider in other studies. Encouraging outcomes were achieved, with results similar to other studies on the detection of larger lesions such as masses. Moreover, given the difficulty of visualizing MCs, which are often spread over several slices, this work may have an important impact on the clinical analysis of DBT images.
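AUC, the evaluation metric used above, can be computed directly from classifier scores as the probability that a randomly chosen positive case is ranked above a randomly chosen negative one. A minimal illustration with made-up scores (not data from the study):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # Fraction of (positive, negative) pairs ranked correctly; ties count half.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical CNN scores for four DBT cases with MCs (1) and four without (0).
result = auc([0.9, 0.8, 0.6, 0.4, 0.7, 0.3, 0.2, 0.1],
             [1,   1,   1,   1,   0,   0,   0,   0])
print(result)  # 14 of 16 pairs correctly ordered -> 0.875
```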
Collapse
|
15
|
Basurto-Hurtado JA, Cruz-Albarran IA, Toledano-Ayala M, Ibarra-Manzano MA, Morales-Hernandez LA, Perez-Ramirez CA. Diagnostic Strategies for Breast Cancer Detection: From Image Generation to Classification Strategies Using Artificial Intelligence Algorithms. Cancers (Basel) 2022; 14:3442. [PMID: 35884503 PMCID: PMC9322973 DOI: 10.3390/cancers14143442] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2022] [Revised: 07/02/2022] [Accepted: 07/12/2022] [Indexed: 02/04/2023] Open
Abstract
Breast cancer is one the main death causes for women worldwide, as 16% of the diagnosed malignant lesions worldwide are its consequence. In this sense, it is of paramount importance to diagnose these lesions in the earliest stage possible, in order to have the highest chances of survival. While there are several works that present selected topics in this area, none of them present a complete panorama, that is, from the image generation to its interpretation. This work presents a comprehensive state-of-the-art review of the image generation and processing techniques to detect Breast Cancer, where potential candidates for the image generation and processing are presented and discussed. Novel methodologies should consider the adroit integration of artificial intelligence-concepts and the categorical data to generate modern alternatives that can have the accuracy, precision and reliability expected to mitigate the misclassifications.
Collapse
Affiliation(s)
- Jesus A. Basurto-Hurtado
- C.A. Mecatrónica, Facultad de Ingeniería, Campus San Juan del Río, Universidad Autónoma de Querétaro, Rio Moctezuma 249, San Cayetano, San Juan del Rio 76807, Mexico; (J.A.B.-H.); (I.A.C.-A.)
- Laboratorio de Dispositivos Médicos, Facultad de Ingeniería, Universidad Autónoma de Querétaro, Carretera a Chichimequillas S/N, Ejido Bolaños, Santiago de Querétaro 76140, Mexico
| | - Irving A. Cruz-Albarran
- C.A. Mecatrónica, Facultad de Ingeniería, Campus San Juan del Río, Universidad Autónoma de Querétaro, Rio Moctezuma 249, San Cayetano, San Juan del Rio 76807, Mexico; (J.A.B.-H.); (I.A.C.-A.)
- Laboratorio de Dispositivos Médicos, Facultad de Ingeniería, Universidad Autónoma de Querétaro, Carretera a Chichimequillas S/N, Ejido Bolaños, Santiago de Querétaro 76140, Mexico
| | - Manuel Toledano-Ayala
- División de Investigación y Posgrado de la Facultad de Ingeniería (DIPFI), Universidad Autónoma de Querétaro, Cerro de las Campanas S/N Las Campanas, Santiago de Querétaro 76010, Mexico;
| | - Mario Alberto Ibarra-Manzano
- Laboratorio de Procesamiento Digital de Señales, Departamento de Ingeniería Electrónica, Division de Ingenierias Campus Irapuato-Salamanca (DICIS), Universidad de Guanajuato, Carretera Salamanca-Valle de Santiago KM. 3.5 + 1.8 Km., Salamanca 36885, Mexico;
| | - Luis A. Morales-Hernandez
- C.A. Mecatrónica, Facultad de Ingeniería, Campus San Juan del Río, Universidad Autónoma de Querétaro, Rio Moctezuma 249, San Cayetano, San Juan del Rio 76807, Mexico; (J.A.B.-H.); (I.A.C.-A.)
| | - Carlos A. Perez-Ramirez
- Laboratorio de Dispositivos Médicos, Facultad de Ingeniería, Universidad Autónoma de Querétaro, Carretera a Chichimequillas S/N, Ejido Bolaños, Santiago de Querétaro 76140, Mexico
| |
Collapse
|
16
|
Hanis TM, Islam MA, Musa KI. Diagnostic Accuracy of Machine Learning Models on Mammography in Breast Cancer Classification: A Meta-Analysis. Diagnostics (Basel) 2022; 12:1643. [PMID: 35885548 PMCID: PMC9320089 DOI: 10.3390/diagnostics12071643] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2022] [Revised: 06/29/2022] [Accepted: 06/29/2022] [Indexed: 11/16/2022] Open
Abstract
In this meta-analysis, we aimed to estimate the diagnostic accuracy of machine learning models on digital mammograms and tomosynthesis in breast cancer classification and to assess the factors affecting their diagnostic accuracy. We searched for related studies in Web of Science, Scopus, PubMed, Google Scholar and Embase. The studies were screened in two stages to exclude unrelated studies and duplicates. Finally, 36 studies containing 68 machine learning models were included in this meta-analysis. The area under the curve (AUC), hierarchical summary receiver operating characteristic (HSROC) curve, pooled sensitivity and pooled specificity were estimated using a bivariate Reitsma model. Overall AUC, pooled sensitivity and pooled specificity were 0.90 (95% CI: 0.85-0.90), 0.83 (95% CI: 0.78-0.87) and 0.84 (95% CI: 0.81-0.87), respectively. The three significant covariates identified in this study were country (p = 0.003), source (p = 0.002) and classifier (p = 0.016); the type-of-data covariate was not statistically significant (p = 0.121). Additionally, Deeks' linear regression test indicated publication bias in the included studies (p = 0.002), so the results should be interpreted with caution.
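The pooled sensitivity and specificity above come from a bivariate Reitsma model; as a much simpler illustration of where the per-study numbers originate, the sketch below computes each study's sensitivity and specificity from its confusion matrix and then a naive unweighted average (the confusion matrices are hypothetical, not the included studies' data):

```python
import numpy as np

def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 confusion matrix."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical per-study confusion matrices: (tp, fn, tn, fp).
studies = [(80, 20, 170, 30), (45, 5, 90, 10), (60, 15, 120, 20)]

sens = [sens_spec(*s)[0] for s in studies]
spec = [sens_spec(*s)[1] for s in studies]

# Naive unweighted pooling; the bivariate Reitsma model instead fits a
# random-effects model that also captures between-study correlation.
pooled_sens, pooled_spec = float(np.mean(sens)), float(np.mean(spec))
print(round(pooled_sens, 3), round(pooled_spec, 3))
```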
Collapse
Affiliation(s)
- Tengku Muhammad Hanis
- Department of Community Medicine, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia;
| | - Md Asiful Islam
- Department of Haematology, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia
- Institute of Metabolism and Systems Research, University of Birmingham, Birmingham B15 2TT, UK
| | - Kamarul Imran Musa
- Department of Community Medicine, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia;
| |
Collapse
|
17
|
Ding W, Wang J, Zhou W, Zhou S, Chang C, Shi J. Joint Localization and Classification of Breast Cancer in B-Mode Ultrasound Imaging via Collaborative Learning with Elastography. IEEE J Biomed Health Inform 2022; 26:4474-4485. [PMID: 35763467 DOI: 10.1109/jbhi.2022.3186933] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/07/2022]
Abstract
Convolutional neural networks (CNNs) have been successfully applied to computer-aided ultrasound diagnosis of breast cancer, and several CNN-based methods have been proposed. However, most of them treat tumor localization and classification as two separate steps rather than performing them simultaneously, and they suffer from the limited diagnostic information in B-mode ultrasound (BUS) images. In this study, we develop a novel network, ResNet-GAP, that incorporates both localization and classification into a unified procedure. To enhance the performance of ResNet-GAP, we leverage stiffness information from the elastography ultrasound (EUS) modality through collaborative learning in the training stage. Specifically, a dual-channel ResNet-GAP is developed, one channel for BUS and the other for EUS. In each channel, multiple class activity maps (CAMs) are generated using a series of convolutional kernels of different sizes. The multi-scale consistency of the CAMs in both channels is further considered in network optimization. Experiments on 264 patients show that the newly developed ResNet-GAP achieves an accuracy of 88.6%, a sensitivity of 95.3%, a specificity of 84.6%, and an AUC of 93.6% on the classification task, and a 1.0NLF of 87.9% on the localization task, which is better than some state-of-the-art approaches.
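The class activity maps (CAMs) used here for localization follow the standard CAM construction: with a global-average-pooling (GAP) classifier, a class's activation map is the classifier-weighted sum of the final convolutional feature maps. A toy numpy sketch (random tensors standing in for a real network's activations, not the paper's ResNet-GAP):

```python
import numpy as np

rng = np.random.default_rng(1)

# Final convolutional feature maps of a toy network: K channels of H x W.
K, H, W = 8, 7, 7
feature_maps = rng.random((K, H, W))

# Global average pooling followed by a linear classifier, as in a GAP head.
gap = feature_maps.mean(axis=(1, 2))        # shape (K,)
class_weights = rng.normal(size=(2, K))     # two classes: benign / malignant
logits = class_weights @ gap

# Class activation map for the predicted class: the classifier-weighted
# sum of the feature maps.  High values suggest tumor locations.
c = int(np.argmax(logits))
cam = np.tensordot(class_weights[c], feature_maps, axes=1)   # shape (H, W)
peak = np.unravel_index(int(np.argmax(cam)), cam.shape)
print(cam.shape, peak)
```

A useful sanity check of this construction: averaging the CAM over space recovers exactly the class logit, since GAP and the weighted sum commute.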
Collapse
|
18
|
Tiwari P, Pant B, Elarabawy MM, Abd-Elnaby M, Mohd N, Dhiman G, Sharma S. CNN Based Multiclass Brain Tumor Detection Using Medical Imaging. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:1830010. [PMID: 35774437 PMCID: PMC9239800 DOI: 10.1155/2022/1830010] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/19/2022] [Revised: 05/23/2022] [Accepted: 05/30/2022] [Indexed: 02/08/2023]
Abstract
Brain tumors are the 10th leading cause of death and are common among both adults and children. Tumors of various types exist, differing in texture, region, and shape, and each carries a low chance of survival; incorrect classification can lead to severe consequences. As a result, tumors must be properly divided into their many classes or grades, which is where multiclass classification comes into play. Magnetic resonance imaging (MRI) is the most suitable method for imaging the human brain to identify the various tumors. Image classification technology has made great strides in recent years, and the most popular and effective approach in this area is the CNN; therefore, a CNN is used for the brain tumor classification problem in this paper. The proposed model successfully classifies brain images into four classes: no tumor (indicating that the given brain MRI does not contain a tumor), glioma, meningioma, and pituitary tumor. The model achieves an accuracy of 99%.
Collapse
Affiliation(s)
- Pallavi Tiwari
- Department of Computer Science and Engineering, Graphic Era Deemed to be University, Dehradun, India
| | - Bhaskar Pant
- Department of Computer Science and Engineering, Graphic Era Deemed to be University, Dehradun, India
| | - Mahmoud M. Elarabawy
- Department of Mathematics, Faculty of Science, Suez Canal University, Ismailia 41522, Egypt
| | - Mohammed Abd-Elnaby
- Department of Computer Engineering, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
| | - Noor Mohd
- Department of Computer Science and Engineering, Graphic Era Deemed to be University, Dehradun, India
| | - Gaurav Dhiman
- Department of Computer Science and Engineering, Graphic Era Deemed to be University, Dehradun, India
| | | |
Collapse
|
19
|
Sreenivasu SVN, Gomathi S, Kumar MJ, Prathap L, Madduri A, Almutairi KMA, Alonazi WB, Kali D, Jayadhas SA. Dense Convolutional Neural Network for Detection of Cancer from CT Images. BIOMED RESEARCH INTERNATIONAL 2022; 2022:1293548. [PMID: 35769667 PMCID: PMC9236787 DOI: 10.1155/2022/1293548] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/08/2022] [Revised: 04/17/2022] [Accepted: 04/23/2022] [Indexed: 11/17/2022]
Abstract
In this paper, we develop a detection module with strong training and testing to build a dense convolutional neural network model. The model is designed so that it is trained with the features necessary for optimal modelling of cancer detection. The method involves preprocessing computerized tomography (CT) images for optimal classification at the testing stage. A 10-fold cross-validation is conducted to test the reliability of the model for cancer detection, and the experimental validation is conducted in Python. The results show that the model offers robust detection of cancer instances compared with novel approaches on large image datasets: the simulation results show that the proposed method achieves 94% accuracy, higher than other methods, and it helps reduce detection errors when classifying cancer instances compared with several existing methods.
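The 10-fold cross-validation protocol mentioned above partitions the data into ten disjoint folds, holding out each fold in turn for testing while training on the rest, then averages the scores. A self-contained sketch of that protocol (a nearest-centroid toy classifier on synthetic data stands in for the paper's CNN on CT images):

```python
import numpy as np

def k_fold_indices(n, k=10, seed=0):
    """Split n sample indices into k disjoint folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

def cross_validate(X, y, train_and_score, k=10):
    folds = k_fold_indices(len(y), k)
    scores = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        scores.append(train_and_score(X[train_idx], y[train_idx],
                                      X[test_idx], y[test_idx]))
    return float(np.mean(scores))

# Toy stand-in for the trained model: a nearest-centroid classifier.
def nearest_centroid(Xtr, ytr, Xte, yte):
    c0, c1 = Xtr[ytr == 0].mean(axis=0), Xtr[ytr == 1].mean(axis=0)
    pred = (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return float((pred == yte).mean())

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(2, 1, (100, 5))])
y = np.array([0] * 100 + [1] * 100)
cv_acc = cross_validate(X, y, nearest_centroid)
print(round(cv_acc, 2))
```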
Collapse
Affiliation(s)
- S. V. N. Sreenivasu
- Department of Computer Science and Engineering, Narasaraopeta Engineering College, Narasaraopeta, Andhra Pradesh 522601, India
| | - S. Gomathi
- Department of Information Technology, Sri Sairam Engineering College, Chennai, Tamil Nadu 602109, India
| | - M. Jogendra Kumar
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Andhra Pradesh 522502, India
| | - Lavanya Prathap
- Department of Anatomy, Saveetha Dental College and Hospital, Saveetha Institute of Medical and Technical Sciences, Chennai, Tamil Nadu 600077, India
| | - Abhishek Madduri
- Department of Engineering Management, Duke University, North Carolina 27708, USA
| | - Khalid M. A. Almutairi
- Department of Community Health Sciences, College of Applied Medical Sciences, King Saud University, P. O. Box: 10219, Riyadh-11433, Saudi Arabia
| | - Wadi B. Alonazi
- Health Administration Department, College of Business Administration, King Saud University, PO Box: 71115, Riyadh-11587, Saudi Arabia
| | - D. Kali
- Department of Mechanical Engineering, Ryerson University, Canada
| | | |
Collapse
|
20
|
Intelligent Computer-Aided Model for Efficient Diagnosis of Digital Breast Tomosynthesis 3D Imaging Using Deep Learning. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12115736] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Digital breast tomosynthesis (DBT) is a highly promising 3D imaging modality for breast diagnosis. Tissue overlapping is a challenge with traditional 2D mammograms; however, since DBT obtains three-dimensional images, tissue overlapping is reduced, making it easier for radiologists to detect abnormalities and resulting in improved and more accurate diagnosis. In this study, a new computer-aided multi-class diagnosis system is proposed that integrates DBT augmentation and a colour feature map with a modified deep learning architecture (Mod_AlexNet). An optimization layer with multiple high-performing optimizers is incorporated into Mod_AlexNet so that it can be evaluated and optimized using various optimization techniques. Two experimental scenarios are applied. The first scenario proposes a computer-aided diagnosis (CAD) model that integrates DBT augmentation, image enhancement techniques and colour feature mapping with six deep learning models for feature extraction, including ResNet-18, AlexNet, GoogleNet, MobileNetV2, VGG-16 and DenseNet-201, to efficiently classify DBT slices. The second scenario compares the performance of the newly proposed Mod_AlexNet architecture and traditional AlexNet using several optimization techniques, with different evaluation performance metrics computed. The optimization techniques included adaptive moment estimation (Adam), root mean squared propagation (RMSProp), and stochastic gradient descent with momentum (SGDM), for different batch sizes, including 32, 64 and 512. Experiments were conducted on a large benchmark dataset of breast tomography scans. The performance of the first scenario was compared in terms of accuracy, precision, sensitivity, specificity, runtime, and F1-score, while in the second scenario, performance was compared in terms of training accuracy, training loss, and test accuracy.
In the first scenario, results demonstrated that AlexNet reported improvement rates of 1.69%, 5.13%, 6.13%, 4.79% and 1.6%, compared to ResNet-18, MobileNetV2, GoogleNet, DenseNet-201 and VGG16, respectively. Experimental analysis with different optimization techniques and batch sizes demonstrated that the proposed Mod_AlexNet architecture outperformed AlexNet in terms of test accuracy with improvement rates of 3.23%, 1.79% and 1.34% when compared using SGDM, Adam, and RMSProp optimizers, respectively.
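The three optimizers compared in the second scenario differ only in their parameter-update rules. The following is a minimal sketch of each rule applied to the toy quadratic loss f(w) = 0.5 w², whose gradient is simply g = w, rather than to the actual Mod_AlexNet training; the learning rates and decay constants are illustrative defaults, not the study's settings:

```python
import numpy as np

# Update rules for the three optimizers compared in the study, applied
# to the toy loss f(w) = 0.5 * w**2, whose gradient is g = w.
def sgdm(w, v, g, lr=0.1, momentum=0.9):
    v = momentum * v + g                    # velocity accumulates gradients
    return w - lr * v, v

def rmsprop(w, s, g, lr=0.01, rho=0.9, eps=1e-8):
    s = rho * s + (1 - rho) * g * g         # running mean of squared grads
    return w - lr * g / (np.sqrt(s) + eps), s

def adam(w, m, v, g, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)               # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)               # bias-corrected second moment
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

w1 = w2 = w3 = 5.0
v1 = s2 = m3 = v3 = 0.0
for t in range(1, 2001):
    w1, v1 = sgdm(w1, v1, w1)
    w2, s2 = rmsprop(w2, s2, w2)
    w3, m3, v3 = adam(w3, m3, v3, w3, t)
# All three drive w toward the minimum at 0.
print(abs(w1), abs(w2), abs(w3))
```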
Collapse
|
21
|
Lakshmi MJ, Nagaraja Rao S. Brain tumor magnetic resonance image classification: a deep learning approach. Soft comput 2022. [DOI: 10.1007/s00500-022-07163-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
22
|
Automated Breast Cancer Detection Models Based on Transfer Learning. SENSORS 2022; 22:s22030876. [PMID: 35161622 PMCID: PMC8838322 DOI: 10.3390/s22030876] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Revised: 12/28/2021] [Accepted: 01/19/2022] [Indexed: 02/06/2023]
Abstract
Breast cancer is among the leading causes of mortality for females across the planet, so developing early detection and diagnosis techniques is essential for the well-being of women. In mammography, attention has turned to deep learning (DL) models, which radiologists have used to improve their efficiency and overcome the shortcomings of human observers. Transfer learning is used to distinguish malignant from benign breast cancer by fine-tuning multiple pre-trained models. In this study, we introduce a framework based on the principle of transfer learning. In addition, a mixture of augmentation strategies, including several rotation combinations, scaling, and shifting, was used to prevent overfitting and produce stable outcomes by increasing the number of mammographic images. On the Mammographic Image Analysis Society (MIAS) dataset, the proposed system achieved an accuracy of 89.5% using ResNet50 (residual network-50) and 70% using the NasNet-Mobile network. The proposed system demonstrated that pre-trained classification networks are significantly more effective and efficient, making them more suitable for medical imaging, particularly with small training datasets.
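The augmentation mixture described (rotations, flips, shifts) amounts to applying small label-preserving transforms to each image so that one mammographic patch yields several training samples. A toy numpy sketch of that idea (plain array operations in place of a real image pipeline; scaling is omitted for brevity, and the wrap-around shift is a simplification of true translation):

```python
import numpy as np

def augment(image, rng):
    """Randomly rotate, flip and shift an image patch (a sketch of the
    rotation/shifting mixture described in the study)."""
    out = np.rot90(image, k=rng.integers(4))      # 0/90/180/270 rotation
    if rng.random() < 0.5:
        out = np.fliplr(out)                      # horizontal flip
    dy, dx = rng.integers(-2, 3, size=2)          # small wrap-around shift
    return np.roll(out, (dy, dx), axis=(0, 1))

rng = np.random.default_rng(0)
patch = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 "mammogram patch"

# Grow the training set: each patch yields several augmented variants.
augmented = [augment(patch, rng) for _ in range(5)]
print(len(augmented), augmented[0].shape)
```

Each transform only rearranges pixels, so the augmented variants keep the same intensity content as the original patch, which is what makes them label-preserving.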
Collapse
|
23
|
Deep convolutional neural networks for computer-aided breast cancer diagnostic: a survey. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06804-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
24
|
Classification of Brain MRI Tumor Images Based on Deep Learning PGGAN Augmentation. Diagnostics (Basel) 2021; 11:diagnostics11122343. [PMID: 34943580 PMCID: PMC8700152 DOI: 10.3390/diagnostics11122343] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2021] [Revised: 12/02/2021] [Accepted: 12/07/2021] [Indexed: 12/16/2022] Open
Abstract
The wide prevalence of brain tumors in all age groups necessitates the ability to make an early and accurate identification of the tumor type and thus select the most appropriate treatment plan. The application of convolutional neural networks (CNNs) has helped radiologists to more accurately classify the type of brain tumor from magnetic resonance images (MRIs). CNN learning suffers from overfitting if too few MRIs are introduced to the system. Recognized as the current best solution to this problem, augmentation allows for the optimization of the learning stage and thus maximizes overall efficiency. The main objective of this study is to examine the efficacy of a new approach to the classification of brain tumor MRIs through the use of a VGG19 feature extractor coupled with one of three types of classifiers. A progressive growing generative adversarial network (PGGAN) augmentation model is used to produce realistic MRIs of brain tumors and help overcome the shortage of images needed for deep learning. Results indicated the ability of our framework to classify gliomas, meningiomas, and pituitary tumors more accurately than in previous studies, with an accuracy of 98.54%. Other performance metrics were also examined.
Collapse
|
25
|
Zou H, Gong X, Luo J, Li T. A Robust Breast ultrasound segmentation method under noisy annotations. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 209:106327. [PMID: 34428680 DOI: 10.1016/j.cmpb.2021.106327] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Accepted: 07/30/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE Large-scale training data and accurate annotations are fundamental for current segmentation networks. However, the characteristic artifacts of ultrasound images, such as attenuation, speckle, shadows and signal dropout, make the annotation task complicated. Further complications arise because the contrast between the region of interest and the background is often low. Without double-checking by professionals, it is hard to guarantee that a segmentation dataset contains no noisy annotations, and none of the deep learning methods applied to ultrasound segmentation so far can solve this problem. METHOD Given a dataset with poorly labeled masks, including a certain amount of noise, we propose an end-to-end noisy annotation tolerance network (NAT-Net). NAT-Net can detect noise by the proposed noise index (NI) and dynamically correct noisy annotations in the training stage. Simultaneously, the noise index is used to correct the noise along with the output of the learning model. This method does not need any auxiliary clean dataset or prior knowledge of noise distributions, so it is more general, more robust and easier to apply than existing methods. RESULTS NAT-Net outperforms previous state-of-the-art methods on synthesized data with different noise ratios. For a real-world dataset with more complex noise types, the IoU of NAT-Net is higher than that of state-of-the-art approaches by nearly 6%. Experimental results show that our method also achieves good results compared with existing methods on a clean dataset. CONCLUSION NAT-Net reduces the manual interaction required for data annotation and reduces dependence on medical personnel. After tumor segmentation, disease diagnosis efficiency is improved, providing an auxiliary strategy for subsequent ultrasound-based medical diagnosis systems.
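IoU, the segmentation metric reported above, measures the overlap between a predicted mask and its annotation; a noisy (e.g. shifted) annotation degrades it even when the prediction is reasonable, which is what motivates correcting annotations during training. A minimal sketch with toy masks (this illustrates only the metric, not the paper's noise index):

```python
import numpy as np

def iou(pred, target):
    """Intersection over Union between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    inter = np.logical_and(pred, target).sum()
    return float(inter / union) if union else 1.0

# Toy masks: a predicted tumor region vs. a noisily shifted annotation.
pred = np.zeros((6, 6), dtype=int); pred[1:4, 1:4] = 1    # 3x3 block
noisy = np.zeros((6, 6), dtype=int); noisy[2:5, 2:5] = 1  # shifted by (1, 1)
score = iou(pred, noisy)
print(round(score, 3))  # overlap 4 cells, union 14 -> 0.286
```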
Collapse
Affiliation(s)
- Haipeng Zou
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, Sichuan, China.
| | - Xun Gong
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, Sichuan, China.
| | - Jun Luo
- Sichuan Academy of Medical Sciences Sichuan Provincial Peoples Hospital, Chengdu, Sichuan, China.
| | - Tianrui Li
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, Sichuan, China.
| |
Collapse
|
26
|
Yu X, Zhou Q, Wang S, Zhang Y. A systematic survey of deep learning in breast cancer. INT J INTELL SYST 2021. [DOI: 10.1002/int.22622] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Affiliation(s)
- Xiang Yu
- School of Computing and Mathematical Sciences University of Leicester Leicester, Leicestershire UK
| | - Qinghua Zhou
- School of Computing and Mathematical Sciences University of Leicester Leicester, Leicestershire UK
| | - Shuihua Wang
- School of Computing and Mathematical Sciences University of Leicester Leicester, Leicestershire UK
| | - Yu‐Dong Zhang
- School of Computing and Mathematical Sciences University of Leicester Leicester, Leicestershire UK
| |
Collapse
|
27
|
Grimm LJ. Radiomics: A Primer for Breast Radiologists. JOURNAL OF BREAST IMAGING 2021; 3:276-287. [PMID: 38424774 DOI: 10.1093/jbi/wbab014] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2020] [Indexed: 03/02/2024]
Abstract
Radiomics has a long-standing history in breast imaging with computer-aided detection (CAD) for screening mammography developed in the late 20th century. Although conventional CAD had widespread adoption, the clinical benefits for experienced breast radiologists were debatable due to high false-positive marks and subsequent increased recall rates. The dramatic growth in recent years of artificial intelligence-based analysis, including machine learning and deep learning, has provided numerous opportunities for improved modern radiomics work in breast imaging. There has been extensive radiomics work in mammography, digital breast tomosynthesis, MRI, ultrasound, PET-CT, and combined multimodality imaging. Specific radiomics outcomes of interest have been diverse, including CAD, prediction of response to neoadjuvant therapy, lesion classification, and survival, among other outcomes. Additionally, the radiogenomics subfield that correlates radiomics features with genetics has been very proliferative, in parallel with the clinical validation of breast cancer molecular subtypes and gene expression assays. Despite the promise of radiomics, there are important challenges related to image normalization, limited large unbiased data sets, and lack of external validation. Much of the radiomics work to date has been exploratory using single-institution retrospective series for analysis, but several promising lines of investigation have made the leap to clinical practice with commercially available products. As a result, breast radiologists will increasingly be incorporating radiomics-based tools into their daily practice in the near future. Therefore, breast radiologists must have a broad understanding of the scope, applications, and limitations of radiomics work.
Collapse
Affiliation(s)
- Lars J Grimm
- Duke University, Department of Radiology, Durham, NC, USA
| |
Collapse
|
28
|
Bai J, Posner R, Wang T, Yang C, Nabavi S. Applying deep learning in digital breast tomosynthesis for automatic breast cancer detection: A review. Med Image Anal 2021; 71:102049. [PMID: 33901993 DOI: 10.1016/j.media.2021.102049] [Citation(s) in RCA: 59] [Impact Index Per Article: 14.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2020] [Revised: 02/11/2021] [Accepted: 03/19/2021] [Indexed: 02/07/2023]
Abstract
The relatively recent reintroduction of deep learning has been a revolutionary force in the interpretation of diagnostic imaging studies. However, the technology used to acquire those images is undergoing a revolution itself at the very same time. Digital breast tomosynthesis (DBT) is one such technology, which has transformed the field of breast imaging. DBT, a form of three-dimensional mammography, is rapidly replacing the traditional two-dimensional mammograms. These parallel developments in both the acquisition and interpretation of breast images present a unique case study in how modern AI systems can be designed to adapt to new imaging methods. They also present a unique opportunity for co-development of both technologies that can better improve the validity of results and patient outcomes. In this review, we explore the ways in which deep learning can be best integrated into breast cancer screening workflows using DBT. We first explain the principles behind DBT itself and why it has become the gold standard in breast screening. We then survey the foundations of deep learning methods in diagnostic imaging, and review the current state of research into AI-based DBT interpretation. Finally, we present some of the limitations of integrating AI into clinical practice and the opportunities these present in this burgeoning field.
Affiliation(s)
- Jun Bai
- Department of Computer Science and Engineering, University of Connecticut, 371 Fairfield Way, Storrs, CT 06269, USA
- Russell Posner
- University of Connecticut School of Medicine, 263 Farmington Ave., Farmington, CT 06030, USA
- Tianyu Wang
- Department of Computer Science and Engineering, University of Connecticut, 371 Fairfield Way, Storrs, CT 06269, USA
- Clifford Yang
- University of Connecticut School of Medicine, 263 Farmington Ave., Farmington, CT 06030, USA; Department of Radiology, UConn Health, 263 Farmington Ave., Farmington, CT 06030, USA
- Sheida Nabavi
- Department of Computer Science and Engineering, University of Connecticut, 371 Fairfield Way, Storrs, CT 06269, USA
|
29
|
Ricciardi R, Mettivier G, Staffa M, Sarno A, Acampora G, Minelli S, Santoro A, Antignani E, Orientale A, Pilotti I, Santangelo V, D'Andria P, Russo P. A deep learning classifier for digital breast tomosynthesis. Phys Med 2021; 83:184-193. [DOI: 10.1016/j.ejmp.2021.03.021] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Revised: 02/04/2021] [Accepted: 03/13/2021] [Indexed: 10/21/2022] Open
|
30
|
Abdelrahman L, Al Ghamdi M, Collado-Mesa F, Abdel-Mottaleb M. Convolutional neural networks for breast cancer detection in mammography: A survey. Comput Biol Med 2021; 131:104248. [PMID: 33631497 DOI: 10.1016/j.compbiomed.2021.104248] [Citation(s) in RCA: 43] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2020] [Revised: 01/08/2021] [Accepted: 01/25/2021] [Indexed: 12/17/2022]
Abstract
Despite its proven record as a breast cancer screening tool, mammography remains labor-intensive and has recognized limitations, including low sensitivity in women with dense breast tissue. In the last ten years, Neural Network advances have been applied to mammography to help radiologists increase their efficiency and accuracy. This survey aims to present, in an organized and structured manner, the current knowledge base of convolutional neural networks (CNNs) in mammography. The survey first discusses traditional Computer Assisted Detection (CAD) and more recently developed CNN-based models for computer vision in mammography. It then presents and discusses the literature on available mammography training datasets. The survey then presents and discusses current literature on CNNs for four distinct mammography tasks: (1) breast density classification, (2) breast asymmetry detection and classification, (3) calcification detection and classification, and (4) mass detection and classification, including presenting and comparing the reported quantitative results for each task and the pros and cons of the different CNN-based approaches. Then, it offers real-world applications of CNN CAD algorithms by discussing current Food and Drug Administration (FDA) approved models. Finally, this survey highlights the potential opportunities for future work in this field. The material presented and discussed in this survey could serve as a road map for developing CNN-based solutions to improve mammographic detection of breast cancer further.
Affiliation(s)
- Leila Abdelrahman
- University of Miami, Department of Electrical and Computer Engineering, Memorial Dr, Coral Gables, FL, 33146, USA
- Manal Al Ghamdi
- Umm Al-Qura University, Department of Computer Science, Alawali, Mecca, 24381, Saudi Arabia
- Fernando Collado-Mesa
- University of Miami Miller School of Medicine, Department of Radiology, 1115 NW 14th Street, Miami, FL, 33136, USA
- Mohamed Abdel-Mottaleb
- University of Miami, Department of Electrical and Computer Engineering, Memorial Dr, Coral Gables, FL, 33146, USA
|
31
|
Chugh G, Kumar S, Singh N. Survey on Machine Learning and Deep Learning Applications in Breast Cancer Diagnosis. Cognit Comput 2021. [DOI: 10.1007/s12559-020-09813-6] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
|
32
|
|
33
|
Fan M, Zheng H, Zheng S, You C, Gu Y, Gao X, Peng W, Li L. Mass Detection and Segmentation in Digital Breast Tomosynthesis Using 3D-Mask Region-Based Convolutional Neural Network: A Comparative Analysis. Front Mol Biosci 2020; 7:599333. [PMID: 33263004 PMCID: PMC7686533 DOI: 10.3389/fmolb.2020.599333] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2020] [Accepted: 10/21/2020] [Indexed: 01/04/2023] Open
Abstract
Digital breast tomosynthesis (DBT) is an emerging breast cancer screening and diagnostic modality that uses quasi-three-dimensional breast images to provide detailed assessments of the dense tissue within the breast. In this study, a framework of a 3D-Mask region-based convolutional neural network (3D-Mask RCNN) computer-aided diagnosis (CAD) system was developed for mass detection and segmentation with a comparative analysis of performance on patient subgroups with different clinicopathological characteristics. To this end, 364 samples of DBT data were used and separated into a training dataset (n = 201) and a testing dataset (n = 163). The detection and segmentation results were evaluated on the testing set and on subgroups of patients with different characteristics, including different age ranges, lesion sizes, histological types, lesion shapes and breast densities. The results of our 3D-Mask RCNN framework were compared with those of the 2D-Mask RCNN and Faster RCNN methods. For lesion-based mass detection, the sensitivity of 3D-Mask RCNN-based CAD was 90% with 0.8 false positives (FPs) per lesion, whereas the sensitivity of the 2D-Mask RCNN- and Faster RCNN-based CAD was 90% at 1.3 and 2.37 FPs/lesion, respectively. For breast-based mass detection, the 3D-Mask RCNN generated a sensitivity of 90% at 0.83 FPs/breast, and this framework is better than the 2D-Mask RCNN and Faster RCNN, which generated a sensitivity of 90% with 1.24 and 2.38 FPs/breast, respectively. Additionally, the 3D-Mask RCNN achieved significantly (p < 0.05) better performance than the 2D methods on subgroups of samples with characteristics of ages ranged from 40 to 49 years, malignant tumors, spiculate and irregular masses and dense breast, respectively. Lesion segmentation using the 3D-Mask RCNN achieved an average precision (AP) of 0.934 and a false negative rate (FNR) of 0.053, which are better than those achieved by the 2D methods. 
The results suggest that the 3D-Mask RCNN CAD framework has advantages over 2D-based mass detection on both the whole data and subgroups with different characteristics.
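The sensitivity and false-positive figures quoted above are FROC operating points. A rough illustration (hypothetical data and function names, not the authors' code) of how one such point is computed from scored, matched detections:

```python
# One FROC operating point: lesion-based sensitivity and false positives
# per volume, computed from matched detections at a score threshold.
# (Illustrative sketch; the data and names are hypothetical.)

def froc_point(detections, n_lesions, n_volumes, threshold):
    """detections: (score, is_true_positive) pairs pooled over all volumes."""
    kept = [(s, tp) for s, tp in detections if s >= threshold]
    tps = sum(1 for _, tp in kept if tp)
    fps = sum(1 for _, tp in kept if not tp)
    return tps / n_lesions, fps / n_volumes

# Toy example: 10 lesions over 5 volumes; 9 lesions found confidently,
# 4 confident false alarms, 10 low-score false alarms cut by the threshold.
dets = [(0.9, True)] * 9 + [(0.8, False)] * 4 + [(0.3, False)] * 10
sens, fp_rate = froc_point(dets, n_lesions=10, n_volumes=5, threshold=0.5)
# sens -> 0.9 (90% sensitivity), fp_rate -> 0.8 FPs per volume
```

Sweeping the threshold over all detection scores traces the full FROC curve from which such operating points are read.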
Affiliation(s)
- Ming Fan
- Institute of Biomedical Engineering and Instrumentation, Hangzhou Dianzi University, Hangzhou, China
- Huizhong Zheng
- Institute of Biomedical Engineering and Instrumentation, Hangzhou Dianzi University, Hangzhou, China
- Shuo Zheng
- Institute of Biomedical Engineering and Instrumentation, Hangzhou Dianzi University, Hangzhou, China
- Chao You
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Yajia Gu
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Xin Gao
- Computational Bioscience Research Center (CBRC), Computer, Electrical and Mathematical Sciences and Engineering Division (CEMSE), King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia
- Weijun Peng
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Lihua Li
- Institute of Biomedical Engineering and Instrumentation, Hangzhou Dianzi University, Hangzhou, China
|
34
|
Ahmad HM, Khan MJ, Yousaf A, Ghuffar S, Khurshid K. Deep Learning: A Breakthrough in Medical Imaging. Curr Med Imaging 2020; 16:946-956. [PMID: 33081657 DOI: 10.2174/1573405615666191219100824] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2019] [Revised: 11/25/2019] [Accepted: 12/06/2019] [Indexed: 02/08/2023]
Abstract
Deep learning has attracted great attention in the medical imaging community as a promising solution for automated, fast and accurate medical image analysis, which is mandatory for quality healthcare. Convolutional neural networks and their variants have become the most preferred and widely used deep learning models in medical image analysis. In this paper, concise overviews of the modern deep learning models applied in medical image analysis are provided, and the key tasks performed by deep learning models, i.e., classification, segmentation, retrieval, detection, and registration, are reviewed in detail. Some recent research has shown that deep learning models can outperform medical experts in certain tasks. With the significant breakthroughs made by deep learning methods, it is expected that patients will soon be able to safely and conveniently interact with AI-based medical systems, and such intelligent systems will actually improve patient healthcare. There are various complexities and challenges involved in deep learning-based medical image analysis, such as limited datasets. However, researchers are actively working in this area to mitigate these challenges and further improve healthcare with AI.
Affiliation(s)
- Hafiz Mughees Ahmad
- Artificial Intelligence and Computer Vision (iVision) Lab, Department of Electrical Engineering, Institute of Space Technology, Islamabad, Pakistan
- Muhammad Jaleed Khan
- Artificial Intelligence and Computer Vision (iVision) Lab, Department of Electrical Engineering, Institute of Space Technology, Islamabad, Pakistan
- Adeel Yousaf
- Artificial Intelligence and Computer Vision (iVision) Lab, Department of Electrical Engineering, Institute of Space Technology, Islamabad, Pakistan; Department of Avionics Engineering, Institute of Space Technology, Islamabad, Pakistan
- Sajid Ghuffar
- Artificial Intelligence and Computer Vision (iVision) Lab, Department of Electrical Engineering, Institute of Space Technology, Islamabad, Pakistan; Department of Space Science, Institute of Space Technology, Islamabad, Pakistan
- Khurram Khurshid
- Artificial Intelligence and Computer Vision (iVision) Lab, Department of Electrical Engineering, Institute of Space Technology, Islamabad, Pakistan
|
35
|
Tran WT, Sadeghi-Naini A, Lu FI, Gandhi S, Meti N, Brackstone M, Rakovitch E, Curpen B. Computational Radiology in Breast Cancer Screening and Diagnosis Using Artificial Intelligence. Can Assoc Radiol J 2020; 72:98-108. [DOI: 10.1177/0846537120949974] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
Breast cancer screening has been shown to significantly reduce mortality in women. The increased utilization of screening examinations has led to growing demands for rapid and accurate diagnostic reporting. In modern breast imaging centers, full-field digital mammography (FFDM) has replaced traditional analog mammography, and this has opened new opportunities for developing computational frameworks to automate detection and diagnosis. Artificial intelligence (AI), and its subdomain of deep learning, is showing promising results and improvements in diagnostic accuracy compared to previous computer-based methods, known as computer-aided detection and diagnosis. In this commentary, we review the current status of computational radiology, with a focus on deep neural networks used in breast cancer screening and diagnosis. Recent studies are developing a new generation of computer-aided detection and diagnosis systems, as well as leveraging AI-driven tools to efficiently interpret digital mammograms and breast tomosynthesis imaging. The use of AI in computational radiology necessitates transparency and rigorous testing. However, the overall impact of AI on radiology workflows will potentially yield more efficient and standardized processes, as well as improve the level of care to patients through high diagnostic accuracy.
Affiliation(s)
- William T. Tran
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Canada
- Ali Sadeghi-Naini
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, Canada
- Department of Electrical Engineering and Computer Science, Lassonde School of Engineering, York University, Toronto, Canada
- Fang-I Lu
- Department of Laboratory Medicine and Molecular Diagnostics, Sunnybrook Health Sciences Centre, Toronto, Canada
- Sonal Gandhi
- Division of Medical Oncology, Sunnybrook Health Sciences Centre, Toronto, Canada
- Department of Medicine, University of Toronto, Toronto, Canada
- Nicholas Meti
- Division of Medical Oncology, Sunnybrook Health Sciences Centre, Toronto, Canada
- Muriel Brackstone
- Department of Surgical Oncology, London Health Sciences Centre, London, Ontario
- Eileen Rakovitch
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Canada
- Belinda Curpen
- Division of Breast Imaging, Sunnybrook Health Sciences Centre, Toronto, Canada
- Department of Medical Imaging, University of Toronto, Toronto, Canada
|
36
|
Geras KJ, Mann RM, Moy L. Artificial Intelligence for Mammography and Digital Breast Tomosynthesis: Current Concepts and Future Perspectives. Radiology 2019; 293:246-259. [PMID: 31549948 DOI: 10.1148/radiol.2019182627] [Citation(s) in RCA: 168] [Impact Index Per Article: 28.0] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Although computer-aided diagnosis (CAD) is widely used in mammography, conventional CAD programs that use prompts to indicate potential cancers on the mammograms have not led to an improvement in diagnostic accuracy. Because of the advances in machine learning, especially with use of deep (multilayered) convolutional neural networks, artificial intelligence has undergone a transformation that has improved the quality of the predictions of the models. Recently, such deep learning algorithms have been applied to mammography and digital breast tomosynthesis (DBT). In this review, the authors explain how deep learning works in the context of mammography and DBT and define the important technical challenges. Subsequently, they discuss the current status and future perspectives of artificial intelligence-based clinical applications for mammography, DBT, and radiomics. Available algorithms are advanced and approach the performance of radiologists, especially for cancer detection and risk prediction at mammography. However, clinical validation is largely lacking, and it is not clear how the power of deep learning should be used to optimize practice. Further development of deep learning models is necessary for DBT, and this requires collection of larger databases. It is expected that deep learning will eventually have an important role in DBT, including the generation of synthetic images.
Affiliation(s)
- Krzysztof J Geras, Ritse M Mann, Linda Moy
- From the Center for Biomedical Imaging (K.J.G., L.M.), Center for Data Science (K.J.G.), Center for Advanced Imaging Innovation and Research (L.M.), and Laura and Isaac Perlmutter Cancer Center (L.M.), New York University School of Medicine, 160 E 34th St, 3rd Floor, New York, NY 10016; Department of Radiology and Nuclear Medicine, Radboud University Medical Centre, Nijmegen, the Netherlands (R.M.M.); and Department of Radiology, the Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Amsterdam, the Netherlands (R.M.M.)
|
37
|
Wang W, Langlois R, Langlois M, Genchev GZ, Wang X, Lu H. Functional Site Discovery From Incomplete Training Data: A Case Study With Nucleic Acid-Binding Proteins. Front Genet 2019; 10:729. [PMID: 31543893 PMCID: PMC6729729 DOI: 10.3389/fgene.2019.00729] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2018] [Accepted: 07/11/2019] [Indexed: 12/27/2022] Open
Abstract
Function annotation efforts provide a foundation to our understanding of cellular processes and the functioning of the living cell. This motivates high-throughput computational methods to characterize new protein members of a particular function. Research work has focused on discriminative machine-learning methods, which promise to make efficient, de novo predictions of protein function. Furthermore, available function annotation exists predominantly for individual proteins rather than residues, of which only a subset is necessary for the conveyance of a particular function. This limits discriminative approaches to predicting functions for which there is sufficient residue-level annotation, e.g., identification of DNA-binding proteins or where an excellent global representation can be divined. Complete understanding of the various functions of proteins requires discovery and functional annotation at the residue level. Herein, we cast this problem into the setting of multiple-instance learning, which only requires knowledge of the protein's function yet identifies functionally relevant residues and need not rely on homology. We developed a new multiple-instance learning algorithm derived from AdaBoost and benchmarked this algorithm against two well-studied protein function prediction tasks: annotating proteins that bind DNA and RNA. This algorithm outperforms certain previous approaches in annotating protein function while identifying functionally relevant residues involved in binding both DNA and RNA, and on one protein-DNA benchmark, it achieves near perfect classification.
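The key idea above, training from protein-level (bag) labels while scoring individual residues (instances), rests on the standard multiple-instance assumption: a bag is positive if at least one of its instances is. A minimal sketch with hypothetical scores (not the authors' AdaBoost-derived classifier):

```python
# Multiple-instance learning (MIL) aggregation, illustrated with
# hypothetical residue scores. Under the standard MIL assumption, a
# protein (bag) is positive if any residue (instance) scores highly,
# so only bag-level labels are needed for training and evaluation.

def bag_score(instance_scores):
    # The bag score is the maximum instance score.
    return max(instance_scores)

def bag_label(instance_scores, threshold=0.5):
    # Bag is positive iff its best instance clears the threshold.
    return bag_score(instance_scores) >= threshold

# A "DNA-binding" protein: one high-scoring residue makes the bag positive.
assert bag_label([0.1, 0.2, 0.9]) is True
# No residue scores highly, so the bag is negative.
assert bag_label([0.1, 0.2, 0.3]) is False
```

The max aggregator also localizes function: the arg-max instances are the residues the model deems functionally relevant.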
Affiliation(s)
- Wenchuan Wang
- SJTU-Yale Joint Center for Biostatistics and Data Science, Department of Bioinformatics and Biostatistics, College of Life Science and Biotechnology, Shanghai Jiao Tong University, Shanghai, China
- Robert Langlois
- Department of Bioengineering and Department of Computer Science, University of Illinois at Chicago, Chicago, IL, United States
- Marina Langlois
- Department of Bioengineering and Department of Computer Science, University of Illinois at Chicago, Chicago, IL, United States
- Georgi Z Genchev
- SJTU-Yale Joint Center for Biostatistics and Data Science, Department of Bioinformatics and Biostatistics, College of Life Science and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; Department of Bioengineering and Department of Computer Science, University of Illinois at Chicago, Chicago, IL, United States; Bulgarian Institute for Genomics and Precision Medicine, Sofia, Bulgaria
- Xiaolei Wang
- SJTU-Yale Joint Center for Biostatistics and Data Science, Department of Bioinformatics and Biostatistics, College of Life Science and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
- Hui Lu
- SJTU-Yale Joint Center for Biostatistics and Data Science, Department of Bioinformatics and Biostatistics, College of Life Science and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; Department of Bioengineering and Department of Computer Science, University of Illinois at Chicago, Chicago, IL, United States; Center for Biomedical Informatics, Shanghai Children's Hospital, Shanghai, China
|
38
|
|
39
|
Bliznakova K, Dukov N, Feradov F, Gospodinova G, Bliznakov Z, Russo P, Mettivier G, Bosmans H, Cockmartin L, Sarno A, Kostova-Lefterova D, Encheva E, Tsapaki V, Bulyashki D, Buliev I. Development of breast lesions models database. Phys Med 2019; 64:293-303. [PMID: 31387779 DOI: 10.1016/j.ejmp.2019.07.017] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/05/2018] [Revised: 07/01/2019] [Accepted: 07/22/2019] [Indexed: 12/11/2022] Open
Abstract
PURPOSE We present the development and the current state of the MaXIMA Breast Lesions Models Database, which is intended to provide researchers with both segmented and mathematical computer-based breast lesion models with realistic shape. METHODS The database contains various 3D images of breast lesions of irregular shapes, collected from routine patient examinations or dedicated scientific experiments. It also contains images of simulated tumour models. In order to extract the 3D shapes of the breast cancers from patient images, an in-house segmentation algorithm was developed for the analysis of 50 tomosynthesis sets from patients diagnosed with malignant and benign lesions. In addition, computed tomography (CT) scans of three breast mastectomy cases were added, as well as five whole-body CT scans. The segmentation algorithm includes a series of image processing operations and region-growing techniques with minimal interaction from the user, with the purpose of finding and segmenting the areas of the lesion. Mathematically modelled computational breast lesions, also stored in the database, are based on the 3D random walk approach. RESULTS The MaXIMA Imaging Database currently contains 50 breast cancer models obtained by segmentation of 3D patient breast tomosynthesis images; 8 models obtained by segmentation of whole body and breast cadavers CT images; and 80 models based on a mathematical algorithm. Each record in the database is supported with relevant information. Two applications of the database are highlighted: inserting the lesions into computationally generated breast phantoms and an approach in generating mammography images with variously shaped breast lesion models from the database for evaluation purposes. Both cases demonstrate the implementation of multiple scenarios and of an unlimited number of cases, which can be used for further software modelling and investigation of breast imaging techniques. 
The created database interface is web-based, user friendly and is intended to be made freely accessible through internet after the completion of the MaXIMA project. CONCLUSIONS The developed database will serve as an imaging data source for researchers, working on breast diagnostic imaging and on improving early breast cancer detection techniques, using existing or newly developed imaging modalities.
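The mathematically modelled lesions described above are based on a 3D random walk. A minimal sketch of that idea (grid size, step rule, and step count are illustrative assumptions, not the MaXIMA parameters):

```python
import numpy as np

# Minimal 3D random-walk lesion model in the spirit the abstract
# describes: starting from a seed voxel, a random walk marks voxels,
# producing an irregularly shaped connected cluster.
# (Illustrative sketch; parameters are not from the MaXIMA project.)

def random_walk_lesion(size=32, n_steps=500, seed=0):
    rng = np.random.default_rng(seed)
    vol = np.zeros((size, size, size), dtype=bool)
    pos = np.array([size // 2] * 3)          # seed at the grid centre
    for _ in range(n_steps):
        vol[tuple(pos)] = True               # mark the current voxel
        step = rng.integers(-1, 2, size=3)   # move by -1/0/+1 per axis
        pos = np.clip(pos + step, 0, size - 1)  # stay inside the grid
    return vol

lesion = random_walk_lesion()
# `lesion` is a boolean volume marking an irregular cluster around the centre.
```

A binary volume of this kind can then be voxelized into a computational breast phantom, which is how the database's two highlighted applications use the models.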
Affiliation(s)
- Kristina Bliznakova
- Laboratory of Computer Simulations in Medicine, Technical University of Varna, Varna, Bulgaria
- Nikolay Dukov
- Laboratory of Computer Simulations in Medicine, Technical University of Varna, Varna, Bulgaria
- Firgan Feradov
- Laboratory of Computer Simulations in Medicine, Technical University of Varna, Varna, Bulgaria
- Galja Gospodinova
- Laboratory of Computer Simulations in Medicine, Technical University of Varna, Varna, Bulgaria
- Zhivko Bliznakov
- Laboratory of Computer Simulations in Medicine, Technical University of Varna, Varna, Bulgaria
- Paolo Russo
- Dipartimento di Fisica "Ettore Pancini", Universita' di Napoli Federico II and INFN Sezione di Napoli, Napoli, Italy
- Giovanni Mettivier
- Dipartimento di Fisica "Ettore Pancini", Universita' di Napoli Federico II and INFN Sezione di Napoli, Napoli, Italy
- Hilde Bosmans
- Department of Radiology, Katholieke University of Leuven, Leuven, Belgium
- Lesley Cockmartin
- Department of Radiology, Katholieke University of Leuven, Leuven, Belgium
- Antonio Sarno
- Dipartimento di Fisica "Ettore Pancini", Universita' di Napoli Federico II and INFN Sezione di Napoli, Napoli, Italy
- Elitsa Encheva
- Radiotherapy Department, University Hospital "St. Marina", Medical University of Varna, Varna, Bulgaria
- Virginia Tsapaki
- Medical Physics Department, Konstantopoulio General Hospital, Nea Ionia, Attiki, Greece
- Daniel Bulyashki
- Surgery Department, University Hospital "St. Marina", Medical University of Varna, Varna, Bulgaria
- Ivan Buliev
- Laboratory of Computer Simulations in Medicine, Technical University of Varna, Varna, Bulgaria
|
40
|
Deepak S, Ameer PM. Brain tumor classification using deep CNN features via transfer learning. Comput Biol Med 2019; 111:103345. [PMID: 31279167 DOI: 10.1016/j.compbiomed.2019.103345] [Citation(s) in RCA: 303] [Impact Index Per Article: 50.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2019] [Revised: 06/26/2019] [Accepted: 06/26/2019] [Indexed: 11/28/2022]
Abstract
Brain tumor classification is an important problem in computer-aided diagnosis (CAD) for medical applications. This paper focuses on a 3-class classification problem to differentiate among glioma, meningioma and pituitary tumors, which form three prominent types of brain tumor. The proposed classification system adopts the concept of deep transfer learning and uses a pre-trained GoogLeNet to extract features from brain MRI images. Proven classifier models are integrated to classify the extracted features. The experiment follows a patient-level five-fold cross-validation process on an MRI dataset from figshare. The proposed system records a mean classification accuracy of 98%, outperforming all state-of-the-art methods. Other performance measures used in the study are the area under the curve (AUC), precision, recall, F-score and specificity. In addition, the paper addresses a practical aspect by evaluating the system with fewer training samples. The observations of the study imply that transfer learning is a useful technique when the availability of medical images is limited. The paper also provides an analytical discussion of misclassifications.
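The recipe described above, a frozen pre-trained network as feature extractor plus a conventional classifier trained on the extracted features, can be sketched as follows. Everything here is a stand-in: the random projection plays the role of GoogLeNet's penultimate layer, and nearest-centroid plays the role of the paper's classifier models.

```python
import numpy as np

# Transfer-learning sketch: a frozen "pre-trained" feature extractor
# plus a simple classifier fitted on the extracted features.
# (Illustrative only: the random projection stands in for GoogLeNet.)

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 8))               # frozen "pre-trained" weights

def extract_features(images):              # images: (n, 64) flattened inputs
    return images @ W                      # the extractor is never trained

def fit_centroids(feats, labels):          # nearest-centroid "classifier"
    return {int(c): feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(centroids, feats):
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(feats - centroids[c], axis=1)
                      for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

# Toy two-class data separated by a mean shift.
x0 = rng.normal(0.0, 1.0, size=(20, 64))
x1 = x0 + 3.0
X, y = np.vstack([x0, x1]), np.array([0] * 20 + [1] * 20)
centroids = fit_centroids(extract_features(X), y)
acc = (predict(centroids, extract_features(X)) == y).mean()
```

With a real backbone, `extract_features` would return the penultimate-layer activations of the pre-trained network; only the lightweight classifier on top is fitted to the medical images, which is why the approach works with limited data.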
Affiliation(s)
- S Deepak
- Department of Electronics & Communication Engineering, National Institute of Technology, Calicut, India
- P M Ameer
- Department of Electronics & Communication Engineering, National Institute of Technology, Calicut, India
|
41
|
Murtaza G, Shuib L, Abdul Wahab AW, Mujtaba G, Mujtaba G, Nweke HF, Al-garadi MA, Zulfiqar F, Raza G, Azmi NA. Deep learning-based breast cancer classification through medical imaging modalities: state of the art and research challenges. Artif Intell Rev 2019. [DOI: 10.1007/s10462-019-09716-5] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
|
42
|
de Oliveira HC, Mencattini A, Casti P, Catani JH, de Barros N, Gonzaga A, Martinelli E, da Costa Vieira MA. A cross-cutting approach for tracking architectural distortion locii on digital breast tomosynthesis slices. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.01.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
|
43
|
Fan M, Li Y, Zheng S, Peng W, Tang W, Li L. Computer-aided detection of mass in digital breast tomosynthesis using a faster region-based convolutional neural network. Methods 2019; 166:103-111. [PMID: 30771490 DOI: 10.1016/j.ymeth.2019.02.010] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2018] [Revised: 02/05/2019] [Accepted: 02/11/2019] [Indexed: 01/01/2023] Open
Abstract
Digital breast tomosynthesis (DBT) is a newly developed three-dimensional tomographic imaging modality in the field of breast cancer screening designed to alleviate the limitations of conventional digital mammography-based breast screening methods. A computer-aided detection (CAD) system was designed for masses in DBT using a faster region-based convolutional neural network (faster-RCNN). To this end, a data set was collected, including 89 patients with 105 masses. An efficient detection architecture of convolution neural network with a region proposal network (RPN) was used for each slice to generate region proposals (i.e., bounding boxes) with a mass likelihood score. In each DBT volume, a slice fusion procedure was used to merge the detection results on consecutive 2D slices into one 3D DBT volume. The performance of the CAD system was evaluated using free-response receiver operating characteristic (FROC) curves. Our RCNN-based CAD system was compared with a deep convolutional neural network (DCNN)-based CAD system. The RCNN-based CAD generated a performance with an area under the ROC (AUC) of 0.96, whereas the DCNN-based CAD achieved a performance with AUC of 0.92. For lesion-based mass detection, the sensitivity of RCNN-based CAD was 90% at 1.54 false positive (FP) per volume, whereas the sensitivity of DCNN-based CAD was 90% at 2.81 FPs/volume. For breast-based mass detection, RCNN-based CAD generated a sensitivity of 90% at 0.76 FP/breast, which is significantly increased compared with the DCNN-based CAD with a sensitivity of 90% at 2.25 FPs/breast. The results suggest that the faster R-CNN has the potential to augment the prescreening and FP reduction in the CAD system for masses.
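The slice-fusion step described above, merging per-slice 2D detections into one 3D volume detection, can be sketched as IoU-based grouping across consecutive slices (an illustrative reconstruction, not the authors' implementation):

```python
# Slice fusion for DBT detection, sketched: 2D boxes found on consecutive
# slices are merged into a 3D detection when they overlap sufficiently
# in-plane. (Illustrative; thresholds and bookkeeping are assumptions.)

def iou(a, b):
    # a, b: boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def fuse_slices(slice_dets, iou_thr=0.5):
    """slice_dets: per-slice lists of (box, score).
    Returns fused tracks with the first box, best score, and slice range."""
    fused = []
    for z, dets in enumerate(slice_dets):
        for box, score in dets:
            for track in fused:
                # Extend a track that ended on the previous slice and overlaps.
                if track["last"] == z - 1 and iou(track["box"], box) >= iou_thr:
                    track["last"] = z
                    track["score"] = max(track["score"], score)
                    break
            else:
                fused.append({"box": box, "score": score, "first": z, "last": z})
    return fused
```

Each fused track is then scored once per 3D volume (here, by its best slice score), which is what makes breast- and volume-level FROC evaluation possible.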
Affiliation(s)
- Ming Fan
- Institute of Biomedical Engineering and Instrumentation, Hangzhou Dianzi University, High Education Zone, Hangzhou 310018, China
- Yuanzhe Li
- Institute of Biomedical Engineering and Instrumentation, Hangzhou Dianzi University, High Education Zone, Hangzhou 310018, China
- Shuo Zheng
- Institute of Biomedical Engineering and Instrumentation, Hangzhou Dianzi University, High Education Zone, Hangzhou 310018, China
- Weijun Peng
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Wei Tang
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Lihua Li
- Institute of Biomedical Engineering and Instrumentation, Hangzhou Dianzi University, High Education Zone, Hangzhou 310018, China
|
44
|
Boosting support vector machines for cancer discrimination tasks. Comput Biol Med 2018; 101:236-249. [DOI: 10.1016/j.compbiomed.2018.08.006] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2018] [Revised: 07/31/2018] [Accepted: 08/04/2018] [Indexed: 01/17/2023]
|