1
Sharifi G, Hajibeygi R, Zamani SAM, Easa AM, Bahrami A, Eshraghi R, Moafi M, Ebrahimi MJ, Fathi M, Mirjafari A, Chan JS, Dixe de Oliveira Santo I, Anar MA, Rezaei O, Tu LH. Diagnostic performance of neural network algorithms in skull fracture detection on CT scans: a systematic review and meta-analysis. Emerg Radiol 2025; 32:97-111. [PMID: 39680295] [DOI: 10.1007/s10140-024-02300-7]
Abstract
BACKGROUND AND AIM The intricacy of skull fractures and the complexity of the underlying anatomy pose diagnostic hurdles for radiologists evaluating computed tomography (CT) scans. The shortage of radiologists and the growing demand for rapid, accurate fracture diagnosis highlight the need for automated diagnostic tools. Convolutional neural networks (CNNs) are a promising class of deep learning (DL) models for improving diagnostic accuracy in medical imaging. The objective of this systematic review and meta-analysis is to assess how well CNN models diagnose skull fractures on CT images. METHODS PubMed, Scopus, and Web of Science were searched for studies published before February 2024 that used CNN models to detect skull fractures on CT scans. Meta-analyses were conducted for the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy. Egger's and Begg's tests were used to assess publication bias. RESULTS The meta-analysis included 11 studies with 20,798 patients. The pooled AUC for CNN models that incorporated pre-training for transfer learning in their architecture was 0.96 ± 0.02. The pooled sensitivity and specificity were 1.0 and 0.93, respectively, and the pooled accuracy was 0.92 ± 0.04. The studies showed heterogeneity, which was explained by differences in model topologies, training regimes, and validation techniques. No significant publication bias was detected. CONCLUSION CNN models perform well in identifying skull fractures on CT scans. Although there is considerable heterogeneity and possibly publication bias, the results suggest that CNNs have the potential to improve diagnostic accuracy in the imaging of acute skull trauma. To further enhance the practical applicability of these models, future studies could concentrate on the utility of DL models in prospective clinical trials.
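The pooled values reported above are typical outputs of inverse-variance weighting, a standard building block of meta-analyses. As an illustration only (the review's exact pooling model is not stated here, and the per-study numbers below are invented for demonstration), a fixed-effect pooled estimate can be sketched as:

```python
# Fixed-effect (inverse-variance) pooling of per-study estimates.
# The data below are hypothetical, NOT the review's actual studies.

def pooled_estimate(estimates, std_errors):
    """Combine per-study estimates with inverse-variance weights."""
    weights = [1.0 / se ** 2 for se in std_errors]  # precision of each study
    total = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, estimates)) / total
    pooled_se = (1.0 / total) ** 0.5  # standard error of the pooled value
    return pooled, pooled_se

# Hypothetical per-study AUCs and their standard errors:
auc, se = pooled_estimate([0.94, 0.97, 0.95], [0.02, 0.01, 0.03])
```

Random-effects models add a between-study variance term to each weight, which is how meta-analyses usually handle the kind of heterogeneity the review reports.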
Affiliation(s)
- Guive Sharifi
- Skull Base Research Center, Loghman Hakim Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ramtin Hajibeygi
- Tehran University of Medical Sciences, School of Medicine, Tehran, Iran
- Ahmed Mohamedbaqer Easa
- Department of Radiology Technology, College of Health and Medical Technology, Al-Ayen Iraqi University, Thi-Qar, 64001, Iraq
- Maral Moafi
- Cell Biology and Anatomical Sciences, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mohammad Javad Ebrahimi
- Cell Biology and Anatomical Sciences, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mobina Fathi
- Skull Base Research Center, Loghman Hakim Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Arshia Mirjafari
- Department of Radiological Sciences, University of California, Los Angeles, CA, USA
- College of Osteopathic Medicine of the Pacific, Western University of Health Sciences, Pomona, CA, USA
- Janine S Chan
- Keck School of Medicine of USC, Los Angeles, CA, USA
- Omidvar Rezaei
- Skull Base Research Center, Loghman Hakim Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Long H Tu
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, CT, USA
2
Sun L, Han B, Jiang W, Liu W, Liu B, Tao D, Yu Z, Li C. Multi-scale region selection network in deep features for full-field mammogram classification. Med Image Anal 2025; 100:103399. [PMID: 39615148] [DOI: 10.1016/j.media.2024.103399]
Abstract
Early diagnosis and treatment of breast cancer can effectively reduce mortality. Since mammography is one of the most common methods for early diagnosis of breast cancer, classifying mammogram images is an important task for computer-aided diagnosis (CAD) systems. With the development of deep learning in CAD, deep convolutional neural networks have been shown to classify breast tumor patches with high quality, which has led most previous CNN-based full-field mammography classification methods to rely on region of interest (ROI) or segmentation annotations so that the model can locate and focus on small tumor regions. However, this dependence on ROIs greatly limits the development of CAD, because obtaining large numbers of reliable ROI annotations is expensive and difficult. Some full-field mammography classification algorithms avoid the dependence on ROIs through multi-stage training or multiple feature extractors, which increases the model's computational cost and feature redundancy. To reduce the cost of model training and make full use of the feature extraction capability of CNNs, we propose a deep multi-scale region selection network (MRSN) in deep features for end-to-end training that classifies full-field mammograms without ROI or segmentation annotations. Inspired by multiple-instance learning and patch classifiers, MRSN filters the feature information and retains only the features of the tumor region, bringing the performance of the full-field image classifier closer to that of a patch classifier. MRSN first scores different regions at different scales to obtain the locations of tumor regions. A few high-scoring regions are then selected as feature representations of the entire image, allowing the model to focus on the tumor region. Experiments on two public datasets and one private dataset demonstrate that the proposed MRSN achieves state-of-the-art performance.
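The core selection step described above can be sketched generically: score candidate regions, keep only the top-k highest-scoring feature vectors, and pool them into a whole-image representation. The function and variable names below are illustrative assumptions, not MRSN's actual implementation.

```python
# Toy score-based region selection: the whole-image representation is the
# average of the k highest-scoring region feature vectors, so low-scoring
# (background) regions are filtered out before classification.

def select_top_k_regions(region_features, region_scores, k=2):
    """Average the feature vectors of the k highest-scoring regions."""
    ranked = sorted(zip(region_scores, region_features),
                    key=lambda pair: pair[0], reverse=True)
    kept = [features for _, features in ranked[:k]]
    dim = len(kept[0])
    return [sum(f[i] for f in kept) / len(kept) for i in range(dim)]

# Three candidate regions with 2-D feature vectors and their scores:
features = [[1.0, 0.0], [0.0, 1.0], [10.0, 10.0]]
scores = [0.1, 0.9, 0.8]
representation = select_top_k_regions(features, scores, k=2)
```

In the paper's setting the scores come from the network itself and selection happens at multiple scales; this sketch only shows why discarding low-scoring regions pushes a full-field classifier toward patch-classifier behavior.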
Affiliation(s)
- Luhao Sun
- Breast Cancer Center, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China
- Bowen Han
- School of Computer Science and Technology, Tongji University, Shanghai 201804, China
- Wenzong Jiang
- The College of Oceanography and Space Informatics, China University of Petroleum (East China), Qingdao 266580, China
- Weifeng Liu
- The College of Control Science and Engineering, China University of Petroleum (East China), Qingdao 266580, China
- Baodi Liu
- The College of Control Science and Engineering, China University of Petroleum (East China), Qingdao 266580, China
- Dapeng Tao
- The School of Information Science and Engineering, Yunnan University, Yunnan 650504, China; Yunnan United Vision Technology Co., Ltd., Yunnan 650504, China
- Zhiyong Yu
- Breast Cancer Center, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China
- Chao Li
- Breast Cancer Center, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan 250117, China
3
Luo L, Wang X, Lin Y, Ma X, Tan A, Chan R, Vardhanabhuti V, Chu WC, Cheng KT, Chen H. Deep Learning in Breast Cancer Imaging: A Decade of Progress and Future Directions. IEEE Rev Biomed Eng 2025; 18:130-151. [PMID: 38265911] [DOI: 10.1109/rbme.2024.3357877]
Abstract
Since 2020, breast cancer has had the highest incidence rate of all malignancies worldwide. Breast imaging plays a significant role in early diagnosis and intervention to improve the outcomes of breast cancer patients. Over the past decade, deep learning has shown remarkable progress in breast cancer imaging analysis, holding great promise for interpreting the rich information and complex context of breast imaging modalities. Considering the rapid improvement of deep learning technology and the increasing severity of breast cancer, it is critical to summarize past progress and identify the future challenges to be addressed. This paper provides an extensive review of deep learning-based breast cancer imaging research, covering studies on mammograms, ultrasound, magnetic resonance imaging, and digital pathology images over the past decade. The major deep learning methods and their applications in imaging-based screening, diagnosis, treatment response prediction, and prognosis are elaborated and discussed. Drawing on the findings of this survey, we present a comprehensive discussion of the challenges and potential avenues for future research in deep learning-based breast cancer imaging.
4
Watson M, Chambers P, Steventon L, Harmsworth King J, Ercia A, Shaw H, Al Moubayed N. From prediction to practice: mitigating bias and data shift in machine-learning models for chemotherapy-induced organ dysfunction across unseen cancers. BMJ Oncology 2024; 3:e000430. [PMID: 39886186] [PMCID: PMC11557724] [DOI: 10.1136/bmjonc-2024-000430]
Abstract
Objectives Routine monitoring of renal and hepatic function during chemotherapy ensures that treatment-related organ damage has not occurred and that clearance of subsequent treatment is not hindered; however, the frequency and timing of such monitoring are not optimal. Concerns about model bias and data heterogeneity have hampered the deployment of machine learning (ML) in clinical practice. This study aims to develop models that could support individualised decisions on the timing of renal and hepatic monitoring while exploring the effect of data shift on model performance. Methods and analysis We used retrospective data from three UK hospitals to develop and validate ML models predicting unacceptable rises in creatinine/bilirubin after cycle 3 for patients undergoing treatment for the following cancers: breast, colorectal, lung, ovarian and diffuse large B-cell lymphoma. Results We extracted 3614 patients with no missing blood test data across cycles 1-6 of chemotherapy treatment. We improved on previous work by including predictions after cycle 3. Optimised for sensitivity, we achieved F2 scores of 0.7773 (bilirubin) and 0.6893 (creatinine) on unseen data. Performance was consistent on tumour types unseen during training (F2 bilirubin: 0.7423, F2 creatinine: 0.6820). Conclusion Our technique highlights the effectiveness of ML in clinical settings, demonstrating the potential to improve the delivery of care. Notably, our ML models can generalise to unseen tumour types. We propose gold-standard bias mitigation steps for ML models: evaluation on multisite data, thorough patient population analysis, and both formalised bias measures and model performance comparisons on patient subgroups. We demonstrate that data aggregation techniques can have unintended consequences for model bias.
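The F2 scores reported above weight recall four times as heavily as precision, which suits a setting where missing true organ dysfunction is costlier than a false alert. A generic F-beta computation from confusion-matrix counts is shown below; the example counts are hypothetical, not the study's data.

```python
# Generic F-beta score from true positives, false positives, false negatives.
# beta = 2 (the F2 used in the study above) favors recall over precision.

def f_beta(tp, fp, fn, beta=2.0):
    """F-beta = (1 + b^2) * P * R / (b^2 * P + R)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical model: 60 true alerts, 40 false alerts, 10 missed cases.
score = f_beta(60, 40, 10)
```

With these made-up counts, recall (0.857) dominates the result despite the mediocre precision (0.6), which is exactly the behavior one wants when optimizing for sensitivity.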
Affiliation(s)
- Matthew Watson
- Department of Computer Science, Durham University, Durham, UK
- Cancer Division, University College London Hospitals NHS Foundation Trust, London, UK
- Pinkie Chambers
- Cancer Division, University College London Hospitals NHS Foundation Trust, London, UK
- School of Pharmacy, University College London, London, UK
- Luke Steventon
- Cancer Division, University College London Hospitals NHS Foundation Trust, London, UK
- School of Pharmacy, University College London, London, UK
- Heather Shaw
- Cancer Division, University College London Hospitals NHS Foundation Trust, London, UK
- Mount Vernon Cancer Centre, Northwood, UK
- Noura Al Moubayed
- Department of Computer Science, Durham University, Durham, UK
- Evergreen Life Ltd, Manchester, UK
5
Jones MA, Zhang K, Faiz R, Islam W, Jo J, Zheng B, Qiu Y. Utilizing Pseudo Color Image to Improve the Performance of Deep Transfer Learning-Based Computer-Aided Diagnosis Schemes in Breast Mass Classification. J Imaging Inform Med 2024. [PMID: 39455542] [DOI: 10.1007/s10278-024-01237-0]
Abstract
The purpose of this study is to investigate the impact of using morphological information when classifying suspicious breast lesions. The widespread use of deep transfer learning can significantly improve the performance of mammogram-based CADx schemes. However, digital mammograms are grayscale images, while deep learning models are typically optimized on natural images containing three channels; grayscale mammograms must therefore be converted into three-channel images before being input to deep transfer models. This study develops a novel pseudo color image generation method that utilizes mass contour information to enhance classification performance. A total of 830 breast cancer cases were retrospectively collected, comprising 310 benign and 520 malignant cases. For each case, four regions of interest (ROIs) were collected from the grayscale images of the CC and MLO views of the two breasts. Seven pseudo color image sets were then generated as input for the deep learning models, created from combinations of the original grayscale image, a histogram-equalized image, a bilaterally filtered image, and a segmented mass. The output features from four identical pre-trained deep learning models were concatenated and then processed by a support vector machine-based classifier to generate the final benign/malignant labels. The performance of each image set was evaluated and compared. The results demonstrate that the pseudo color sets containing the manually segmented mass performed significantly better than all other sets, achieving an AUC (area under the ROC curve) of up to 0.889 ± 0.012 and an overall accuracy of up to 0.816 ± 0.020. At the same time, the performance improvement depends on the accuracy of the mass segmentation. These results support our hypothesis that adding accurately segmented mass contours provides complementary information, thereby enhancing the performance of the deep transfer model in classifying suspicious breast lesions.
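The pseudo color idea above, deriving three channels from one grayscale image and stacking them so a pre-trained RGB network can consume a mammogram, can be sketched minimally. The channel choices here (raw, CDF-equalized, 3x3 mean-smoothed) are simplified stand-ins for illustration; the study's actual sets use bilateral filtering and a segmented mass region.

```python
# Build a 3-channel "pseudo color" image from a single-channel grayscale
# image (given as a 2-D list of 0..255 ints).

def equalize(img, levels=256):
    """Simple histogram equalization via the cumulative distribution."""
    flat = [v for row in img for v in row]
    hist = [0] * levels
    for v in flat:
        hist[v] += 1
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    n = len(flat)
    return [[round(cdf[v] / n * (levels - 1)) for v in row] for row in img]

def mean_smooth(img):
    """3x3 mean filter with edge clamping (stand-in for bilateral filtering)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[yy][xx]
                      for yy in range(max(0, y - 1), min(h, y + 2))
                      for xx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(window) // len(window)
    return out

def pseudo_color(img):
    """Stack (raw, equalized, smoothed) into one 3-tuple per pixel."""
    eq, sm = equalize(img), mean_smooth(img)
    return [[(img[y][x], eq[y][x], sm[y][x]) for x in range(len(img[0]))]
            for y in range(len(img))]
```

Each derived channel carries a different view of the same tissue, which is what lets a network pre-trained on RGB photographs extract complementary features from a single grayscale source.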
Affiliation(s)
- Meredith A Jones
- Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK, 73019, USA
- Ke Zhang
- Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK, 73019, USA
- Rowzat Faiz
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, 73019, USA
- Warid Islam
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, 73019, USA
- Javier Jo
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, 73019, USA
- Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, 73019, USA
- Yuchen Qiu
- Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK, 73019, USA
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK, 73019, USA
6
Han B, Sun L, Li C, Yu Z, Jiang W, Liu W, Tao D, Liu B. Deep Location Soft-Embedding-Based Network With Regional Scoring for Mammogram Classification. IEEE Trans Med Imaging 2024; 43:3137-3148. [PMID: 38625766] [DOI: 10.1109/tmi.2024.3389661]
Abstract
Early detection and treatment of breast cancer can significantly reduce patient mortality, and mammography is an effective method for early screening. Deep learning-based computer-aided diagnosis (CAD) of mammograms can assist radiologists in making more objective and accurate judgments. However, existing methods often depend on datasets with manual segmentation annotations. In addition, because of the large image sizes and small lesion proportions, many methods that do not use a region of interest (ROI) rely on multi-scale and multi-feature fusion models. These shortcomings increase the labor, financial, and computational costs of applying such models. Therefore, a deep location soft-embedding-based network with regional scoring (DLSEN-RS) is proposed. DLSEN-RS is an end-to-end mammography image classification method containing only one feature extractor; it relies on positional embedding (PE) and aggregation pooling (AP) modules to locate lesion areas without bounding boxes, transfer learning, or multi-stage training. In particular, the introduced PE and AP modules are versatile across various CNN models and improve tumor localization and diagnostic accuracy for mammography images. Experiments were conducted on the published INbreast and CBIS-DDSM datasets; compared with previous state-of-the-art mammographic image classification methods, DLSEN-RS performed satisfactorily.
7
Szymaszek P, Tyszka-Czochara M, Ortyl J. Application of Photoactive Compounds in Cancer Theranostics: Review on Recent Trends from Photoactive Chemistry to Artificial Intelligence. Molecules 2024; 29:3164. [PMID: 38999115] [PMCID: PMC11243723] [DOI: 10.3390/molecules29133164]
Abstract
According to the World Health Organization (WHO) and the International Agency for Research on Cancer (IARC), the number of cancer cases and deaths worldwide is predicted to nearly double by 2030, reaching 21.7 million cases and 13 million fatalities. The increase in cancer mortality is due to limitations in the diagnosis and treatment options that are currently available. The close relationship between diagnostics and medicine has made it possible for cancer patients to receive precise diagnoses and individualized care. This article discusses newly developed compounds with potential for photodynamic therapy and diagnostic applications, as well as those already in use. In addition, it discusses the use of artificial intelligence in the analysis of diagnostic images obtained using, among other things, theranostic agents.
Affiliation(s)
- Patryk Szymaszek
- Department of Biotechnology and Physical Chemistry, Faculty of Chemical Engineering and Technology, Cracow University of Technology, Warszawska 24, 31-155 Kraków, Poland
- Joanna Ortyl
- Department of Biotechnology and Physical Chemistry, Faculty of Chemical Engineering and Technology, Cracow University of Technology, Warszawska 24, 31-155 Kraków, Poland
- Photo HiTech Ltd., Bobrzyńskiego 14, 30-348 Kraków, Poland
- Photo4Chem Ltd., Juliusza Lea 114/416A-B, 31-133 Cracow, Poland
8
Bhalla D, Rangarajan K, Chandra T, Banerjee S, Arora C. Reproducibility and Explainability of Deep Learning in Mammography: A Systematic Review of Literature. Indian J Radiol Imaging 2024; 34:469-487. [PMID: 38912238] [PMCID: PMC11188703] [DOI: 10.1055/s-0043-1775737]
Abstract
Background Although abundant literature is available on the use of deep learning for breast cancer detection in mammography, its quality is widely variable. Purpose To evaluate published literature on breast cancer detection in mammography for reproducibility and to ascertain best practices for model design. Methods The PubMed and Scopus databases were searched to identify records that described the use of deep learning to detect lesions or to classify images as cancer or noncancer. A modified Quality Assessment of Diagnostic Accuracy Studies (mQUADAS-2) tool was developed for this review and applied to the included studies. Reported results (area under the receiver operating characteristic curve [AUC], sensitivity, specificity) were recorded. Results A total of 12,123 records were screened, of which 107 met the inclusion criteria. Training and test datasets, the key idea behind each model architecture, and results were recorded for these studies. Based on the mQUADAS-2 assessment, 103 studies had a high risk of bias due to nonrepresentative patient selection. Four studies were of adequate quality; three of these trained their own model and one used a commercial network, with ensemble models used in two. Common strategies for model training included patch classifiers, image classification networks (ResNet in 67%), and object detection networks (RetinaNet in 67%). The highest reported AUC was 0.927 ± 0.008 on a screening dataset, rising to 0.945 (0.919-0.968) on an enriched subset. Higher AUC (0.955) and specificity (98.5%) were reached when combined radiologist and artificial intelligence readings were used than with either alone. None of the studies provided explainability beyond localization accuracy, and none studied the interaction between AI and radiologists in a real-world setting. Conclusion While deep learning holds much promise for mammography interpretation, evaluation in reproducible clinical settings and explainable networks are urgently needed.
Affiliation(s)
- Deeksha Bhalla
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Krithika Rangarajan
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Tany Chandra
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Subhashis Banerjee
- Department of Computer Science and Engineering, Indian Institute of Technology, New Delhi, India
- Chetan Arora
- Department of Computer Science and Engineering, Indian Institute of Technology, New Delhi, India
9
Zhao SH, Ji XY, Yuan GZ, Cheng T, Liang HY, Liu SQ, Yang FY, Tang Y, Shi S. A Bibliometric Analysis of the Spatial Transcriptomics Literature from 2006 to 2023. Cell Mol Neurobiol 2024; 44:50. [PMID: 38856921] [PMCID: PMC11164738] [DOI: 10.1007/s10571-024-01484-3]
Abstract
In recent years, spatial transcriptomics (ST) has become a popular field of study and has shown great potential in medicine. However, few bibliometric analyses of this field exist. In this study, we therefore aimed to identify and analyze the frontiers and trends of this medical research field based on the available literature. A computerized search of the WoSCC (Web of Science Core Collection) database was performed for literature published from 2006 to 2023. Complete records of all literature and cited references were extracted and screened. Bibliometric analysis and visualization were performed using CiteSpace, VOSviewer, the Bibliometrix R package, and Scimago Graphica. A total of 1467 papers and reviews were included. The analysis revealed that ST publications and citations have shown a rapid upward trend over the last 3 years. Nature Communications and Nature were the most productive and most co-cited journals, respectively. In the global collaborative network, the United States has the most organizations and publications, followed closely by China and the United Kingdom. The author Joakim Lundeberg published the most cited paper, while Patrik L. Ståhl ranked first among co-cited authors. The hot topics in ST are tissue recognition, cancer, heterogeneity, immunotherapy, differentiation, and models. ST technologies have greatly contributed to in-depth research in medical fields such as oncology and neuroscience, opening up new possibilities for the diagnosis and treatment of diseases. Moreover, artificial intelligence and big data are driving further development in ST fields.
Affiliation(s)
- Shu-Han Zhao
- Guang'an Men Hospital, China Academy of Chinese Medical Sciences, No. 5 Beixiange Street, Xicheng District, Beijing, 100053, People's Republic of China
- Beijing University of Chinese Medicine, No. 11, Beisanhuan East Road, Chaoyang District, Beijing, 100029, People's Republic of China
- Xin-Yu Ji
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, No. 16 Nanxiaojie, Dongzhimennei Ave, Beijing, 100700, People's Republic of China
- Guo-Zhen Yuan
- Guang'an Men Hospital, China Academy of Chinese Medical Sciences, No. 5 Beixiange Street, Xicheng District, Beijing, 100053, People's Republic of China
- Tao Cheng
- Guang'an Men Hospital, China Academy of Chinese Medical Sciences, No. 5 Beixiange Street, Xicheng District, Beijing, 100053, People's Republic of China
- Hai-Yi Liang
- Beijing University of Chinese Medicine, No. 11, Beisanhuan East Road, Chaoyang District, Beijing, 100029, People's Republic of China
- Si-Qi Liu
- Beijing University of Chinese Medicine, No. 11, Beisanhuan East Road, Chaoyang District, Beijing, 100029, People's Republic of China
- Fu-Yi Yang
- Beijing University of Chinese Medicine, No. 11, Beisanhuan East Road, Chaoyang District, Beijing, 100029, People's Republic of China
- Yang Tang
- School of Chinese Medicine, Beijing University of Chinese Medicine, No. 11, Beisanhuan East Road, Chaoyang District, Beijing, 100029, People's Republic of China
- Shuai Shi
- Guang'an Men Hospital, China Academy of Chinese Medical Sciences, No. 5 Beixiange Street, Xicheng District, Beijing, 100053, People's Republic of China
10
Al-Karawi D, Al-Zaidi S, Helael KA, Obeidat N, Mouhsen AM, Ajam T, Alshalabi BA, Salman M, Ahmed MH. A Review of Artificial Intelligence in Breast Imaging. Tomography 2024; 10:705-726. [PMID: 38787015] [PMCID: PMC11125819] [DOI: 10.3390/tomography10050055]
Abstract
As artificial intelligence (AI) techniques become increasingly dominant, their prospective applications have extended to various medical fields, including in vitro diagnosis, intelligent rehabilitation, medical imaging, and prognosis. Breast cancer is a common malignancy that critically affects women's physical and mental health. Early breast cancer screening, through mammography, ultrasound, or magnetic resonance imaging (MRI), can substantially improve the prognosis of breast cancer patients. AI applications have shown excellent performance in various image recognition tasks, and their use in breast cancer screening has been explored in numerous studies. This paper introduces relevant AI techniques and their applications in medical imaging of the breast (mammography and ultrasound), specifically for identifying, segmenting, and classifying lesions; assessing breast cancer risk; and improving image quality. Focusing on medical imaging for breast cancer, this paper also reviews the related challenges and prospects for AI.
Affiliation(s)
- Dhurgham Al-Karawi
- Medical Analytica Ltd., 26a Castle Park Industrial Park, Flint CH6 5XA, UK
- Shakir Al-Zaidi
- Medical Analytica Ltd., 26a Castle Park Industrial Park, Flint CH6 5XA, UK
- Khaled Ahmad Helael
- Royal Medical Services, King Hussein Medical Hospital, King Abdullah II Ben Al-Hussein Street, Amman 11855, Jordan
- Naser Obeidat
- Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Abdulmajeed Mounzer Mouhsen
- Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Tarek Ajam
- Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Bashar A. Alshalabi
- Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Mohamed Salman
- Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Mohammed H. Ahmed
- School of Computing, Coventry University, 3 Gulson Road, Coventry CV1 5FB, UK
11
Chen L, Zhou Y, Xu S. ERetinaNet: An Efficient Neural Network Based on RetinaNet for Mammographic Breast Mass Detection. IEEE J Biomed Health Inform 2024; 28:2866-2878. [PMID: 38427552] [DOI: 10.1109/jbhi.2024.3371229]
Abstract
Mammography is an effective method for diagnosing breast diseases, and computer-aided detection (CAD) systems play an important role in detecting breast masses. However, low contrast and interference from surrounding tissues make mass detection challenging. In this paper, an efficient RetinaNet network named ERetinaNet is proposed to improve the accuracy and inference speed of mammographic breast mass detection. Efficient modules are designed and introduced into the network to facilitate the extraction of comprehensive features, while the structure of the network is simplified to improve inference speed. A Faster RepVGG (FRepVGG) architecture is first proposed as the backbone network, utilizing three effective strategies: 1) the multi-branch structure used during training enhances learning and is equivalently converted to a single-path structure during inference via a re-parameterization technique to accelerate detection; 2) an Extraction operation is proposed to condense the features of intermediate layers; and 3) an effective Multi-spectral Channel Attention (eMCA) module is added to the last layer of each stage, enabling the network to pay more attention to the target region. In addition, a Vision Transformer (ViT) is added to ERetinaNet, enabling it to learn global semantic information, and the detection head is simplified to make ERetinaNet more efficient. Experimental results show that, compared with the original RetinaNet, ERetinaNet improves the mean Average Precision (mAP) from 79.16% to 85.01% and significantly shortens the inference time. Moreover, the detection accuracy of ERetinaNet outperforms other strong object detection networks such as Faster R-CNN, SSD, YOLOv3, and YOLOv7.
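The re-parameterization strategy in 1) rests on a simple identity: because convolution is linear, parallel branches can be folded into one kernel after training. A 1-D toy version below shows a 3-tap conv branch, a 1-tap (1x1-style) conv branch, and an identity branch summing to the same output as a single fused 3-tap conv. This illustrates the RepVGG-style principle only, not ERetinaNet's actual code.

```python
# 1-D demonstration of training-time multi-branch vs. fused inference conv.

def conv3(signal, kernel):
    """'Same'-padded 1-D cross-correlation with a 3-tap kernel."""
    padded = [0.0] + list(signal) + [0.0]
    return [sum(k * padded[i + j] for j, k in enumerate(kernel))
            for i in range(len(signal))]

def multi_branch(signal, k3, k1):
    """Training-time forward pass: 3-tap conv + 1-tap conv + identity."""
    return [a + k1 * x + x for a, x in zip(conv3(signal, k3), signal)]

def fuse(k3, k1):
    """Fold the 1-tap and identity branches into the 3-tap kernel's center."""
    fused = list(k3)
    fused[1] += k1 + 1.0  # 1-tap kernel and identity both act at the center
    return fused
```

After fusion, `conv3(signal, fuse(k3, k1))` matches `multi_branch(signal, k3, k1)` exactly, which is why the converted single-path network is mathematically equivalent to the multi-branch one but cheaper at inference (in 2-D, the 1x1 kernel is zero-padded into the 3x3 center and batch-norm statistics are folded in the same way).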
12
Bai J, Jin A, Adams M, Yang C, Nabavi S. Unsupervised feature correlation model to predict breast abnormal variation maps in longitudinal mammograms. Comput Med Imaging Graph 2024; 113:102341. [PMID: 38277769 DOI: 10.1016/j.compmedimag.2024.102341]
Abstract
Breast cancer continues to be a significant cause of mortality among women globally, and timely identification and precise diagnosis of breast abnormalities are critical for improving patient outcomes and reducing mortality. To address the limitations of traditional screening methods, a novel unsupervised feature correlation network was developed to predict maps of breast abnormal variation from longitudinal 2D mammograms. The proposed model uses the reconstruction of current-year and prior-year mammograms to extract tissue from different areas and analyzes the differences between them to identify abnormal variations that may indicate the presence of cancer. The model incorporates a feature correlation module, an attention suppression gate, and a breast abnormality detection module, all working together to improve prediction accuracy. In addition to producing breast abnormal variation maps, the model distinguishes between normal and cancer mammograms, going beyond the state-of-the-art baseline models. The results show that the proposed model outperforms the baselines in terms of accuracy, sensitivity, specificity, Dice score, and cancer detection rate.
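The Dice score listed among the evaluation metrics is a standard overlap measure between a predicted mask and a reference mask; a minimal sketch (the tiny masks below are illustrative only):

```python
import numpy as np

def dice_score(pred, target):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
target = np.array([[1, 0, 0],
                   [0, 1, 1]])
# overlap of 2 pixels, 3 pixels in each mask -> Dice = 4/6
```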
Affiliation(s)
- Jun Bai: Department of Computer Science and Engineering, University of Connecticut, 371 Fairfield Way, Storrs, CT 06269, USA
- Annie Jin: University of Connecticut School of Medicine, 263 Farmington Ave., Farmington, CT 06030, USA
- Madison Adams: University of Connecticut School of Medicine, 263 Farmington Ave., Farmington, CT 06030, USA
- Clifford Yang: University of Connecticut School of Medicine, 263 Farmington Ave., Farmington, CT 06030, USA; Department of Radiology, UConn Health, 263 Farmington Ave., Farmington, CT 06030, USA
- Sheida Nabavi: Department of Computer Science and Engineering, University of Connecticut, 371 Fairfield Way, Storrs, CT 06269, USA
13
Jiang C, Jiang F, Xie Z, Sun J, Sun Y, Zhang M, Zhou J, Feng Q, Zhang G, Xing K, Mei H, Li J. Evaluation of automated detection of head position on lateral cephalometric radiographs based on deep learning techniques. Ann Anat 2023; 250:152114. [PMID: 37302431 DOI: 10.1016/j.aanat.2023.152114]
Abstract
BACKGROUND Lateral cephalometric radiographs (LCRs) are crucial to the diagnosis and treatment planning of maxillofacial diseases, but inappropriate head position, which reduces the accuracy of cephalometric measurements, can be challenging for clinicians to detect. This non-interventional retrospective study aims to develop two deep learning (DL) systems to detect head position on LCRs efficiently, accurately, and instantly. METHODS LCRs from 13 centers were reviewed; a total of 3,000 radiographs were collected and divided into 2,400 cases (80.0%) in the training set and 600 cases (20.0%) in the validation set. Another 300 cases were selected independently as the test set. All images were evaluated and landmarked by two board-certified orthodontists as references. Head position on the LCR was classified by the angle between the Frankfort Horizontal (FH) plane and the true horizontal (HOR) plane; a value within -3° to 3° was considered normal. A YOLOv3 model based on the traditional fixed-point method and a modified ResNet50 model featuring a non-linear mapping residual network were constructed and evaluated, and heatmaps were generated to visualize their performance. RESULTS The modified ResNet50 model showed a superior classification accuracy of 96.0%, higher than the 93.5% of the YOLOv3 model. The sensitivity/recall and specificity of the modified ResNet50 model were 0.959 and 0.969, and those of the YOLOv3 model were 0.846 and 0.916. The area under the curve (AUC) values of the modified ResNet50 and YOLOv3 models were 0.985 ± 0.04 and 0.942 ± 0.042, respectively. Saliency maps demonstrated that the modified ResNet50 model considered the alignment of the cervical vertebrae, not just the periorbital and perinasal areas as the YOLOv3 model did. CONCLUSIONS The modified ResNet50 model outperformed the YOLOv3 model in classifying head position on LCRs and shows promising potential for facilitating accurate diagnoses and optimal treatment plans.
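The classification rule in METHODS (FH-plane angle to the true horizontal within -3° to 3° is normal) can be sketched directly from two landmark coordinates. The porion/orbitale landmark pair and the coordinate convention below are assumptions for illustration of the geometric criterion, not the published DL pipeline:

```python
import math

def head_position(porion, orbitale, tol_deg=3.0):
    """Classify head position from the Frankfort Horizontal (FH) plane.

    porion and orbitale are (x, y) landmark coordinates; the FH plane is
    the line through them, and an angle to the true horizontal within
    +/- tol_deg counts as normal.
    """
    dx = orbitale[0] - porion[0]
    dy = orbitale[1] - porion[1]
    angle = math.degrees(math.atan2(dy, dx))
    return angle, ("normal" if -tol_deg <= angle <= tol_deg else "deviated")

angle, label = head_position((0.0, 0.0), (80.0, 2.0))  # a slight tilt, ~1.4 degrees
```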
Affiliation(s)
- Chen Jiang: State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Fulin Jiang: Chongqing University Three Gorges Hospital, Chongqing 404031, China
- Zhuokai Xie: University of Electronic Science and Technology of China, Chengdu 611731, China
- Jikui Sun: State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Yan Sun: University of Electronic Science and Technology of China, Chengdu 611731, China
- Mei Zhang: State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Jiawei Zhou: State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Qingchen Feng: State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Guanning Zhang: State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Ke Xing: State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Hongxiang Mei: State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
- Juan Li: State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu 610041, China
14
Xu M, Chen Z, Zheng J, Zhao Q, Yuan Z. Artificial Intelligence-Aided Optical Imaging for Cancer Theranostics. Semin Cancer Biol 2023:S1044-579X(23)00094-9. [PMID: 37302519 DOI: 10.1016/j.semcancer.2023.06.003]
Abstract
The use of artificial intelligence (AI) to assist biomedical imaging has demonstrated high accuracy and efficiency in medical decision-making for individualized cancer medicine. In particular, optical imaging methods can visualize both structural and functional information of tumor tissues with high contrast, at low cost, and noninvasively. However, no systematic work has inspected recent advances in AI-aided optical imaging for cancer theranostics. In this review, we demonstrate how AI can guide optical imaging methods to improve accuracy in tumor detection, automated analysis and prediction of histopathological sections, monitoring during treatment, and prognosis, using computer vision, deep learning, and natural language processing. The optical imaging techniques involved mainly consist of various tomography and microscopy methods, such as optical endoscopy, optical coherence tomography, photoacoustic imaging, diffuse optical tomography, optical microscopy, Raman imaging, and fluorescence imaging. Existing problems, possible challenges, and future prospects for AI-aided optical imaging protocols for cancer theranostics are also discussed. The present work is expected to open a new avenue for precision oncology using AI and optical imaging tools.
Affiliation(s)
- Mengze Xu: Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Zhuhai, China; Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China; Centre for Cognitive and Brain Sciences, University of Macau, Macau SAR, China
- Zhiyi Chen: Institute of Medical Imaging, Hengyang Medical School, University of South China, Hengyang, China
- Junxiao Zheng: Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China; Centre for Cognitive and Brain Sciences, University of Macau, Macau SAR, China
- Qi Zhao: Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China
- Zhen Yuan: Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China; Centre for Cognitive and Brain Sciences, University of Macau, Macau SAR, China
15
Chen X, Guo J, Ye J, Zhang M, Liang Y. Detection of Proximal Caries Lesions on Bitewing Radiographs Using Deep Learning Method. Caries Res 2023; 56:455-463. [PMID: 36215971 PMCID: PMC9932834 DOI: 10.1159/000527418]
Abstract
This study aimed to evaluate the validity of a deep learning-based convolutional neural network (CNN) for detecting proximal caries lesions on bitewing radiographs. A total of 978 bitewing radiographs, comprising 10,899 proximal surfaces, were evaluated by two endodontists and a radiologist; 2,719 surfaces were diagnosed and annotated with proximal caries and 8,180 surfaces were sound. The data were randomly divided into two datasets, with 818 bitewings in the training and validation dataset and 160 bitewings in the test dataset. Each annotation in the test set was then classified into five stages according to the extent of the lesion (E1, E2, D1, D2, D3). Faster R-CNN, a deep learning-based object detection method, was trained on the training and validation dataset and then assessed on the test dataset. Diagnostic accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and the receiver operating characteristic curve were calculated, and the network's performance, overall and at different lesion stages, was compared with that of postgraduate students on the test dataset. A total of 388 carious lesions and 1,435 sound surfaces were correctly identified by the neural network, for an accuracy of 0.87; 27.6% of lesions went undetected, and 7% of sound surfaces were misdiagnosed. The sensitivity, specificity, PPV, and NPV of the neural network were 0.72, 0.93, 0.77, and 0.91, respectively. In contrast, 52.8% of lesions went undetected by the students, yielding a sensitivity of only 0.47. The students' F1-score was 0.57 (with an accuracy of 0.82), while the F1-score of the network was 0.74. A significant difference in sensitivity was found between the model and the postgraduate students when detecting different stages of lesions (p < 0.05). For early lesions limited to enamel and the outer third of dentin, the neural network had sensitivities at or above 0.65, while the students showed sensitivities below 0.40. From these results, we conclude that the CNN may serve as an assistant in detecting proximal caries on bitewings.
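All of the reported metrics derive from a single 2x2 confusion matrix. The sketch below computes them; the exact FN and FP counts are an assumption, back-solved from the reported figures (388 true positives, 1,435 true negatives, a 27.6% miss rate implying roughly 148 FN, and a 7% false-positive rate on sound surfaces implying roughly 108 FP), and approximately reproduce the published sensitivity, specificity, and NPV:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard confusion-matrix metrics, as reported in the study."""
    sensitivity = tp / (tp + fn)                 # share of lesions found (recall)
    specificity = tn / (tn + fp)                 # share of sound surfaces kept
    ppv = tp / (tp + fp)                         # positive predictive value
    npv = tn / (tn + fn)                         # negative predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv, "accuracy": accuracy, "f1": f1}

# Counts back-solved from the reported rates (assumed, see note above).
m = diagnostic_metrics(tp=388, fp=108, fn=148, tn=1435)
```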
Affiliation(s)
- Xiaotong Chen: Department of Cariology and Endodontology, Peking University School and Hospital of Stomatology and National Clinical Research Center for Oral Diseases and National Engineering Research of Oral Biomaterials and Digital Medical Devices and Beijing Key Laboratory of Digital Stomatology, Beijing, China
- Jiachang Guo: Intelligent Healthcare Unit, Beijing Baidu Netcom Science Technology Company Limited, Beijing, China
- Jiaxue Ye: Department of Cariology and Endodontology, Peking University School and Hospital of Stomatology and National Clinical Research Center for Oral Diseases and National Engineering Research of Oral Biomaterials and Digital Medical Devices and Beijing Key Laboratory of Digital Stomatology, Beijing, China
- Mingming Zhang: Department of Cariology and Endodontology, Peking University School and Hospital of Stomatology and National Clinical Research Center for Oral Diseases and National Engineering Research of Oral Biomaterials and Digital Medical Devices and Beijing Key Laboratory of Digital Stomatology, Beijing, China
- Yuhong Liang: Department of Cariology and Endodontology, Peking University School and Hospital of Stomatology and National Clinical Research Center for Oral Diseases and National Engineering Research of Oral Biomaterials and Digital Medical Devices and Beijing Key Laboratory of Digital Stomatology, Beijing, China; Department of Stomatology, Peking University International Hospital, Beijing, China
16
Artificial Intelligence (AI) in Breast Imaging: A Scientometric Umbrella Review. Diagnostics (Basel) 2022; 12:3111. [PMID: 36553119 PMCID: PMC9777253 DOI: 10.3390/diagnostics12123111]
Abstract
Artificial intelligence (AI), a disruptive advancement across a wide spectrum of applications, has continued to gain momentum over the past decades. Within breast imaging, AI, especially machine learning and deep learning honed with extensive cross-data/case referencing, has found great utility across four facets: screening and detection, diagnosis, disease monitoring, and data management as a whole. Over the years, breast cancer has topped the cancer cumulative risk ranking for women across the six continents, existing in variegated forms and presenting a complicated context for medical decisions. Given the ever-increasing demand for quality healthcare, contemporary AI is envisioned to make great strides in clinical data management and perception, with the capability to detect findings of indeterminate significance, predict prognosis, and correlate available data into meaningful clinical endpoints. Here, the authors captured the review works of the past decades focusing on AI in breast imaging and systematized the included works into one usable document, termed an umbrella review, aiming to provide a panoramic view of how AI is poised to enhance breast imaging procedures. Evidence-based scientometric analysis was performed in accordance with the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guideline, resulting in 71 included review works. The study synthesizes, collates, and correlates the included works, identifying their patterns, trends, quality, and types as captured by the structured search strategy, and is intended to serve as a "one-stop center" providing a holistic bird's-eye view for readers, from newcomers to existing researchers and relevant stakeholders.
17
Fan J, Qin B, Gu F, Wang Z, Liu X, Zhu Q, Yang J. Automatic Detection of Horner Syndrome by Using Facial Images. J Healthc Eng 2022; 2022:8670350. [PMID: 36451761 PMCID: PMC9705100 DOI: 10.1155/2022/8670350]
Abstract
Horner syndrome is a clinical constellation of miosis, ptosis, and facial anhidrosis. It is important as a warning sign of damage to the oculosympathetic chain, potentially with serious causes. However, the diagnosis of Horner syndrome is operator-dependent and subjective. This study presents an objective method that can recognize the Horner sign from facial photos and verifies its accuracy. A total of 173 images were collected, annotated, and divided into training and testing groups. Two types of classifiers were trained: a two-stage classifier and a one-stage classifier. The two-stage method utilized the MediaPipe face mesh to estimate the coordinates of facial landmarks and generate geometric features from them, on which ten machine learning classifiers were then trained. The one-stage classifier was trained with YOLOv5, one of the latest detection algorithms. Performance was evaluated by diagnostic accuracy, sensitivity, and specificity. For the two-stage model, MediaPipe successfully detected 92.2% of images in the testing group, and the Decision Tree classifier presented the highest accuracy (0.790), with sensitivity and specificity of 0.432 and 0.970, respectively. For the one-stage classifier, the accuracy, sensitivity, and specificity were 0.65, 0.51, and 0.84, respectively. These results demonstrate the feasibility of automatic detection of Horner syndrome from images. Such a tool could act as a second advisor for neurologists, reducing subjectivity and increasing accuracy in diagnosing Horner syndrome.
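One kind of geometric feature a two-stage approach like this could derive from face-mesh landmarks is palpebral fissure asymmetry, since unilateral lid droop (ptosis) is a cardinal sign. The specific landmark points and the ratio below are hypothetical illustrations, not the study's actual feature set:

```python
import math

def fissure_height(upper_lid, lower_lid):
    """Palpebral fissure height: distance between (x, y) eyelid landmarks."""
    return math.dist(upper_lid, lower_lid)

def ptosis_ratio(left_upper, left_lower, right_upper, right_lower):
    """Smaller-to-larger fissure ratio across the two eyes.

    1.0 means symmetric eyes; values well below 1.0 flag the unilateral
    lid droop seen in Horner syndrome.
    """
    h_left = fissure_height(left_upper, left_lower)
    h_right = fissure_height(right_upper, right_lower)
    return min(h_left, h_right) / max(h_left, h_right)

# Hypothetical landmarks: left fissure 12 px tall, right only 8 px.
ratio = ptosis_ratio((100, 50), (100, 62), (200, 50), (200, 58))
```

In a full pipeline, features like this would be stacked with pupil-size asymmetry and fed to the downstream classifiers.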
Affiliation(s)
- Jingyuan Fan: Department of Microsurgery Orthopedic Trauma and Hand Surgery, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou 510080, China
- Bengang Qin: Department of Microsurgery Orthopedic Trauma and Hand Surgery, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou 510080, China; Guangdong Province Engineering Laboratory for Soft Tissue Biofabrication, Guangzhou 510080, China; Guangdong Provincial Key Laboratory for Orthopedics and Traumatology, Guangzhou 510080, China
- Fanbin Gu: Department of Microsurgery Orthopedic Trauma and Hand Surgery, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou 510080, China
- Zhaoyang Wang: Department of Microsurgery Orthopedic Trauma and Hand Surgery, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou 510080, China
- Xiaolin Liu: Department of Microsurgery Orthopedic Trauma and Hand Surgery, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou 510080, China; Guangdong Province Engineering Laboratory for Soft Tissue Biofabrication, Guangzhou 510080, China; Guangdong Provincial Key Laboratory for Orthopedics and Traumatology, Guangzhou 510080, China
- Qingtang Zhu: Department of Microsurgery Orthopedic Trauma and Hand Surgery, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou 510080, China; Guangdong Province Engineering Laboratory for Soft Tissue Biofabrication, Guangzhou 510080, China; Guangdong Provincial Key Laboratory for Orthopedics and Traumatology, Guangzhou 510080, China
- Jiantao Yang: Department of Microsurgery Orthopedic Trauma and Hand Surgery, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou 510080, China; Guangdong Province Engineering Laboratory for Soft Tissue Biofabrication, Guangzhou 510080, China; Guangdong Provincial Key Laboratory for Orthopedics and Traumatology, Guangzhou 510080, China
18
Smolen JA, Wooley KL. Fluorescence lifetime image microscopy prediction with convolutional neural networks for cell detection and classification in tissues. PNAS Nexus 2022; 1:pgac235. [PMID: 36712353 PMCID: PMC9802238 DOI: 10.1093/pnasnexus/pgac235]
Abstract
Convolutional neural networks (CNNs) and other deep-learning models have proven to be transformative tools for the automated analysis of microscopy images, particularly in the domain of cellular and tissue imaging. These computer-vision models have primarily been applied to traditional microscopy imaging modalities (e.g. brightfield and fluorescence), likely due to the availability of large datasets in these regimes. However, more advanced microscopy imaging techniques could potentially allow for improved model performance in various computational histopathology tasks. In this work, we demonstrate that CNNs can achieve high accuracy in cell detection and classification without large amounts of data when applied to histology images acquired by fluorescence lifetime imaging microscopy (FLIM). This accuracy is higher than what would be achieved with regular single- or dual-channel fluorescence images under the same settings, particularly for CNNs pretrained on publicly available fluorescent-cell or general image datasets. Additionally, FLIM images could be predicted from the fluorescence image data alone by using a dense U-Net CNN model trained on a subset of ground-truth FLIM images. These U-Net-generated FLIM images demonstrated high similarity to ground truth and, when used as input to a variety of commonly used CNNs, improved accuracy in cell detection and classification over fluorescence alone. This improved accuracy was maintained even when the FLIM images were generated by a U-Net trained on only a few example FLIM images.
Affiliation(s)
- Justin A Smolen: Departments of Chemistry, Chemical Engineering, and Materials Science and Engineering, Texas A&M University, College Station, TX 77842, USA
- Karen L Wooley: Departments of Chemistry, Chemical Engineering, and Materials Science and Engineering, Texas A&M University, College Station, TX 77842, USA
19
Carreras J, Roncador G, Hamoudi R. Artificial Intelligence Predicted Overall Survival and Classified Mature B-Cell Neoplasms Based on Immuno-Oncology and Immune Checkpoint Panels. Cancers (Basel) 2022; 14:5318. [PMID: 36358737 PMCID: PMC9657332 DOI: 10.3390/cancers14215318]
Abstract
Artificial intelligence (AI) can identify actionable oncology biomarkers. This research integrates our previous analyses of non-Hodgkin lymphoma. We used gene expression and immunohistochemical data, focusing on the immune checkpoint, and added a new analysis of macrophages, including 3D rendering. The AI comprised machine learning (C5, Bayesian network, C&R, CHAID, discriminant analysis, KNN, logistic regression, LSVM, Quest, random forest, random trees, SVM, tree-AS, and XGBoost linear and tree) and artificial neural networks (multilayer perceptron and radial basis function). The series included chronic lymphocytic leukemia, mantle cell lymphoma, follicular lymphoma, Burkitt, diffuse large B-cell lymphoma, marginal zone lymphoma, and multiple myeloma, as well as acute myeloid leukemia and pan-cancer series. AI classified lymphoma subtypes and predicted overall survival accurately. Oncogenes and tumor suppressor genes were highlighted (MYC, BCL2, and TP53), along with immune microenvironment markers of tumor-associated macrophages (M2-like TAMs), T-cells and regulatory T lymphocytes (Tregs) (CD68, CD163, MARCO, CSF1R, CSF1, PD-L1/CD274, SIRPA, CD85A/LILRB3, CD47, IL10, TNFRSF14/HVEM, TNFAIP8, IKAROS, STAT3, NFKB, MAPK, PD-1/PDCD1, BTLA, and FOXP3), apoptosis (BCL2, CASP3, CASP8, PARP, and pathway-related MDM2, E2F1, CDK6, MYB, and LMO2), and metabolism (ENO3, GGA3). In conclusion, AI with immuno-oncology markers is a powerful predictive tool. Additionally, a review of recent literature was made.
Affiliation(s)
- Joaquim Carreras: Department of Pathology, School of Medicine, Tokai University, 143 Shimokasuya, Isehara 259-1193, Kanagawa, Japan
- Giovanna Roncador: Monoclonal Antibodies Unit, Spanish National Cancer Research Center (Centro Nacional de Investigaciones Oncologicas, CNIO), Melchor Fernandez Almagro 3, 28029 Madrid, Spain
- Rifat Hamoudi: Department of Clinical Sciences, College of Medicine, University of Sharjah, Sharjah P.O. Box 27272, United Arab Emirates; Division of Surgery and Interventional Science, University College London, Gower Street, London WC1E 6BT, UK
20
Malliori A, Pallikarakis N. Breast cancer detection using machine learning in digital mammography and breast tomosynthesis: A systematic review. Health Technol 2022. [DOI: 10.1007/s12553-022-00693-4]
21
Basurto-Hurtado JA, Cruz-Albarran IA, Toledano-Ayala M, Ibarra-Manzano MA, Morales-Hernandez LA, Perez-Ramirez CA. Diagnostic Strategies for Breast Cancer Detection: From Image Generation to Classification Strategies Using Artificial Intelligence Algorithms. Cancers (Basel) 2022; 14:3442. [PMID: 35884503 PMCID: PMC9322973 DOI: 10.3390/cancers14143442]
Abstract
Breast cancer is one of the main causes of death for women worldwide, accounting for 16% of diagnosed malignant lesions. It is therefore of paramount importance to diagnose these lesions at the earliest possible stage, in order to maximize the chances of survival. While several works have presented selected topics in this area, none offers a complete panorama, that is, from image generation to its interpretation. This work presents a comprehensive state-of-the-art review of the image generation and processing techniques used to detect breast cancer, in which potential candidate techniques are presented and discussed. Novel methodologies should consider the adroit integration of artificial intelligence concepts with categorical data to generate modern alternatives that can deliver the accuracy, precision, and reliability required to mitigate misclassifications.
Affiliation(s)
- Jesus A. Basurto-Hurtado: C.A. Mecatrónica, Facultad de Ingeniería, Campus San Juan del Río, Universidad Autónoma de Querétaro, Rio Moctezuma 249, San Cayetano, San Juan del Rio 76807, Mexico; Laboratorio de Dispositivos Médicos, Facultad de Ingeniería, Universidad Autónoma de Querétaro, Carretera a Chichimequillas S/N, Ejido Bolaños, Santiago de Querétaro 76140, Mexico
- Irving A. Cruz-Albarran: C.A. Mecatrónica, Facultad de Ingeniería, Campus San Juan del Río, Universidad Autónoma de Querétaro, Rio Moctezuma 249, San Cayetano, San Juan del Rio 76807, Mexico; Laboratorio de Dispositivos Médicos, Facultad de Ingeniería, Universidad Autónoma de Querétaro, Carretera a Chichimequillas S/N, Ejido Bolaños, Santiago de Querétaro 76140, Mexico
- Manuel Toledano-Ayala: División de Investigación y Posgrado de la Facultad de Ingeniería (DIPFI), Universidad Autónoma de Querétaro, Cerro de las Campanas S/N Las Campanas, Santiago de Querétaro 76010, Mexico
- Mario Alberto Ibarra-Manzano: Laboratorio de Procesamiento Digital de Señales, Departamento de Ingeniería Electrónica, Division de Ingenierias Campus Irapuato-Salamanca (DICIS), Universidad de Guanajuato, Carretera Salamanca-Valle de Santiago KM. 3.5 + 1.8 Km., Salamanca 36885, Mexico
- Luis A. Morales-Hernandez: C.A. Mecatrónica, Facultad de Ingeniería, Campus San Juan del Río, Universidad Autónoma de Querétaro, Rio Moctezuma 249, San Cayetano, San Juan del Rio 76807, Mexico
- Carlos A. Perez-Ramirez: Laboratorio de Dispositivos Médicos, Facultad de Ingeniería, Universidad Autónoma de Querétaro, Carretera a Chichimequillas S/N, Ejido Bolaños, Santiago de Querétaro 76140, Mexico
22
Torkzadehmahani R, Nasirigerdeh R, Blumenthal DB, Kacprowski T, List M, Matschinske J, Spaeth J, Wenke NK, Baumbach J. Privacy-Preserving Artificial Intelligence Techniques in Biomedicine. Methods Inf Med 2022; 61:e12-e27. [PMID: 35062032 PMCID: PMC9246509 DOI: 10.1055/s-0041-1740630]
Abstract
BACKGROUND Artificial intelligence (AI) has been successfully applied in numerous scientific domains. In biomedicine, AI has already shown tremendous potential, e.g., in the interpretation of next-generation sequencing data and in the design of clinical decision support systems. OBJECTIVES However, training an AI model on sensitive data raises privacy concerns for individual participants. For example, summary statistics of a genome-wide association study can be used to determine the presence or absence of an individual in a given dataset. This considerable privacy risk has led to restrictions on access to genomic and other biomedical data, which is detrimental to collaborative research and impedes scientific progress. Hence, there has been a substantial effort to develop AI methods that can learn from sensitive data while protecting individuals' privacy. METHOD This paper provides a structured overview of recent advances in privacy-preserving AI techniques in biomedicine. It places the most important state-of-the-art approaches within a unified taxonomy and discusses their strengths, limitations, and open problems. CONCLUSION As the most promising direction, we suggest combining federated machine learning, as a more scalable approach, with additional privacy-preserving techniques. This would merge their advantages to provide privacy guarantees in a distributed way for biomedical applications. Nonetheless, more research is necessary, as hybrid approaches pose new challenges such as additional network or computation overhead.
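Federated machine learning, the direction this conclusion favors, keeps raw data on each client and shares only model parameters with a central server. A minimal FedAvg-style sketch on a synthetic least-squares task follows; the learning rate, epoch counts, and toy data are assumptions for illustration, not a procedure prescribed by the paper:

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient descent on least-squares loss.
    Only the updated weights leave the client, never the raw (X, y)."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(w_global, clients):
    """FedAvg server step: average client models, weighted by dataset size."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    updates = np.stack([local_update(w_global, X, y) for X, y in clients])
    return (updates * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Synthetic task: three clients whose data share one underlying linear model.
rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.standard_normal((50, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(30):
    w = federated_round(w, clients)
# w now approximates w_true without any client revealing its data.
```

The hybrid approaches the paper discusses would add, e.g., differential-privacy noise or secure aggregation on top of this exchange.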
Affiliation(s)
- Reihaneh Torkzadehmahani
- Institute for Artificial Intelligence in Medicine and Healthcare, Technical University of Munich, Munich, Germany
| | - Reza Nasirigerdeh
- Institute for Artificial Intelligence in Medicine and Healthcare, Technical University of Munich, Munich, Germany
- Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany
| | - David B. Blumenthal
- Department of Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander University Erlangen-Nürnberg (FAU), Erlangen, Germany
| | - Tim Kacprowski
- Division of Data Science in Biomedicine, Peter L. Reichertz Institute for Medical Informatics of TU Braunschweig and Medical School Hannover, Braunschweig, Germany
- Braunschweig Integrated Centre of Systems Biology (BRICS), TU Braunschweig, Braunschweig, Germany
| | - Markus List
- Chair of Experimental Bioinformatics, Technical University of Munich, Munich, Germany
| | - Julian Matschinske
- E.U. Horizon2020 FeatureCloud Project Consortium
- Chair of Computational Systems Biology, University of Hamburg, Hamburg, Germany
| | - Julian Spaeth
- E.U. Horizon2020 FeatureCloud Project Consortium
- Chair of Computational Systems Biology, University of Hamburg, Hamburg, Germany
| | - Nina Kerstin Wenke
- E.U. Horizon2020 FeatureCloud Project Consortium
- Chair of Computational Systems Biology, University of Hamburg, Hamburg, Germany
| | - Jan Baumbach
- E.U. Horizon2020 FeatureCloud Project Consortium
- Chair of Computational Systems Biology, University of Hamburg, Hamburg, Germany
- Institute of Mathematics and Computer Science, University of Southern Denmark, Odense, Denmark
| |
Collapse
|
23
|
Ha R, Jairam MP. A review of artificial intelligence in mammography. Clin Imaging 2022; 88:36-44. [DOI: 10.1016/j.clinimag.2022.05.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2021] [Revised: 04/28/2022] [Accepted: 05/12/2022] [Indexed: 11/16/2022]
|
24
|
Oyelade ON, Ezugwu AE. A novel wavelet decomposition and transformation convolutional neural network with data augmentation for breast cancer detection using digital mammogram. Sci Rep 2022; 12:5913. [PMID: 35396565 PMCID: PMC8993803 DOI: 10.1038/s41598-022-09905-3] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Accepted: 03/23/2022] [Indexed: 12/23/2022] Open
Abstract
Research in deep learning (DL) has continued to provide significant solutions to the challenges of detecting breast cancer in digital images. Image preprocessing methods and architecture enhancement techniques have been proposed to improve the performance of DL models such as convolutional neural networks (CNNs). For instance, the wavelet decomposition function has been used for image feature extraction in CNNs due to its strong compactness. Additionally, CNN architectures have been optimized to improve the process of feature detection to support the classification process. However, these approaches still lack completeness, as no mechanism exists to discriminate between features to be enhanced and features to be eliminated. Moreover, no studies have used the wavelet transform to restructure CNN architectures to improve the detection of discriminant features in digital mammography for increased classification accuracy. Therefore, this study addresses these problems through a wavelet-CNN-wavelet architecture. The approach presented in this paper combines seam carving and wavelet decomposition algorithms for image preprocessing to find discriminative features. These features are passed as input to a CNN-wavelet structure that uses the new wavelet transformation function proposed in this paper. The CNN-wavelet architecture applies layers of wavelet transform and reduced feature maps to obtain features suggestive of abnormalities that support the classification process. In addition, image samples with architectural distortion were synthesized using a generative adversarial network (GAN) model to address the insufficiency of the training datasets. The proposed method was evaluated on the CBIS-DDSM and MIAS datasets. The results showed that the new method improved classification accuracy and lowered the loss function values. The study's findings demonstrate the usefulness of the wavelet transform function in restructuring CNN architectures for performance enhancement in detecting abnormalities leading to breast cancer in digital mammography.
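The wavelet decomposition step underlying such architectures can be illustrated with a single level of a 2-D Haar transform. This is a generic numpy sketch of the standard decomposition into approximation and detail sub-bands, not the new transformation function proposed in the paper.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet decomposition.
    Returns the approximation (LL) and detail (LH, HL, HH) sub-bands,
    each half the input size along both axes."""
    img = img.astype(float)
    # pairwise averages/differences along rows
    lo = (img[:, ::2] + img[:, 1::2]) / 2.0
    hi = (img[:, ::2] - img[:, 1::2]) / 2.0
    # then along columns
    ll = (lo[::2] + lo[1::2]) / 2.0   # smooth approximation
    lh = (lo[::2] - lo[1::2]) / 2.0   # horizontal detail
    hl = (hi[::2] + hi[1::2]) / 2.0   # vertical detail
    hh = (hi[::2] - hi[1::2]) / 2.0   # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16.0).reshape(4, 4)   # stand-in for a mammogram patch
ll, lh, hl, hh = haar_dwt2(img)
```

The LL band is what a CNN layer would typically consume as a compact, denoised version of the input, while the detail bands localize edges and texture.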
Affiliation(s)
- Olaide N Oyelade: School of Mathematics, Statistics, and Computer Science, University of KwaZulu-Natal, King Edward Avenue, Pietermaritzburg Campus, Pietermaritzburg, 3201, KwaZulu-Natal, South Africa; Department of Computer Science, Ahmadu Bello University, Zaria, Nigeria
- Absalom E Ezugwu: School of Mathematics, Statistics, and Computer Science, University of KwaZulu-Natal, King Edward Avenue, Pietermaritzburg Campus, Pietermaritzburg, 3201, KwaZulu-Natal, South Africa
|
25
|
Bai J, Jin A, Wang T, Yang C, Nabavi S. Feature fusion siamese network for breast cancer detection comparing current and prior mammograms. Med Phys 2022; 49:3654-3669. [PMID: 35271746 DOI: 10.1002/mp.15598] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2021] [Revised: 02/08/2022] [Accepted: 03/01/2022] [Indexed: 11/09/2022] Open
Abstract
PURPOSE Automatic detection of very small and non-mass abnormalities from mammogram images has remained challenging. In clinical practice for each patient, radiologists commonly not only screen the mammogram images obtained during the examination, but also compare them with previous mammogram images to make a clinical decision. To design an AI system to mimic radiologists for better cancer detection, in this work we proposed an end-to-end enhanced Siamese convolutional neural network to detect breast cancer using previous year and current year mammogram images. METHODS The proposed Siamese based network uses high resolution mammogram images and fuses features of pairs of previous year and current year mammogram images to predict cancer probabilities. The proposed approach is developed based on the concept of one-shot learning that learns the abnormal differences between current and prior images instead of abnormal objects, and as a result can perform better with small sample size data sets. We developed two variants of the proposed network. In the first model, to fuse the features of current and previous images, we designed an enhanced distance learning network that considers not only the overall distance, but also the pixel-wise distances between the features. In the other model, we concatenated the features of current and previous images to fuse them. RESULTS We compared the performance of the proposed models with those of some baseline models that use current images only (ResNet and VGG) and also use current and prior images (LSTM and vanilla Siamese) in terms of accuracy, sensitivity, precision, F1 score and AUC. Results show that the proposed models outperform the baseline models and the proposed model with the distance learning network performs the best (accuracy: 0.92, sensitivity: 0.93, precision: 0.91, specificity: 0.91, F1: 0.92 and AUC: 0.95). 
CONCLUSIONS Integrating prior mammogram images improves automatic cancer classification, especially for very small and non-mass abnormalities. For classification models that integrate current and prior mammogram images, an enhanced and effective distance learning network can further improve performance.
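The distance-learning fusion idea described above, comparing feature maps from the current and prior exam both globally and location by location, can be sketched as follows. Shapes, data, and function names are illustrative assumptions; the paper's actual network learns these comparisons end to end.

```python
import numpy as np

def fuse_distances(feat_current, feat_prior):
    """Fuse two feature maps of shape (C, H, W) from current and prior
    mammograms: return a pixel-wise distance map plus an overall distance,
    mimicking a distance-learning fusion head."""
    diff = feat_current - feat_prior
    pixelwise = np.linalg.norm(diff, axis=0)   # (H, W): distance at each spatial location
    overall = float(np.linalg.norm(diff))      # scalar: distance over the whole map
    return pixelwise, overall

rng = np.random.default_rng(1)
prior = rng.normal(size=(8, 16, 16))
current = prior.copy()
current[:, 4:8, 4:8] += 2.0                    # simulate a newly appeared abnormality

pix, overall = fuse_distances(current, prior)
```

The pixel-wise map lights up only where the two exams differ, which is exactly the signal a classifier needs for small interval changes.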
Affiliation(s)
- Jun Bai, Annie Jin, Tianyu Wang, Clifford Yang, Sheida Nabavi: Department of Computer Science and Engineering, University of Connecticut, 371 Fairfield Way, Storrs, CT, 06269, USA; University of Connecticut School of Medicine, 263 Farmington Ave, Farmington, CT, 06030, USA; Department of Radiology, UConn Health, 263 Farmington Ave, Farmington, CT, 06030, USA
|
26
|
Mahmood T, Li J, Pei Y, Akhtar F, Rehman MU, Wasti SH. Breast lesions classifications of mammographic images using a deep convolutional neural network-based approach. PLoS One 2022; 17:e0263126. [PMID: 35085352 PMCID: PMC8794221 DOI: 10.1371/journal.pone.0263126] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2021] [Accepted: 01/12/2022] [Indexed: 11/18/2022] Open
Abstract
Breast cancer is one of the deadliest illnesses, with a high fatality rate among women globally. Breast cancer detection needs accurate mammography interpretation and analysis, which is challenging for radiologists owing to the intricate anatomy of the breast and low image quality. Advances in deep learning-based models have significantly improved breast lesion detection, localization, risk assessment, and categorization. This study proposes a novel deep learning-based convolutional neural network (ConvNet) that significantly reduces human error in diagnosing breast malignancy tissues. The methodology is most effective in eliciting task-specific features, as feature learning is coupled with classification to achieve higher performance in automatically classifying the suspicious regions in mammograms as benign or malignant. To evaluate the model's validity, 322 raw mammogram images from the Mammographic Image Analysis Society (MIAS) dataset and 580 from a private dataset were obtained to extract in-depth features, the intensity of information, and the high likelihood of malignancy. Both datasets were substantially enhanced through preprocessing, synthetic data augmentation, and transfer learning techniques to capture the distinctive characteristics of breast tumors. The experimental findings indicate that the proposed approach achieved remarkable training accuracy of 0.98, test accuracy of 0.97, high sensitivity of 0.99, and an AUC of 0.99 in classifying breast masses on mammograms. The developed model achieved promising performance that helps the clinician in the speedy computation of mammography, breast mass diagnosis, treatment planning, and follow-up of disease progression. Moreover, it has immense potential over retrospective approaches in consistent feature extraction and precise lesion classification.
Affiliation(s)
- Tariq Mahmood: Faculty of Information Technology, Beijing University of Technology, Beijing, China; Division of Science and Technology, Department of Information Sciences, University of Education, Lahore, Pakistan
- Jianqiang Li: Faculty of Information Technology, Beijing University of Technology, Beijing, China; Beijing Engineering Research Center for IoT Software and Systems, Beijing, China
- Yan Pei: Computer Science Division, University of Aizu, Aizuwakamatsu, Fukushima, Japan
- Faheem Akhtar: Department of Computer Science, Sukkur IBA University, Sukkur, Pakistan
- Mujeeb Ur Rehman: Radiology Department, Continental Medical College and Hayat Memorial Teaching Hospital, Lahore, Pakistan
- Shahbaz Hassan Wasti: Division of Science and Technology, Department of Information Sciences, University of Education, Lahore, Pakistan
|
27
|
Overview of Artificial Intelligence in Medicine. Artif Intell Med 2022. [DOI: 10.1007/978-981-19-1223-8_2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
28
|
AlGhamdi M, Abdel-Mottaleb M. DV-DCNN: Dual-view deep convolutional neural network for matching detected masses in mammograms. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 207:106152. [PMID: 34058629 DOI: 10.1016/j.cmpb.2021.106152] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/20/2020] [Accepted: 04/30/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE Mammography is an X-ray imaging technique used for breast cancer screening. Each breast is usually screened at two different angles, generating two views known as mediolateral oblique (MLO) and craniocaudal (CC), which are clinically used by radiologists to detect suspicious masses and diagnose breast cancer. Previous studies applied deep learning models to process each view separately and concatenate the features from the two views to detect and classify masses. However, direct concatenation is not enough to uncover the relationship between the masses that appear in the two views because they can substantially vary in shape, size, and texture. The relationship between the two views should be established by matching correspondence between their extracted masses. This paper presents a dual-view deep convolutional neural network (DV-DCNN) model for matching masses detected from the two views by establishing correspondence between their extracted patches, which leads to more robust mass detection. METHODS Given a pair of patches as input, the presented model determines whether these patches represent the same mass or not. The network contains two parts: a feature extraction part using tied dense blocks, and a neighborhood patch matching part with three consecutive layers, i.e., a cross-input neighborhood differences layer to find the relationship between the two patches, a patch summary features layer to define a summary of the neighborhood differences, and an across-patch features layer to learn a higher-level representation across neighborhood differences. RESULTS To evaluate the model's performance in diverse cases, several experimental scenarios were followed for training and testing using two public datasets, i.e., CBIS-DDSM and INbreast. We also evaluated the contribution of our mass-matching model within a mass detection framework.
Experiments show that DV-DCNN outperforms other related deep learning models and demonstrate that the detection results improve when using our model. CONCLUSIONS Matching potential masses between two different views of the same breast leads to more robust mass detection. Experimental results demonstrate the efficacy of a dual-view deep learning model in matching masses, which helps in increasing the accuracy of mass detection and decreasing the false positive rates.
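A cross-input neighborhood differences layer of the kind the abstract describes can be sketched as follows. This numpy version works on single-channel feature maps and is only an illustration of the idea (comparing each location in one map against a small neighborhood in the other), not the DV-DCNN implementation.

```python
import numpy as np

def cross_input_neighborhood_diff(f, g, k=3):
    """Cross-input neighborhood differences for two feature maps of shape (H, W).
    For each location (i, j), subtract the k x k neighborhood of g around
    (i, j) from f[i, j], yielding an (H, W, k, k) difference volume that is
    robust to small misalignments between the two views."""
    h, w = f.shape
    pad = k // 2
    gp = np.pad(g, pad, mode="edge")          # replicate borders so every window is full
    out = np.empty((h, w, k, k))
    for i in range(h):
        for j in range(w):
            out[i, j] = f[i, j] - gp[i:i + k, j:j + k]
    return out

f = np.ones((4, 4))
g = np.ones((4, 4))
diff = cross_input_neighborhood_diff(f, g)    # identical inputs: all-zero differences
```

In the full model this difference volume would feed the "patch summary features" and "across-patch features" layers described above.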
Affiliation(s)
- Manal AlGhamdi: Umm Al-Qura University, Department of Computer Science, Saudi Arabia
|
29
|
Lei YM, Yin M, Yu MH, Yu J, Zeng SE, Lv WZ, Li J, Ye HR, Cui XW, Dietrich CF. Artificial Intelligence in Medical Imaging of the Breast. Front Oncol 2021; 11:600557. [PMID: 34367938 PMCID: PMC8339920 DOI: 10.3389/fonc.2021.600557] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2020] [Accepted: 07/07/2021] [Indexed: 12/24/2022] Open
Abstract
Artificial intelligence (AI) has invaded our daily lives, and in the last decade, there have been very promising applications of AI in the field of medicine, including medical imaging, in vitro diagnosis, intelligent rehabilitation, and prognosis. Breast cancer is one of the common malignant tumors in women and seriously threatens women’s physical and mental health. Early screening for breast cancer via mammography, ultrasound and magnetic resonance imaging (MRI) can significantly improve the prognosis of patients. AI has shown excellent performance in image recognition tasks and has been widely studied in breast cancer screening. This paper introduces the background of AI and its application in breast medical imaging (mammography, ultrasound and MRI), such as in the identification, segmentation and classification of lesions; breast density assessment; and breast cancer risk assessment. In addition, we also discuss the challenges and future perspectives of the application of AI in medical imaging of the breast.
Affiliation(s)
- Yu-Meng Lei: Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Miao Yin: Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Mei-Hui Yu: Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Jing Yu: Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Shu-E Zeng: Department of Medical Ultrasound, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Wen-Zhi Lv: Department of Artificial Intelligence, Julei Technology, Wuhan, China
- Jun Li: Department of Medical Ultrasound, The First Affiliated Hospital of Medical College, Shihezi University, Xinjiang, China
- Hua-Rong Ye: Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Xin-Wu Cui: Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Christoph F Dietrich: Department Allgemeine Innere Medizin (DAIM), Kliniken Beau Site, Salem und Permanence, Bern, Switzerland
|
30
|
Boniolo F, Dorigatti E, Ohnmacht AJ, Saur D, Schubert B, Menden MP. Artificial intelligence in early drug discovery enabling precision medicine. Expert Opin Drug Discov 2021; 16:991-1007. [PMID: 34075855 DOI: 10.1080/17460441.2021.1918096] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
Abstract
Introduction: Precision medicine is the concept of treating diseases based on environmental factors, lifestyles, and molecular profiles of patients. This approach has been found to increase success rates of clinical trials and accelerate drug approvals. However, current precision medicine applications in early drug discovery use only a handful of molecular biomarkers to make decisions, whilst clinics gear up to capture the full molecular landscape of patients in the near future. This deep multi-omics characterization demands new analysis strategies to identify appropriate treatment regimens, which we envision will be pioneered by artificial intelligence. Areas covered: In this review, the authors discuss the current state of drug discovery in precision medicine and present their vision of how artificial intelligence will impact biomarker discovery and drug design. Expert opinion: Precision medicine is expected to revolutionize modern medicine; however, its traditional form focuses on a few biomarkers and is thus not equipped to leverage the full power of molecular landscapes. For learning how drug development can be tailored to the heterogeneity of patients across their molecular profiles, artificial intelligence algorithms are the next frontier in precision medicine; they will enable a fully personalized approach to drug design, ultimately impacting clinical practice.
Affiliation(s)
- Fabio Boniolo: Institute of Computational Biology, Helmholtz Zentrum München - German Research Centre for Environmental Health, Munich, Germany; School of Medicine, Chair of Translational Cancer Research and Institute for Experimental Cancer Therapy, Klinikum Rechts der Isar, Technische Universität München, Munich, Germany
- Emilio Dorigatti: Institute of Computational Biology, Helmholtz Zentrum München - German Research Centre for Environmental Health, Munich, Germany; Statistical Learning and Data Science, Department of Statistics, Ludwig Maximilian Universität München, Munich, Germany
- Alexander J Ohnmacht: Institute of Computational Biology, Helmholtz Zentrum München - German Research Centre for Environmental Health, Munich, Germany; Department of Biology, Ludwig-Maximilians University Munich, Martinsried, Germany
- Dieter Saur: School of Medicine, Chair of Translational Cancer Research and Institute for Experimental Cancer Therapy, Klinikum Rechts der Isar, Technische Universität München, Munich, Germany
- Benjamin Schubert: Institute of Computational Biology, Helmholtz Zentrum München - German Research Centre for Environmental Health, Munich, Germany; Department of Mathematics, Technical University of Munich, Garching, Germany
- Michael P Menden: Institute of Computational Biology, Helmholtz Zentrum München - German Research Centre for Environmental Health, Munich, Germany; Department of Biology, Ludwig-Maximilians University Munich, Martinsried, Germany; German Centre for Diabetes Research (DZD e.V.), Neuherberg, Germany
|
31
|
Oyelade ON, Ezugwu AE. A deep learning model using data augmentation for detection of architectural distortion in whole and patches of images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102366] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
32
|
Shen Y, Wu N, Phang J, Park J, Liu K, Tyagi S, Heacock L, Kim SG, Moy L, Cho K, Geras KJ. An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization. Med Image Anal 2021; 68:101908. [PMID: 33383334 PMCID: PMC7828643 DOI: 10.1016/j.media.2020.101908] [Citation(s) in RCA: 71] [Impact Index Per Article: 17.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Revised: 11/12/2020] [Accepted: 11/13/2020] [Indexed: 12/12/2022]
Abstract
Medical images differ from natural images in significantly higher resolutions and smaller regions of interest. Because of these differences, neural network architectures that work well for natural images might not be applicable to medical image analysis. In this work, we propose a novel neural network model to address these unique properties of medical images. This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions. It then applies another higher-capacity network to collect details from chosen regions. Finally, it employs a fusion module that aggregates global and local information to make a prediction. While existing methods often require lesion segmentation during training, our model is trained with only image-level labels and can generate pixel-level saliency maps indicating possible malignant findings. We apply the model to screening mammography interpretation: predicting the presence or absence of benign and malignant lesions. On the NYU Breast Cancer Screening Dataset, our model outperforms (AUC = 0.93) ResNet-34 and Faster R-CNN in classifying breasts with malignant findings. On the CBIS-DDSM dataset, our model achieves performance (AUC = 0.858) on par with state-of-the-art approaches. Compared to ResNet-34, our model is 4.1x faster for inference while using 78.4% less GPU memory. Furthermore, we demonstrate, in a reader study, that our model surpasses radiologist-level AUC by a margin of 0.11.
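The global-then-local strategy described above (a cheap pass over the whole image selects informative regions for a higher-capacity network) can be sketched as follows. The saliency function, patch grid, and toy image are illustrative stand-ins, not the paper's learned networks.

```python
import numpy as np

def select_informative_patches(image, saliency_fn, patch=64, k=2):
    """Global stage: score non-overlapping patches with a cheap saliency
    function, then return the k highest-scoring patches for a second,
    higher-capacity network to inspect in detail."""
    h, w = image.shape
    scores, patches = [], []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = image[i:i + patch, j:j + patch]
            scores.append(saliency_fn(p))
            patches.append(p)
    top = np.argsort(scores)[::-1][:k]        # indices of the k most salient patches
    return [patches[t] for t in top]

rng = np.random.default_rng(2)
img = rng.normal(0, 0.1, size=(256, 256))     # stand-in for a high-resolution mammogram
img[64:128, 64:128] += 3.0                    # bright region standing in for a lesion

top_patches = select_informative_patches(img, saliency_fn=lambda p: p.mean(), k=2)
```

In the actual model both the saliency map and the patch classifier are learned, and a fusion module combines global and local predictions; the sketch only shows why processing a handful of regions at full resolution is far cheaper than running a high-capacity network over the entire image.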
Affiliation(s)
- Yiqiu Shen: Center for Data Science, New York University, 60 5th Ave, New York, NY 10011, USA
- Nan Wu: Center for Data Science, New York University, 60 5th Ave, New York, NY 10011, USA
- Jason Phang: Center for Data Science, New York University, 60 5th Ave, New York, NY 10011, USA
- Jungkyu Park: Department of Radiology, NYU School of Medicine, 530 1st Ave, New York, NY 10016, USA
- Kangning Liu: Center for Data Science, New York University, 60 5th Ave, New York, NY 10011, USA
- Sudarshini Tyagi: Department of Computer Science, Courant Institute of Mathematical Sciences, New York University, 251 Mercer St, New York, NY 10012, USA
- Laura Heacock: Department of Radiology, NYU School of Medicine, 530 1st Ave, New York, NY 10016, USA; Perlmutter Cancer Center, NYU Langone Health, 160 E 34th St, New York, NY 10016, USA
- S Gene Kim: Department of Radiology, NYU School of Medicine, 530 1st Ave, New York, NY 10016, USA; Center for Advanced Imaging Innovation and Research, NYU Langone Health, 660 1st Ave, New York, NY 10016, USA; Perlmutter Cancer Center, NYU Langone Health, 160 E 34th St, New York, NY 10016, USA
- Linda Moy: Department of Radiology, NYU School of Medicine, 530 1st Ave, New York, NY 10016, USA; Center for Advanced Imaging Innovation and Research, NYU Langone Health, 660 1st Ave, New York, NY 10016, USA; Perlmutter Cancer Center, NYU Langone Health, 160 E 34th St, New York, NY 10016, USA
- Kyunghyun Cho: Center for Data Science, New York University, 60 5th Ave, New York, NY 10011, USA; Department of Computer Science, Courant Institute of Mathematical Sciences, New York University, 251 Mercer St, New York, NY 10012, USA
- Krzysztof J Geras: Center for Data Science, New York University, 60 5th Ave, New York, NY 10011, USA; Department of Radiology, NYU School of Medicine, 530 1st Ave, New York, NY 10016, USA; Center for Advanced Imaging Innovation and Research, NYU Langone Health, 660 1st Ave, New York, NY 10016, USA
|
33
|
Liu W, Cheng Y, Liu Z, Liu C, Cattell R, Xie X, Wang Y, Yang X, Ye W, Liang C, Li J, Gao Y, Huang C, Liang C. Preoperative Prediction of Ki-67 Status in Breast Cancer with Multiparametric MRI Using Transfer Learning. Acad Radiol 2021; 28:e44-e53. [PMID: 32278690 DOI: 10.1016/j.acra.2020.02.006] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2019] [Revised: 01/31/2020] [Accepted: 02/01/2020] [Indexed: 02/05/2023]
Abstract
RATIONALE AND OBJECTIVES Ki-67 is one of the most important biomarkers of breast cancer, traditionally measured invasively via immunohistochemistry. In this study, deep learning-based radiomics models were established for preoperative prediction of Ki-67 status using multiparametric magnetic resonance imaging (mp-MRI). MATERIALS AND METHODS A total of 328 eligible patients were retrospectively reviewed [training dataset (n = 230) and a temporal validation dataset (n = 98)]. Deep learning imaging features were extracted from T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and contrast-enhanced T1-weighted imaging (T1+C). Transfer learning techniques were used to construct four feature sets based on the three individual MR sequences and their combination (i.e., mp-MRI). Multilayer perceptron classifiers were trained for final prediction of Ki-67 status. The Mann-Whitney U test was used to compare the predictive performance of the individual models. RESULTS The areas under the curve (AUC) of the models based on T2WI, T1+C, DWI, and mp-MRI were 0.727, 0.873, 0.674, and 0.888 in the training dataset, respectively, and 0.706, 0.829, 0.643, and 0.875 in the validation dataset, respectively. The AUC of the mp-MRI classification model was significantly better than that of each individual-sequence model (all p < 0.01). CONCLUSION In clinical practice, a noninvasive approach to improve the performance of radiomics in preoperative prediction of Ki-67 status can be provided by extracting breast cancer-specific structural and functional features from mp-MRI images obtained from conventional scanning sequences using advanced deep learning methods. This could further personalize medicine and computer-aided diagnosis.
Affiliation(s)
- Weixiao Liu: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, No. 106, Zhongshan 2nd Road, Guangzhou 510080, Guangdong, PR China; Graduate College, Shantou University Medical College, Shantou, Guangdong, PR China
- Yulin Cheng: School of Computer Science and Engineering, South China University of Technology, Guangzhou, Guangdong, PR China
- Zaiyi Liu: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, No. 106, Zhongshan 2nd Road, Guangzhou 510080, Guangdong, PR China
- Chunling Liu: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, No. 106, Zhongshan 2nd Road, Guangzhou 510080, Guangdong, PR China
- Renee Cattell: Department of Biomedical Engineering, Stony Brook University, Stony Brook, New York
- Xinyan Xie: School of Computer Science and Engineering, South China University of Technology, Guangzhou, Guangdong, PR China
- Yingyi Wang: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, No. 106, Zhongshan 2nd Road, Guangzhou 510080, Guangdong, PR China
- Xiaojun Yang: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, No. 106, Zhongshan 2nd Road, Guangzhou 510080, Guangdong, PR China
- Weitao Ye: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, No. 106, Zhongshan 2nd Road, Guangzhou 510080, Guangdong, PR China
- Cuishan Liang: Department of Radiology, Foshan Fetal Medicine Institute, Foshan Maternity and Children's Healthcare Hospital Affiliated to Southern Medical University, Foshan, Guangdong, PR China
- Jiao Li: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, No. 106, Zhongshan 2nd Road, Guangzhou 510080, Guangdong, PR China
- Ying Gao: School of Computer Science and Engineering, South China University of Technology, Guangzhou, Guangdong, PR China
- Chuan Huang: Department of Biomedical Engineering, Stony Brook University, Stony Brook, New York; Department of Radiology, Stony Brook Medicine, Stony Brook, New York; Department of Psychiatry, Stony Brook Medicine, Stony Brook, New York
- Changhong Liang: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, No. 106, Zhongshan 2nd Road, Guangzhou 510080, Guangdong, PR China
|
34
|
Ou WC, Polat D, Dogan BE. Deep learning in breast radiology: current progress and future directions. Eur Radiol 2021; 31:4872-4885. [PMID: 33449174] [DOI: 10.1007/s00330-020-07640-9]
Abstract
This review provides an overview of current applications of deep learning methods within breast radiology. The diagnostic capabilities of deep learning in breast radiology continue to improve, giving rise to the prospect that these methods may be integrated not only into detection and classification of breast lesions, but also into areas such as risk estimation and prediction of tumor responses to therapy. Remaining challenges include limited availability of high-quality data with expert annotations and ground truth determinations, the need for further validation of initial results, and unresolved medicolegal considerations. KEY POINTS: • Deep learning (DL) continues to push the boundaries of what can be accomplished by artificial intelligence (AI) in breast imaging with distinct advantages over conventional computer-aided detection. • DL-based AI has the potential to augment the capabilities of breast radiologists by improving diagnostic accuracy, increasing efficiency, and supporting clinical decision-making through prediction of prognosis and therapeutic response. • Remaining challenges to DL implementation include a paucity of prospective data on DL utilization and yet unresolved medicolegal questions regarding increasing AI utilization.
Affiliation(s)
- William C Ou
- Department of Radiology, Seay Biomedical Building, University of Texas Southwestern Medical Center, 2201 Inwood Road, Dallas, TX, 75390, USA.
- Dogan Polat
- Department of Radiology, Seay Biomedical Building, University of Texas Southwestern Medical Center, 2201 Inwood Road, Dallas, TX, 75390, USA
- Basak E Dogan
- Department of Radiology, Seay Biomedical Building, University of Texas Southwestern Medical Center, 2201 Inwood Road, Dallas, TX, 75390, USA
35
Suh YJ, Jung J, Cho BJ. Automated Breast Cancer Detection in Digital Mammograms of Various Densities via Deep Learning. J Pers Med 2020; 10:211. [PMID: 33172076] [PMCID: PMC7711783] [DOI: 10.3390/jpm10040211]
Abstract
Mammography plays an important role in screening breast cancer among females, and artificial intelligence has enabled the automated detection of diseases on medical images. This study aimed to develop a deep learning model detecting breast cancer in digital mammograms of various densities and to evaluate the model performance compared to previous studies. From 1501 subjects who underwent digital mammography between February 2007 and May 2015, craniocaudal and mediolateral view mammograms were included and concatenated for each breast, ultimately producing 3002 merged images. Two convolutional neural networks were trained to detect any malignant lesion on the merged images. The performances were tested using 301 merged images from 284 subjects and compared to a meta-analysis including 12 previous deep learning studies. The mean area under the receiver-operating characteristic curve (AUC) for detecting breast cancer in each merged mammogram was 0.952 ± 0.005 by DenseNet-169 and 0.954 ± 0.020 by EfficientNet-B5, respectively. The performance for malignancy detection decreased as breast density increased (density A, mean AUC = 0.984 vs. density D, mean AUC = 0.902 by DenseNet-169). When patients’ age was used as a covariate for malignancy detection, the performance showed little change (mean AUC, 0.953 ± 0.005). The mean sensitivity and specificity of the DenseNet-169 (87 and 88%, respectively) surpassed the mean values (81 and 82%, respectively) obtained in a meta-analysis. Deep learning would work efficiently in screening breast cancer in digital mammograms of various densities, which could be maximized in breasts with lower parenchyma density.
Affiliation(s)
- Yong Joon Suh
- Department of Breast and Endocrine Surgery, Hallym University Sacred Heart Hospital, Anyang 14068, Korea;
- Jaewon Jung
- Medical Artificial Intelligence Center, Hallym University Medical Center, Anyang 14068, Korea
- Bum-Joo Cho
- Medical Artificial Intelligence Center, Hallym University Medical Center, Anyang 14068, Korea
- Department of Ophthalmology, Hallym University Sacred Heart Hospital, Anyang 14068, Korea
- Correspondence: ; Tel.: +82-31-380-3835; Fax: +82-31-380-3837
36
Pelka O, Friedrich CM, Nensa F, Mönninghoff C, Bloch L, Jöckel KH, Schramm S, Sanchez Hoffmann S, Winkler A, Weimar C, Jokisch M. Sociodemographic data and APOE-ε4 augmentation for MRI-based detection of amnestic mild cognitive impairment using deep learning systems. PLoS One 2020; 15:e0236868. [PMID: 32976486] [PMCID: PMC7518632] [DOI: 10.1371/journal.pone.0236868]
Abstract
Detection and diagnosis of early and subclinical stages of Alzheimer's Disease (AD) play an essential role in the implementation of intervention and prevention strategies. Neuroimaging techniques predominantly provide insight into anatomic structure changes associated with AD. Deep learning methods have been extensively applied towards creating and evaluating models capable of differentiating between cognitively unimpaired individuals, patients with Mild Cognitive Impairment (MCI), and patients with AD dementia. Several published approaches apply information fusion techniques, providing ways of combining several input sources in the medical domain, which contributes broader and enriched knowledge. The aim of this paper is to fuse sociodemographic data such as age, marital status, education and gender, and genetic data (presence of an apolipoprotein E (APOE)-ε4 allele) with Magnetic Resonance Imaging (MRI) scans. This enables enriched multi-modal features that adequately represent the MRI scan visually and are adopted for creating and modeling classification systems capable of detecting amnestic MCI (aMCI). To fully utilize the potential of deep convolutional neural networks, two extra color layers denoting contrast intensified and blurred image adaptations are virtually augmented to each MRI scan, completing the Red-Green-Blue (RGB) color channels. Deep convolutional activation features (DeCAF) are extracted from the average pooling layer of the deep learning system Inception_v3. These features from the fused MRI scans are used as visual representation for the Long Short-Term Memory (LSTM) based Recurrent Neural Network (RNN) classification model. The proposed approach is evaluated on a sub-study containing 120 participants (aMCI = 61 and cognitively unimpaired = 59) of the Heinz Nixdorf Recall (HNR) Study with a baseline model accuracy of 76%. Further evaluation was conducted on the ADNI Phase 1 dataset with 624 participants (aMCI = 397 and cognitively unimpaired = 227) with a baseline model accuracy of 66.27%. Experimental results show that the proposed approach achieves 90% accuracy and 0.90 F1-Score at classification of aMCI vs. cognitively unimpaired participants on the HNR Study dataset, and 77% accuracy and 0.83 F1-Score on the ADNI dataset.
Affiliation(s)
- Obioma Pelka
- Department of Computer Science, University of Applied Sciences and Arts Dortmund (FHDO), Dortmund, NRW, Germany
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, University of Duisburg-Essen, Essen, NRW, Germany
- Christoph M. Friedrich
- Department of Computer Science, University of Applied Sciences and Arts Dortmund (FHDO), Dortmund, NRW, Germany
- Institute for Medical Informatics, Biometry and Epidemiology (IMIBE), University Hospital Essen, University of Duisburg-Essen, Essen, NRW, Germany
- Felix Nensa
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, University of Duisburg-Essen, Essen, NRW, Germany
- Louise Bloch
- Department of Computer Science, University of Applied Sciences and Arts Dortmund (FHDO), Dortmund, NRW, Germany
- Institute for Medical Informatics, Biometry and Epidemiology (IMIBE), University Hospital Essen, University of Duisburg-Essen, Essen, NRW, Germany
- Karl-Heinz Jöckel
- Institute for Medical Informatics, Biometry and Epidemiology (IMIBE), University Hospital Essen, University of Duisburg-Essen, Essen, NRW, Germany
- Sara Schramm
- Institute for Medical Informatics, Biometry and Epidemiology (IMIBE), University Hospital Essen, University of Duisburg-Essen, Essen, NRW, Germany
- Sarah Sanchez Hoffmann
- Department of Neurology, University Hospital of Essen, University of Duisburg-Essen, Essen, NRW, Germany
- Angela Winkler
- Department of Neurology, University Hospital of Essen, University of Duisburg-Essen, Essen, NRW, Germany
- Christian Weimar
- Department of Neurology, University Hospital of Essen, University of Duisburg-Essen, Essen, NRW, Germany
- Martha Jokisch
- Department of Neurology, University Hospital of Essen, University of Duisburg-Essen, Essen, NRW, Germany
37
Bruno A, Ardizzone E, Vitabile S, Midiri M. A Novel Solution Based on Scale Invariant Feature Transform Descriptors and Deep Learning for the Detection of Suspicious Regions in Mammogram Images. J Med Signals Sens 2020; 10:158-173. [PMID: 33062608] [PMCID: PMC7528986] [DOI: 10.4103/jmss.jmss_31_19]
Abstract
BACKGROUND Deep learning methods have become popular for their high-performance rate in the classification and detection of events in computer vision tasks. The transfer learning paradigm is widely adopted to apply pretrained convolutional neural networks (CNNs) to medical domains, overcoming the problem of the scarcity of public datasets. Some investigations to assess transfer learning knowledge inference abilities in the context of mammogram screening and possible combinations with unsupervised techniques are in progress. METHODS We propose a novel technique for the detection of suspicious regions in mammograms that consists of the combination of two approaches based on scale invariant feature transform (SIFT) keypoints and transfer learning with pretrained CNNs such as PyramidNet and AlexNet fine-tuned on digital mammograms generated by different mammography devices. Preprocessing, feature extraction, and selection steps characterize the SIFT-based method, while the deep learning network validates the candidate suspicious regions detected by the SIFT method. RESULTS The experiments conducted on both the mini-MIAS dataset and our new public dataset Suspicious Region Detection on Mammogram from PP (SuReMaPP) of 384 digital mammograms exhibit high performance compared to several state-of-the-art methods. Our solution reaches 98% sensitivity and 90% specificity on SuReMaPP and 94% sensitivity and 91% specificity on mini-MIAS. CONCLUSIONS The experimental sessions conducted so far prompt us to further investigate the power of transfer learning over different CNNs and possible combinations with unsupervised techniques. Transfer learning accuracy may decrease when the training and testing images come from mammography devices with different properties.
Affiliation(s)
- Alessandro Bruno
- Faculty of Media and Communication, Department - NCCA (National Centre for Computer Animation) at Bournemouth University, Poole, Dorset, United Kingdom
- Salvatore Vitabile
- Department of Biomedicine, Neuroscience and Advanced Diagnostic at Palermo University, Palermo, Italy
- Massimo Midiri
- Department of Biomedicine, Neuroscience and Advanced Diagnostic at Palermo University, Palermo, Italy
38
Koitka S, Kim MS, Qu M, Fischer A, Friedrich CM, Nensa F. Mimicking the radiologists' workflow: Estimating pediatric hand bone age with stacked deep neural networks. Med Image Anal 2020; 64:101743. [PMID: 32540698] [DOI: 10.1016/j.media.2020.101743]
Abstract
Pediatric endocrinologists regularly order radiographs of the left hand to estimate the degree of bone maturation in order to assess their patients for advanced or delayed growth, physical development, and to monitor consecutive therapeutic measures. The reading of such images is a labor-intensive task that requires a lot of experience and is normally performed by highly trained experts like pediatric radiologists. In this paper we build an automated system for pediatric bone age estimation that mimics and accelerates the workflow of the radiologist without breaking it. The complete system is based on two neural network based models: on the one hand a detector network, which identifies the ossification areas, on the other hand gender and region specific regression networks, which estimate the bone age from the detected areas. With a small annotated dataset an ossification area detection network can be trained, which is stable enough to work as part of a multi-stage approach. Furthermore, our system achieves competitive results on the RSNA Pediatric Bone Age Challenge test set with an average error of 4.56 months. In contrast to other approaches, especially purely encoder-based architectures, our two-stage approach provides self-explanatory results. By detecting and evaluating the individual ossification areas, thus simulating the workflow of the Tanner-Whitehouse procedure, the results are interpretable for a radiologist.
Affiliation(s)
- Sven Koitka
- University Hospital Essen, Institute of Diagnostic and Interventional Radiology and Neuroradiology, Hufelandstr. 55, Essen 45147, Germany.
- Moon S Kim
- University Hospital Essen, Institute of Diagnostic and Interventional Radiology and Neuroradiology, Hufelandstr. 55, Essen 45147, Germany
- Ming Qu
- University of Bonn, Department of Computer Science, Endenicher Allee 19A, Bonn 53115, Germany
- Asja Fischer
- Ruhr University Bochum, Department of Mathematics, Universitätsstr. 150, Bochum 44801, Germany
- Christoph M Friedrich
- University of Applied Sciences and Arts Dortmund, Department of Computer Science, Emil-Figge-Str. 42, Dortmund 44227, Germany; University Hospital Essen, Institute for Medical Informatics, Biometry and Epidemiology (IMIBE), Hufelandstr. 55, Essen 45147, Germany
- Felix Nensa
- University Hospital Essen, Institute of Diagnostic and Interventional Radiology and Neuroradiology, Hufelandstr. 55, Essen 45147, Germany
39
Wong DJ, Gandomkar Z, Wu W, Zhang G, Gao W, He X, Wang Y, Reed W. Artificial intelligence and convolution neural networks assessing mammographic images: a narrative literature review. J Med Radiat Sci 2020; 67:134-142. [PMID: 32134206] [PMCID: PMC7276180] [DOI: 10.1002/jmrs.385]
Abstract
Studies have shown that the use of artificial intelligence can reduce errors in medical image assessment. The diagnosis of breast cancer is an essential task; however, diagnosis can include 'detection' and 'interpretation' errors. Studies to reduce these errors have shown the feasibility of using convolution neural networks (CNNs). This narrative review presents recent studies in diagnosing mammographic malignancy investigating the accuracy and reliability of these CNNs. Databases including ScienceDirect, PubMed, MEDLINE, British Medical Journal and Medscape were searched using the terms 'convolutional neural network or artificial intelligence', 'breast neoplasms [MeSH] or breast cancer or breast carcinoma' and 'mammography [MeSH Terms]'. Articles collected were screened under the inclusion and exclusion criteria, accounting for the publication date and exclusive use of mammography images, and included only literature in English. After extracting data, results were compared and discussed. This review included 33 studies and identified four recurring categories of studies: the differentiation of benign and malignant masses, the localisation of masses, cancer-containing and cancer-free breast tissue differentiation and breast classification based on breast density. CNN's application in detecting malignancy in mammography appears promising but requires further standardised investigations before potentially becoming an integral part of the diagnostic routine in mammography.
Affiliation(s)
- Dennis Jay Wong
- Discipline of Medical Imaging Sciences, The University of Sydney, Lidcombe, New South Wales, Australia
- Ziba Gandomkar
- Discipline of Medical Imaging Sciences, The University of Sydney, Lidcombe, New South Wales, Australia
- Wan-Jing Wu
- Discipline of Medical Imaging Sciences, The University of Sydney, Lidcombe, New South Wales, Australia
- Guijing Zhang
- Discipline of Medical Imaging Sciences, The University of Sydney, Lidcombe, New South Wales, Australia
- Wushuang Gao
- Discipline of Medical Imaging Sciences, The University of Sydney, Lidcombe, New South Wales, Australia
- Xiaoying He
- Discipline of Medical Imaging Sciences, The University of Sydney, Lidcombe, New South Wales, Australia
- Yunuo Wang
- Discipline of Medical Imaging Sciences, The University of Sydney, Lidcombe, New South Wales, Australia
- Warren Reed
- Discipline of Medical Imaging Sciences, The University of Sydney, Lidcombe, New South Wales, Australia
40
Rawat RR, Ortega I, Roy P, Sha F, Shibata D, Ruderman D, Agus DB. Deep learned tissue "fingerprints" classify breast cancers by ER/PR/Her2 status from H&E images. Sci Rep 2020; 10:7275. [PMID: 32350370] [PMCID: PMC7190637] [DOI: 10.1038/s41598-020-64156-4]
Abstract
Because histologic types are subjective and difficult to reproduce between pathologists, tissue morphology often takes a back seat to molecular testing for the selection of breast cancer treatments. This work explores whether a deep-learning algorithm can learn objective histologic H&E features that predict the clinical subtypes of breast cancer, as assessed by immunostaining for estrogen, progesterone, and Her2 receptors (ER/PR/Her2). Translating deep learning to this and related problems in histopathology presents a challenge due to the lack of large, well-annotated data sets, which are typically required for the algorithms to learn statistically significant discriminatory patterns. To overcome this limitation, we introduce the concept of “tissue fingerprints,” which leverages large, unannotated datasets in a label-free manner to learn H&E features that can distinguish one patient from another. The hypothesis is that training the algorithm to learn the morphological differences between patients will implicitly teach it about the biologic variation between them. Following this training internship, we used the features the network learned, which we call “fingerprints,” to predict ER, PR, and Her2 status in two datasets. Despite the discovery dataset being relatively small by the standards of the machine learning community (n = 939), fingerprints enabled the determination of ER, PR, and Her2 status from whole slide H&E images with 0.89 AUC (ER), 0.81 AUC (PR), and 0.79 AUC (Her2) on a large, independent test set (n = 2531). Tissue fingerprints are concise but meaningful histopathologic image representations that capture biological information and may enable machine learning algorithms that go beyond the traditional ER/PR/Her2 clinical groupings by directly predicting theragnosis.
Affiliation(s)
- Rishi R Rawat
- Lawrence J. Ellison Institute for Transformative Medicine, University of Southern California, 12414 Exposition Blvd, Los Angeles, CA, 90064, USA
- Itzel Ortega
- Lawrence J. Ellison Institute for Transformative Medicine, University of Southern California, 12414 Exposition Blvd, Los Angeles, CA, 90064, USA
- Preeyam Roy
- Lawrence J. Ellison Institute for Transformative Medicine, University of Southern California, 12414 Exposition Blvd, Los Angeles, CA, 90064, USA
- Fei Sha
- DASH Center at USC, 1002 Childs Way, MCB 114, Los Angeles, CA, 90089-0005, USA
- Darryl Shibata
- Department of Pathology, University of Southern California Health Sciences Campus, NOR 1441 Eastlake Ave, Los Angeles, 90033, USA
- Daniel Ruderman
- Lawrence J. Ellison Institute for Transformative Medicine, University of Southern California, 12414 Exposition Blvd, Los Angeles, CA, 90064, USA
- David B Agus
- Lawrence J. Ellison Institute for Transformative Medicine, University of Southern California, 12414 Exposition Blvd, Los Angeles, CA, 90064, USA
41
Computer-Aided Diagnosis of Skin Diseases Using Deep Neural Networks. Appl Sci (Basel) 2020. [DOI: 10.3390/app10072488]
Abstract
Propensity of skin diseases to manifest in a variety of forms, lack and maldistribution of qualified dermatologists, and exigency of timely and accurate diagnosis call for automated Computer-Aided Diagnosis (CAD). This study aims at extending previous works on CAD for dermatology by exploring the potential of Deep Learning to classify hundreds of skin diseases, improving classification performance, and utilizing disease taxonomy. We trained state-of-the-art Deep Neural Networks on two of the largest publicly available skin image datasets, namely DermNet and ISIC Archive, and also leveraged disease taxonomy, where available, to improve classification performance of these models. On DermNet we establish new state-of-the-art with 80% accuracy and 98% Area Under the Curve (AUC) for classification of 23 diseases. We also set precedence for classifying all 622 unique sub-classes in this dataset and achieved 67% accuracy and 98% AUC. On ISIC Archive we classified all 7 diseases with 93% average accuracy and 99% AUC. This study shows that Deep Learning has great potential to classify a vast array of skin diseases with near-human accuracy and far better reproducibility. It can have a promising role in practical real-time skin disease diagnosis by assisting physicians in large-scale screening using clinical or dermoscopic images.
42
Wu N, Phang J, Park J, Shen Y, Huang Z, Zorin M, Jastrzebski S, Fevry T, Katsnelson J, Kim E, Wolfson S, Parikh U, Gaddam S, Lin LLY, Ho K, Weinstein JD, Reig B, Gao Y, Toth H, Pysarenko K, Lewin A, Lee J, Airola K, Mema E, Chung S, Hwang E, Samreen N, Kim SG, Heacock L, Moy L, Cho K, Geras KJ. Deep Neural Networks Improve Radiologists' Performance in Breast Cancer Screening. IEEE Trans Med Imaging 2020; 39:1184-1194. [PMID: 31603772] [PMCID: PMC7427471] [DOI: 10.1109/tmi.2019.2945514]
Abstract
We present a deep convolutional neural network for breast cancer screening exam classification, trained and evaluated on over 200,000 exams (over 1,000,000 images). Our network achieves an AUC of 0.895 in predicting the presence of cancer in the breast, when tested on the screening population. We attribute the high accuracy to a few technical advances. 1) Our network's novel two-stage architecture and training procedure, which allows us to use a high-capacity patch-level network to learn from pixel-level labels alongside a network learning from macroscopic breast-level labels. 2) A custom ResNet-based network used as a building block of our model, whose balance of depth and width is optimized for high-resolution medical images. 3) Pretraining the network on screening BI-RADS classification, a related task with more noisy labels. 4) Combining multiple input views in an optimal way among a number of possible choices. To validate our model, we conducted a reader study with 14 readers, each reading 720 screening mammogram exams, and show that our model is as accurate as experienced radiologists when presented with the same data. We also show that a hybrid model, averaging the probability of malignancy predicted by a radiologist with a prediction of our neural network, is more accurate than either of the two separately. To further understand our results, we conduct a thorough analysis of our network's performance on different subpopulations of the screening population, the model's design, training procedure, errors, and properties of its internal representations. Our best models are publicly available at https://github.com/nyukat/breast_cancer_classifier.
43
Abstract
Artificial intelligence (AI) has contributed substantially to the resolution of a variety of biomedical problems, including cancer, over the past decade. Deep learning, a subfield of AI that is highly flexible and supports automatic feature extraction, is increasingly being applied in various areas of both basic and clinical cancer research. In this review, we describe numerous recent examples of the application of AI in oncology, including cases in which deep learning has efficiently solved problems that were previously thought to be unsolvable, and we address obstacles that must be overcome before such application can become more widespread. We also highlight resources and datasets that can help harness the power of AI for cancer research. The development of innovative approaches to and applications of AI will yield important insights in oncology in the coming decade.
Affiliation(s)
- Hideyuki Shimizu
- Department of Molecular and Cellular Biology, Medical Institute of Bioregulation, Kyushu University, Fukuoka, Japan
- Keiichi I. Nakayama
- Department of Molecular and Cellular Biology, Medical Institute of Bioregulation, Kyushu University, Fukuoka, Japan
44
Murali N, Kucukkaya A, Petukhova A, Onofrey J, Chapiro J. Supervised Machine Learning in Oncology: A Clinician's Guide. ACTA ACUST UNITED AC 2020; 4:73-81. [PMID: 32869010] [DOI: 10.1055/s-0040-1705097]
Abstract
The widespread adoption of electronic health records has resulted in an abundance of imaging and clinical information. New data-processing technologies have the potential to revolutionize the practice of medicine by deriving clinically meaningful insights from large-volume data. Among those techniques is supervised machine learning, the study of computer algorithms that use self-improving models that learn from labeled data to solve problems. One clinical area of application for supervised machine learning is within oncology, where machine learning has been used for cancer diagnosis, staging, and prognostication. This review describes a framework to aid clinicians in understanding and critically evaluating studies applying supervised machine learning methods. Additionally, we describe current studies applying supervised machine learning techniques to the diagnosis, prognostication, and treatment of cancer, with a focus on gastroenterological cancers and other related pathologies.
Affiliation(s)
- Nikitha Murali
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut
- Ahmet Kucukkaya
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut
- Alexandra Petukhova
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut
- John Onofrey
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut; Department of Urology, Yale University School of Medicine, New Haven, Connecticut
- Julius Chapiro
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut
45
Kyono T, Gilbert FJ, van der Schaar M. Improving Workflow Efficiency for Mammography Using Machine Learning. J Am Coll Radiol 2020; 17:56-63. [PMID: 31153798] [DOI: 10.1016/j.jacr.2019.05.012]
Abstract
OBJECTIVE The aim of this study was to determine whether machine learning could reduce the number of mammograms the radiologist must read by using a machine-learning classifier to correctly identify normal mammograms and to select the uncertain and abnormal examinations for radiological interpretation. METHODS Mammograms in a research data set from over 7,000 women who were recalled for assessment at six UK National Health Service Breast Screening Program centers were used. A convolutional neural network in conjunction with multitask learning was used to extract imaging features from mammograms that mimic the radiological assessment provided by a radiologist, the patient's nonimaging features, and pathology outcomes. A deep neural network was then used to concatenate and fuse multiple mammogram views to predict both a diagnosis and a recommendation of whether or not additional radiological assessment was needed. RESULTS Ten-fold cross-validation was used on 2,000 randomly selected patients from the data set; the remainder of the data set was used for convolutional neural network training. While maintaining an acceptable negative predictive value of 0.99, the proposed model was able to identify 34% (95% confidence interval, 25%-43%) and 91% (95% confidence interval: 88%-94%) of the negative mammograms for test sets with a cancer prevalence of 15% and 1%, respectively. CONCLUSION Machine learning was leveraged to successfully reduce the number of normal mammograms that radiologists need to read without degrading diagnostic accuracy.
Affiliation(s)
- Trent Kyono: Department of Computer Science, University of California Los Angeles, Los Angeles, California
- Fiona J Gilbert: Department of Radiology, University of Cambridge School of Clinical Medicine, Cambridge, United Kingdom; NIHR Cambridge Biomedical Research Center, Cambridge, United Kingdom
- Mihaela van der Schaar: Department of Computer Science, University of California Los Angeles, Los Angeles, California
46
New Frontiers: An Update on Computer-Aided Diagnosis for Breast Imaging in the Age of Artificial Intelligence. AJR Am J Roentgenol 2019; 212:300-307. [PMID: 30667309] [DOI: 10.2214/ajr.18.20392]
Abstract
OBJECTIVE The purpose of this article is to compare traditional and machine learning-based computer-aided detection (CAD) platforms in breast imaging, with a focus on mammography, to underscore the limitations of traditional CAD, and to highlight potential solutions in new CAD systems under development for the future. CONCLUSION CAD development for breast imaging is undergoing a paradigm shift based on vast improvements in computing power and the rapid emergence of advanced deep learning algorithms, heralding new systems that may hold real potential to improve clinical care.
47
Rubin M, Stein O, Turko NA, Nygate Y, Roitshtain D, Karako L, Barnea I, Giryes R, Shaked NT. TOP-GAN: Stain-free cancer cell classification using deep learning with a small training set. Med Image Anal 2019; 57:176-185. [DOI: 10.1016/j.media.2019.06.014]
48
Abdelhafiz D, Yang C, Ammar R, Nabavi S. Deep convolutional neural networks for mammography: advances, challenges and applications. BMC Bioinformatics 2019; 20:281. [PMID: 31167642] [PMCID: PMC6551243] [DOI: 10.1186/s12859-019-2823-4]
Abstract
BACKGROUND The limitations of traditional computer-aided detection (CAD) systems for mammography, the extreme importance of early detection of breast cancer, and the high impact of false diagnoses drive researchers to investigate deep learning (DL) methods for mammograms (MGs). Recent breakthroughs in DL, in particular convolutional neural networks (CNNs), have achieved remarkable advances in the medical field. Specifically, CNNs are used in mammography for lesion localization and detection, risk assessment, image retrieval, and classification tasks. CNNs also help radiologists provide more accurate diagnoses by delivering precise quantitative analysis of suspicious lesions. RESULTS In this survey, we conducted a detailed review of the strengths, limitations, and performance of the most recent CNN applications in analyzing MG images, summarizing 83 research studies that apply CNNs to various tasks in mammography. The survey focuses on identifying the best practices used in these studies to improve diagnostic accuracy and provides a deep insight into the CNN architectures used for various tasks. Furthermore, it describes the most common publicly available MG repositories and highlights their main features and strengths. CONCLUSIONS The mammography research community can use this survey as a basis for current and future studies. The comparison among common publicly available MG repositories guides the community in selecting the most appropriate database for their application(s). Moreover, this survey lists best practices that improve the performance of CNNs, including pre-processing of images and the use of multi-view images. Other techniques, such as transfer learning (TL), data augmentation, batch normalization, and dropout, are appealing solutions for reducing overfitting and increasing the generalization of CNN models. Finally, this survey identifies the research challenges and directions that require further investigation by the community.
Affiliation(s)
- Dina Abdelhafiz: Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA; The Informatics Research Institute (IRI), City of Scientific Research and Technological Application (SRTA-City), New Borg El-Arab, Egypt
- Clifford Yang: Department of Diagnostic Imaging, University of Connecticut Health Center, Farmington, CT 06030, USA
- Reda Ammar: Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Sheida Nabavi: Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
49
Jang HJ, Cho KO. Applications of deep learning for the analysis of medical data. Arch Pharm Res 2019; 42:492-504. [PMID: 31140082] [DOI: 10.1007/s12272-019-01162-9]
Abstract
Over the past decade, deep learning has demonstrated superior performance in solving many problems in various fields of medicine compared with other machine learning methods. To understand how deep learning has surpassed traditional machine learning techniques, in this review we briefly explore the basic learning algorithms underlying deep learning. In addition, the procedures for building deep learning-based classifiers for seizure electroencephalograms and gastric tissue slides are described as examples to demonstrate the simplicity and effectiveness of deep learning applications. Finally, we review the clinical applications of deep learning in radiology, pathology, and drug discovery, where it has been actively adopted. Given these great advantages, deep learning will be increasingly and widely utilized across medicine in the coming decades.
Affiliation(s)
- Hyun-Jong Jang: Department of Physiology, Department of Biomedicine & Health Sciences, Catholic Neuroscience Institute, College of Medicine, The Catholic University of Korea, Seoul 06591, South Korea
- Kyung-Ok Cho: Department of Pharmacology, Department of Biomedicine & Health Sciences, Catholic Neuroscience Institute, Institute of Aging and Metabolic Diseases, College of Medicine, The Catholic University of Korea, 222 Banpo-Daero, Seocho-Gu, Seoul 06591, South Korea
50
Houssami N, Kirkpatrick-Jones G, Noguchi N, Lee CI. Artificial Intelligence (AI) for the early detection of breast cancer: a scoping review to assess AI's potential in breast screening practice. Expert Rev Med Devices 2019; 16:351-362. [PMID: 30999781] [DOI: 10.1080/17434440.2019.1610387]
Abstract
INTRODUCTION Various factors are driving interest in the application of artificial intelligence (AI) for breast cancer (BC) detection, but it is unclear whether the evidence warrants large-scale use in population-based screening. AREAS COVERED We performed a scoping review, a structured evidence synthesis describing a broad research field, to summarize knowledge on AI evaluated for BC detection and to assess AI's readiness for adoption in BC screening. Studies were predominantly small retrospective studies based on highly selected image datasets that contained a high proportion of cancers (median BC proportion in datasets 26.5%) and used heterogeneous techniques to develop AI models; the range of estimated AUC (area under the ROC curve) for AI models was 69.2-97.8% (median AUC 88.2%). We identified various methodologic limitations, including use of non-representative imaging data for model training, limited validation in external datasets, potential bias in training data, and few comparative data for AI versus radiologists' interpretation of mammography screening. EXPERT OPINION Although contemporary AI models have reported generally good accuracy for BC detection, methodological concerns and evidence gaps exist that limit translation into clinical BC screening settings. These should be addressed in parallel with advancing AI techniques to render AI transferable to large-scale population-based screening.
Affiliation(s)
- Nehmat Houssami: Sydney School of Public Health (A27), Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Georgia Kirkpatrick-Jones: Sydney School of Public Health (A27), Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Naomi Noguchi: Sydney School of Public Health (A27), Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Christoph I Lee: Department of Radiology, University of Washington School of Medicine, Seattle, WA, USA; Department of Health Services, University of Washington School of Public Health, Seattle, WA, USA; Hutchinson Institute for Cancer Outcomes Research, Seattle, WA, USA