1. Chen J, Chen R, Qiu J, Yin J, Zhang L. [Identifying Novel Coronavirus Pneumonia With CT Images: A Deep Learning Approach With Detail Upsampling and Attention Guidance]. Sichuan Da Xue Xue Bao Yi Xue Ban 2024;55:455-460. doi: 10.12182/20240360605; PMID: 38645853; PMCID: PMC11026874.
Abstract
Objective: To construct a deep learning-based target detection method that helps radiologists rapidly diagnose lesions in CT images of patients with novel coronavirus pneumonia (NCP) by restoring detailed information and mining local information. Methods: We present a deep learning approach that integrates detail upsampling and attention guidance. A linear upsampling algorithm based on bicubic interpolation was adopted to improve the restoration of detailed information within feature maps during the upsampling phase. Additionally, a visual attention mechanism based on the vertical and horizontal spatial dimensions was embedded in the feature extraction module to enhance the object detection algorithm's ability to represent key information related to NCP lesions. Results: On the NCP dataset, the detection method based on the detail upsampling algorithm improved the recall rate by 1.07% over the baseline model, with AP50 reaching 85.14%. After the attention mechanism was embedded in the feature extraction module, the method achieved 86.13% AP50, 73.92% recall, and 90.37% accuracy, outperforming popular object detection models. Conclusion: Deep learning-based feature mining of CT images can further improve lesion detection. The proposed approach helps radiologists rapidly identify NCP lesions on CT images and provides an important clinical basis for early intervention and high-intensity monitoring of NCP patients.
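As a rough illustration of the two ideas in this abstract, the sketch below (not the authors' code; all channel sizes and shapes are assumed) bicubically upsamples a feature map and applies an attention block that pools along the vertical and horizontal spatial dimensions, in the spirit of coordinate attention:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HVAttention(nn.Module):
    """Attention built from height-pooled and width-pooled descriptors."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        mid = max(channels // reduction, 4)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                       # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (n, c, w, 1)
        y = F.relu(self.conv1(torch.cat([x_h, x_w], dim=2)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                       # row attention
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))   # column attention
        return x * a_h * a_w

feat = torch.randn(1, 64, 32, 32)
# Detail upsampling step: bicubic interpolation of the feature map.
up = F.interpolate(feat, scale_factor=2, mode="bicubic", align_corners=False)
out = HVAttention(64)(up)   # (1, 64, 64, 64)
```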
Affiliation(s)
- Junren Chen: School of Computer Science, Sichuan University, Chengdu 610065, China; West China Biomedical Big Data Center, West China Hospital/West China School of Medicine, Sichuan University, Chengdu 610041, China
- Rui Chen, Jiajun Qiu, Jin Yin, Lei Zhang: School of Computer Science, Sichuan University, Chengdu 610065, China
2. Ma Y, Peng Y. Mammogram mass segmentation and classification based on cross-view VAE and spatial hidden factor disentanglement. Phys Eng Sci Med 2024;47:223-238. doi: 10.1007/s13246-023-01359-9; PMID: 38150059.
Abstract
Breast masses are the most important clinical findings of breast carcinomas. Mass segmentation and classification in mammograms remain a crucial yet challenging topic in computer-aided diagnosis systems, as masses vary irregularly in shape, size, and texture. In this paper, we propose a new framework for mammogram mass classification and segmentation. Specifically, to utilize the complementary information within the mammographic cross-views, craniocaudal and mediolateral oblique, a cross-view based variational autoencoder (CV-VAE) combined with a spatial hidden factor disentanglement module is presented, in which the two views can be reconstructed from each other through two explicitly disentangled hidden factors: class-related (specified) and background-common (unspecified). The specified factor is not only classified into two categories, benign and malignant, by a newly introduced feature pyramid network-based mass classifier, but is also used to predict the mass mask label with a U-Net-like decoder. By integrating the two complementary modules, more discriminative morphological and semantic features can be learned to solve the mass classification and segmentation problems simultaneously. The proposed method is evaluated on the two most widely used public mammography datasets, CBIS-DDSM and INbreast, achieving Dice similarity coefficients (DSC) of 92.46% and 93.70% for segmentation and areas under the receiver operating characteristic curve (AUC) of 93.20% and 95.01% for classification, respectively. Compared with other state-of-the-art approaches, it gives competitive results.
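A minimal sketch of the latent-factor split behind the disentanglement module described above, assuming a toy encoder/decoder and arbitrary latent sizes (this is not the published CV-VAE):

```python
import torch
import torch.nn as nn

class SplitLatentVAE(nn.Module):
    """VAE whose latent code is split into a class-related ("specified")
    part and a background ("unspecified") part."""
    def __init__(self, z_spec=32, z_unspec=96):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(), nn.Flatten())
        feat = 64 * 16 * 16                          # for 64x64 inputs
        self.mu = nn.Linear(feat, z_spec + z_unspec)
        self.logvar = nn.Linear(feat, z_spec + z_unspec)
        self.dec = nn.Sequential(
            nn.Linear(z_spec + z_unspec, feat), nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Sigmoid())
        self.z_spec = z_spec

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        spec, unspec = z[:, :self.z_spec], z[:, self.z_spec:]
        return self.dec(z), spec, unspec, mu, logvar

x = torch.rand(2, 1, 64, 64)
recon, spec, unspec, mu, logvar = SplitLatentVAE()(x)
# `spec` would feed the mass classifier and a U-Net-like mask decoder;
# cross-view reconstruction would swap the views' latent codes here.
```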
Affiliation(s)
- Yingran Ma: College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China
- Yanjun Peng: College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266590, China; Shandong Province Key Laboratory of Wisdom Mining Information Technology, Shandong University of Science and Technology, Qingdao 266590, China
3. Yang B, Pan M, Feng K, Wu X, Yang F, Yang P. Identification of the feature genes involved in cytokine release syndrome in COVID-19. PLoS One 2024;19:e0296030. doi: 10.1371/journal.pone.0296030; PMID: 38165854; PMCID: PMC10760774.
Abstract
OBJECTIVE To screen feature genes involved in the cytokine release syndrome (CRS) of coronavirus disease 2019 (COVID-19). METHODS Datasets related to COVID-19 were retrieved from the Gene Expression Omnibus (GEO) database; differentially expressed genes (DEGs) related to CRS were analyzed with R software and a Venn diagram, and the biological processes and signaling pathways involving the DEGs were analyzed with GO and KEGG enrichment. Core genes were screened using the Betweenness and MCC algorithms. The GSE164805 and GSE171110 datasets were used to verify the expression levels of the core genes. Immune infiltration analysis was performed with the ssGSEA algorithm in the GSVA package. The DrugBank database was used to identify potential therapeutic drugs for the feature genes. RESULTS This study obtained 6950 DEGs, of which 971 overlapped with CRS disease genes (common genes). GO and KEGG enrichment showed that multiple biological processes and signaling pathways associated with the common genes were closely related to the inflammatory response. Furthermore, the analysis revealed that the transcription factors regulating these common genes are also involved in the inflammatory response. Screening the common genes with the Betweenness and MCC algorithms yielded seven key genes. Validation with the GSE164805 and GSE171110 datasets revealed significant differences between COVID-19 patients and normal controls in four core genes (feature genes): IL6R, TLR4, TLR2, and IFNG. The upregulated IL6R, TLR4, and TLR2 were mainly involved in the Toll-like receptor signaling pathway of the inflammatory response, while the downregulated IFNG primarily participated in the necroptosis and JAK-STAT signaling pathways. Moreover, immune infiltration analysis indicated that higher expression of these genes was associated with infiltration of the immune cells that mediate the inflammatory response. In addition, potential therapeutic drugs for these four feature genes were identified via the DrugBank database. CONCLUSION IL6R, TLR4, TLR2, and IFNG may be pathogenic genes and potential therapeutic targets for the CRS associated with COVID-19.
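The screening logic (the study itself used R) can be illustrated in Python with placeholder gene lists and a toy PPI graph; only the intersection and betweenness-ranking steps are sketched:

```python
import networkx as nx

degs = {"IL6R", "TLR4", "TLR2", "IFNG", "STAT1", "CXCL10"}   # from GEO analysis (placeholder)
crs_genes = {"IL6R", "TLR4", "TLR2", "IFNG", "IL1B"}          # disease genes (placeholder)
common = degs & crs_genes                                     # Venn intersection

# Placeholder PPI edges; real edges would come from a PPI database.
ppi = nx.Graph([("IL6R", "TLR4"), ("TLR4", "TLR2"),
                ("TLR2", "IFNG"), ("IFNG", "IL6R")])
sub = ppi.subgraph(common)
centrality = nx.betweenness_centrality(sub)                   # Betweenness ranking
hubs = sorted(centrality, key=centrality.get, reverse=True)
print(hubs)
```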
Affiliation(s)
- Bing Yang, Meijun Pan, Kai Feng, Xue Wu, Fang Yang, Peng Yang: The Second Affiliated Hospital, Guizhou University of Traditional Chinese Medicine, Guiyang, China
4. Liao T, Li L, Ouyang R, Lin X, Lai X, Cheng G, Ma J. Classification of asymmetry in mammography via the DenseNet convolutional neural network. Eur J Radiol Open 2023;11:100502. doi: 10.1016/j.ejro.2023.100502; PMID: 37448557; PMCID: PMC10336404.
Abstract
Purpose To investigate the effectiveness of a deep learning system based on the DenseNet convolutional neural network in diagnosing benign and malignant asymmetric lesions in mammography. Methods Clinical and image data from 460 women aged 23-82 years (47.57 ± 8.73 years) with asymmetric lesions who underwent mammography at Shenzhen People's Hospital, Shenzhen Luohu District People's Hospital, and Shenzhen Hospital of Peking University from December 2019 to December 2020 were retrospectively analyzed. Two senior radiologists, two junior radiologists, and the DL system each read the mammographic images of the 460 patients and recorded the BI-RADS classification of the asymmetric lesions. The area under the receiver operating characteristic (ROC) curve (AUC) was used to evaluate diagnostic efficacy, and differences between AUCs were tested with the DeLong method. Results The specificity (0.909 vs. 0.835, 0.790, χ2=8.21 and 17.22, p<0.05) and precision (0.872 vs. 0.763, 0.726, χ2=9.23 and 5.22, p<0.05) of the DL system in diagnosing benign and malignant asymmetric lesions were higher than those of junior radiologists A and B, with a statistically significant difference between AUCs (0.778 vs. 0.579, 0.564, Z = 4.033 and 4.460, p<0.05). However, the AUC of the DL system was lower than that of senior radiologists A and B (0.778 vs. 0.904, 0.862, Z = 3.191 and 2.167, p<0.05). Conclusions The DL system based on the DenseNet convolutional neural network has high diagnostic efficiency and can help junior radiologists evaluate benign and malignant asymmetric lesions more accurately. It can also improve diagnostic accuracy and reduce missed diagnoses caused by junior radiologists' inexperience.
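The study compares AUCs with the DeLong method; as a simple stand-in, the sketch below estimates the AUC difference between two readers with a paired bootstrap on synthetic scores (all data are placeholders, and the statistical test differs from the paper's):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 460)                     # benign/malignant labels (synthetic)
s_dl = y * 0.6 + rng.normal(0, 0.4, 460)        # DL system scores (synthetic)
s_jr = y * 0.3 + rng.normal(0, 0.5, 460)        # junior reader scores (synthetic)

diffs = []
for _ in range(2000):
    idx = rng.integers(0, 460, 460)             # resample patients with replacement
    if len(set(y[idx])) < 2:                    # need both classes for an AUC
        continue
    diffs.append(roc_auc_score(y[idx], s_dl[idx]) -
                 roc_auc_score(y[idx], s_jr[idx]))
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC difference 95% CI: [{lo:.3f}, {hi:.3f}]")
```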
Affiliation(s)
- Tingting Liao, Lin Li, Rushan Ouyang: Department of Radiology, The Second Clinical Medical College of Jinan University, Shenzhen 518020, China
- Xiaohui Lin, Jie Ma: Department of Radiology, Shenzhen People's Hospital, The Second Clinical Medical College, Jinan University, Shenzhen 518020, China
- Xiaohui Lai: Department of Radiology, Luohu People's Hospital, Shenzhen 518005, China
- Guanxun Cheng: Department of Radiology, Peking University Shenzhen Hospital, Shenzhen 518036, China
5. Liu Z, Li H, Li W, Zhang F, Ouyang W, Wang S, Zhi A, Pan X. Development of an Expert-Level Right Ventricular Abnormality Detection Algorithm Based on Deep Learning. Interdiscip Sci 2023;15:653-662. doi: 10.1007/s12539-023-00581-z; PMID: 37470945.
Abstract
PURPOSE Studies of the right ventricle (RV) are inadequate, and specific diagnostic algorithms still need to be improved. This study was designed to develop and verify a deep learning algorithm based on imaging and clinical data to detect RV abnormalities. METHODS The Automated Cardiac Diagnosis Challenge dataset includes 20 subjects with RV abnormalities (an RV cavity volume higher than 110 mL/m² or an RV ejection fraction lower than 40%) and 20 normal subjects, all of whom underwent cardiac MRI. The subjects were separated into training and validation sets in a ratio of 7:3 and modeled with a deep-learning neural network and six machine-learning algorithms. Eight MRI specialists from multiple centers independently determined whether each subject in the validation group had RV abnormalities. Model performance was evaluated based on AUC, accuracy, recall, sensitivity, and specificity. Furthermore, a preliminary assessment of patient disease risk was performed based on clinical information using a nomogram. RESULTS The deep-learning neural network outperformed the other six machine-learning algorithms, with an AUC of 1 (95% confidence interval: 1-1) in both the training and validation groups. The algorithm surpassed most human experts (87.5%). In addition, the nomogram model could evaluate a population with a disease risk of 0.2-0.8. CONCLUSIONS A deep-learning algorithm could effectively identify patients with RV abnormalities. This AI algorithm developed specifically for right ventricular abnormalities could improve the detection of RV abnormalities at all levels of care and facilitate timely diagnosis and treatment of related diseases. In addition, this study is the first to validate the algorithm's ability to classify RV abnormalities by comparing it with human experts.
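A hedged sketch of the comparison protocol (70/30 split, AUC scoring) with a small subset of stand-in classifiers and synthetic features; it does not reproduce the paper's models or data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# 40 subjects, 20 synthetic imaging/clinical features.
X, y = make_classification(n_samples=40, n_features=20, random_state=0)
Xtr, Xva, ytr, yva = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "neural net": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "svm": SVC(probability=True),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, m in models.items():
    m.fit(Xtr, ytr)
    print(name, roc_auc_score(yva, m.predict_proba(Xva)[:, 1]))
```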
Affiliation(s)
- Zeye Liu, Hang Li, Fengwen Zhang, Wenbin Ouyang, Shouzheng Wang, Xiangbin Pan: Department of Structural Heart Disease, National Center for Cardiovascular Disease, China and Fuwai Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100037, China; National Health Commission Key Laboratory of Cardiovascular Regeneration Medicine, Beijing 100037, China; Key Laboratory of Innovative Cardiovascular Devices, Chinese Academy of Medical Sciences, Beijing 100037, China; National Clinical Research Center for Cardiovascular Diseases, Fuwai Hospital, Chinese Academy of Medical Sciences, Beijing 100037, China
- Wenchao Li: Pediatric Cardiac Surgery, Henan Provincial People's Hospital, Huazhong Fuwai Hospital, Zhengzhou University People's Hospital, Zhengzhou 450000, China
- Aihua Zhi: Department of Medical Imaging, Fuwai Yunnan Cardiovascular Hospital, Kunming 650000, China
6. You C, Shen Y, Sun S, Zhou J, Li J, Su G, Michalopoulou E, Peng W, Gu Y, Guo W, Cao H. Artificial intelligence in breast imaging: Current situation and clinical challenges. Exploration (Beijing) 2023;3:20230007. doi: 10.1002/exp.20230007; PMID: 37933287; PMCID: PMC10582610.
Abstract
Breast cancer ranks among the most prevalent malignant tumours and is the primary contributor to cancer-related deaths in women. Breast imaging is essential for screening, diagnosis, and therapeutic surveillance. With the increasing demand for precision medicine, the heterogeneous nature of breast cancer makes it necessary to deeply mine and rationally utilize the tremendous amount of breast imaging information. With the rapid advancement of computer science, artificial intelligence (AI) has shown great advantages in processing and mining image information. Therefore, a growing number of scholars have begun to focus on and research the utility of AI in breast imaging. Here, an overview of breast imaging databases and recent advances in AI research is provided, the challenges and problems in this field are discussed, and constructive advice is offered for ongoing scientific development from the perspective of the National Natural Science Foundation of China.
Affiliation(s)
- Chao You, Yiyuan Shen, Shiyun Sun, Jiayin Zhou, Jiawei Li, Weijun Peng, Yajia Gu: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Guanhua Su: Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Department of Breast Surgery, Key Laboratory of Breast Cancer in Shanghai, Fudan University Shanghai Cancer Center, Shanghai, China
- Weisheng Guo: Department of Minimally Invasive Interventional Radiology, Key Laboratory of Molecular Target and Clinical Pharmacology, School of Pharmaceutical Sciences and The Second Affiliated Hospital, Guangzhou Medical University, Guangzhou, China
- Heqi Cao: Department of Health Sciences, National Natural Science Foundation of China, Beijing, China
7. Jiang X, Hu Z, Wang S, Zhang Y. Deep Learning for Medical Image-Based Cancer Diagnosis. Cancers (Basel) 2023;15:3608. doi: 10.3390/cancers15143608; PMID: 37509272; PMCID: PMC10377683.
Abstract
(1) Background: Applying deep learning to cancer diagnosis based on medical images is a research hotspot in artificial intelligence and computer vision. Given the rapid development of deep learning methods, the high accuracy and timeliness that cancer diagnosis requires, and the inherent particularity and complexity of medical imaging, a comprehensive review of relevant studies is necessary to help readers understand the current research status and ideas. (2) Methods: Five radiological imaging modalities (X-ray, ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET)), along with histopathological images, are reviewed in this paper. The basic architecture of deep learning and classical pretrained models are comprehensively reviewed. In particular, advanced neural networks emerging in recent years, including transfer learning, ensemble learning (EL), graph neural networks, and vision transformers (ViT), are introduced. Overfitting-prevention methods are summarized, including batch normalization, dropout, weight initialization, and data augmentation. The applications of deep learning in medical image-based cancer analysis are sorted out. (3) Results: Deep learning has achieved great success in medical image-based cancer diagnosis, showing good results in image classification, reconstruction, detection, segmentation, registration, and synthesis. However, the lack of high-quality labeled datasets limits the role of deep learning, and challenges remain in rare cancer diagnosis, multi-modal image fusion, model explainability, and generalization. (4) Conclusions: More public standard databases for cancer are needed. Pre-trained models based on deep neural networks can still be improved, and special attention should be paid to multimodal data fusion and the supervised paradigm. Technologies such as ViT, ensemble learning, and few-shot learning will bring surprises to cancer diagnosis based on medical images.
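The overfitting-prevention methods named above can be shown together in one toy PyTorch classifier (assumes torchvision; all sizes are arbitrary, purely for illustration):

```python
import torch
import torch.nn as nn
from torchvision import transforms

augment = transforms.Compose([              # data augmentation
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
])

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1),
    nn.BatchNorm2d(16),                     # batch normalization
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Dropout(0.5),                        # dropout
    nn.Linear(16 * 16 * 16, 2),
)

def init_weights(m):                        # explicit weight initialization
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.kaiming_normal_(m.weight, nonlinearity="relu")

model.apply(init_weights)
x = augment(torch.rand(4, 1, 32, 32))       # augmented batch of fake images
print(model(x).shape)                       # torch.Size([4, 2])
```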
Affiliation(s)
- Xiaoyan Jiang, Zuojin Hu: School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Shuihua Wang, Yudong Zhang: School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
8. Razali NF, Isa IS, Sulaiman SN, Abdul Karim NK, Osman MK. CNN-Wavelet scattering textural feature fusion for classifying breast tissue in mammograms. Biomed Signal Process Control 2023. doi: 10.1016/j.bspc.2023.104683.
9. Jain S, Naicker D, Raj R, Patel V, Hu YC, Srinivasan K, Jen CP. Computational Intelligence in Cancer Diagnostics: A Contemporary Review of Smart Phone Apps, Current Problems, and Future Research Potentials. Diagnostics (Basel) 2023;13:1563. doi: 10.3390/diagnostics13091563; PMID: 37174954; PMCID: PMC10178016.
Abstract
Cancer is a dangerous and sometimes life-threatening disease that can have several negative consequences for the body; it is a leading cause of mortality and is becoming increasingly difficult to detect. Each form of cancer has its own set of traits, symptoms, and therapies, and early identification and management are important for a positive prognosis. Doctors utilize a variety of approaches to detect cancer, depending on the kind and location of the tumor. Imaging tests such as X-rays and Computed Tomography, Magnetic Resonance Imaging, and Positron Emission Tomography (PET) scans, which can provide precise pictures of the body's internal structures to spot abnormalities, are some of the tools used to diagnose cancer. This article evaluates computational-intelligence approaches and aims to guide future work by focusing on the relevance of machine learning and deep learning models such as K-Nearest Neighbour (KNN), Support Vector Machine (SVM), Naïve Bayes, Decision Tree, Deep Neural Network, and Deep Boltzmann Machine, among others. It evaluates information from 114 studies using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR). The article explores the advantages and disadvantages of each model and outlines how they are used in cancer diagnosis. In conclusion, artificial intelligence shows significant potential to enhance cancer imaging and diagnosis, although a number of clinical issues still need to be addressed.
Affiliation(s)
- Somit Jain, Dharmik Naicker, Ritu Raj, Vedanshu Patel, Kathiravan Srinivasan: School of Computer Science and Engineering, Vellore Institute of Technology, Vellore 632014, India
- Yuh-Chung Hu: Department of Mechanical and Electromechanical Engineering, National ILan University, Yilan 26047, Taiwan
- Chun-Ping Jen: School of Dentistry, College of Dental Medicine, Kaohsiung Medical University, Kaohsiung 80708, Taiwan; Department of Mechanical Engineering and Advanced Institute of Manufacturing for High-Tech Innovations, National Chung Cheng University, Chia-Yi 62102, Taiwan
10. Razali NF, Isa IS, Sulaiman SN, Abdul Karim NK, Osman MK, Che Soh ZH. Enhancement Technique Based on the Breast Density Level for Mammogram for Computer-Aided Diagnosis. Bioengineering (Basel) 2023;10:153. doi: 10.3390/bioengineering10020153; PMID: 36829647; PMCID: PMC9952042.
Abstract
Mass detection in mammograms is limited when a mass overlaps denser fibroglandular breast regions. In addition, varying breast density levels can decrease the learning system's ability to extract sufficient feature descriptors and may result in lower accuracy. Therefore, this study proposes a textural image enhancement technique, named Spatial-based Breast Density Enhancement for Mass Detection (SbBDEM), to boost the textural features of the overlapped mass region according to the breast density level. The approach determines the optimal exposure threshold of the images' lower contrast limit and optimizes the parameters by selecting the best intensity factor guided by the best Blind/Reference-less Image Spatial Quality Evaluator (BRISQUE) scores, separately for the dense and non-dense breast classes, prior to training. Meanwhile, a modified You Only Look Once v3 (YOLOv3) architecture is employed for mass detection, specifically assigning an extra number of higher-valued anchor boxes to the shallower detection head using the enhanced images. The experimental results show that using SbBDEM prior to training promotes superior performance, with a 17.24% improvement in mean Average Precision (mAP) over training on non-enhanced images for mass detection, 94.41% accuracy for mass segmentation, and 96% accuracy for benign and malignant mass classification. Enhancing mammogram images based on breast density is proven to increase the overall system's performance and can aid an improved clinical diagnosis process.
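A sketch of the enhancement search described above: try several contrast settings and keep the one with the best no-reference quality score, per density class. The paper scores candidates with BRISQUE; the `quality` function below is a labeled placeholder standing in for it (assumes scikit-image):

```python
import numpy as np
from skimage import exposure

def quality(img):
    # Placeholder score standing in for BRISQUE; here we simply reward
    # mid-range contrast, purely for illustration.
    return -abs(img.std() - 0.2)

def enhance(img, clip_limits=(0.005, 0.01, 0.02, 0.04)):
    """Try several CLAHE clip limits and keep the best-scoring result."""
    candidates = [exposure.equalize_adapthist(img, clip_limit=c)
                  for c in clip_limits]
    return max(candidates, key=quality)

mammo = np.random.rand(256, 256)      # stand-in for a mammogram patch in [0, 1]
best = enhance(mammo)                 # run separately per density class
```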
Affiliation(s)
- Noor Fadzilah Razali, Iza Sazanita Isa, Muhammad Khusairi Osman, Zainal Hisham Che Soh: Centre for Electrical Engineering Studies, Universiti Teknologi MARA, Cawangan Pulau Pinang, Permatang Pauh Campus, Bukit Mertajam 13500, Pulau Pinang, Malaysia
- Siti Noraini Sulaiman: Centre for Electrical Engineering Studies, Universiti Teknologi MARA, Cawangan Pulau Pinang, Permatang Pauh Campus, Bukit Mertajam 13500, Pulau Pinang, Malaysia; Integrative Pharmacogenomics Institute (iPROMISE), Universiti Teknologi MARA Cawangan Selangor, Puncak Alam Campus, Puncak Alam 42300, Selangor, Malaysia
- Noor Khairiah Abdul Karim: Department of Biomedical Imaging, Advanced Medical and Dental Institute, Universiti Sains Malaysia Bertam, Kepala Batas 13200, Pulau Pinang, Malaysia; Breast Cancer Translational Research Programme (BCTRP), Advanced Medical and Dental Institute, Universiti Sains Malaysia Bertam, Kepala Batas 13200, Pulau Pinang, Malaysia
11. Yan Y, Jiang W, Zhou Y, Yu Y, Huang L, Wan S, Zheng H, Tian M, Wu H, Huang L, Wu L, Cheng S, Gao Y, Mao J, Wang Y, Cong Y, Deng Q, Shi X, Yang Z, Miao Q, Zheng B, Wang Y, Yang Y. Evaluation of a computer-aided diagnostic model for corneal diseases by analyzing in vivo confocal microscopy images. Front Med (Lausanne) 2023;10:1164188. doi: 10.3389/fmed.2023.1164188; PMID: 37153082; PMCID: PMC10157182.
Abstract
Objective To automatically and rapidly recognize the layers of corneal images obtained with in vivo confocal microscopy (IVCM) and classify them into normal and abnormal images, a computer-aided diagnostic model was developed and tested based on deep learning to reduce physicians' workload. Methods A total of 19,612 corneal images were retrospectively collected from 423 patients who underwent IVCM between January 2021 and August 2022 at Renmin Hospital of Wuhan University and Zhongnan Hospital of Wuhan University (both Wuhan, China). Images were reviewed and categorized by three corneal specialists before training and testing the models, which included a layer-recognition model (epithelium, Bowman's membrane, stroma, and endothelium) and a diagnostic model, to identify the layers of corneal images and distinguish normal from abnormal images. In total, 580 database-independent IVCM images were used in a human-machine competition to assess the speed and accuracy of image recognition by four ophthalmologists and the artificial intelligence (AI) model. To evaluate the model's efficacy, eight trainees were employed to recognize these 580 images both with and without model assistance, and the results of the two evaluations were analyzed to explore the effects of model assistance. Results The accuracy of the model reached 0.914, 0.957, 0.967, and 0.950 for recognizing the epithelium, Bowman's membrane, stroma, and endothelium in the internal test dataset, respectively, and 0.961, 0.932, 0.945, and 0.959 for recognizing normal/abnormal images at each layer, respectively. In the external test dataset, the accuracy of corneal layer recognition was 0.960, 0.965, 0.966, and 0.964, respectively, and the accuracy of normal/abnormal image recognition was 0.983, 0.972, 0.940, and 0.982, respectively. In the human-machine competition, the model achieved an accuracy of 0.929, which was similar to that of specialists and higher than that of senior physicians, and its recognition speed was 237 times faster than that of specialists. With model assistance, the accuracy of trainees increased from 0.712 to 0.886. Conclusion A computer-aided diagnostic model based on deep learning was developed for IVCM images that rapidly recognizes the layers of corneal images and classifies them as normal or abnormal. This model can increase the efficacy of clinical diagnosis and assist physicians in training and learning for clinical purposes.
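The two-stage inference described above can be sketched as routing: a layer-recognition model picks the corneal layer, then that layer's normal/abnormal classifier is applied (untrained placeholder models and an assumed input size, not the study's networks):

```python
import torch
import torch.nn as nn

layers = ["epithelium", "bowman", "stroma", "endothelium"]
# Stage 1: layer-recognition model (placeholder linear classifier).
layer_net = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 4))
# Stage 2: one normal/abnormal classifier per layer (placeholders).
abnormal_nets = {name: nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2))
                 for name in layers}

img = torch.rand(1, 1, 64, 64)                       # fake IVCM frame
layer = layers[layer_net(img).argmax(1).item()]      # recognize the layer
verdict = ["normal", "abnormal"][abnormal_nets[layer](img).argmax(1).item()]
print(layer, verdict)
```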
Affiliation(s)
- Yulin Yan, Weiyan Jiang, Yiwen Zhou, Yi Yu, Linying Huang, Shanshan Wan, Hongmei Zheng, Miao Tian, Simin Cheng, Yuelan Gao, Jiewen Mao, Yujin Wang, Yuyu Cong, Qian Deng, Xiaoshuo Shi, Zixian Yang, Qingmei Miao, Yanning Yang: Department of Ophthalmology, Renmin Hospital of Wuhan University, Wuhan, Hubei Province, China
- Huiling Wu, Li Huang, Lianlian Wu: Department of Gastroenterology, Renmin Hospital of Wuhan University, Wuhan, Hubei, China
- Biqing Zheng: Department of Resources and Environmental Sciences, Wuhan University, Wuhan, Hubei Province, China
- Yujing Wang: Department of Ophthalmology, Zhongnan Hospital of Wuhan University, Wuhan, Hubei, China
12. Zhang J, Yang X, Chen J, Han J, Chen X, Fan Y, Zheng H. Construction of a diagnostic classifier for cervical intraepithelial neoplasia and cervical cancer based on XGBoost feature selection and random forest model. J Obstet Gynaecol Res 2023;49:296-303. doi: 10.1111/jog.15458; PMID: 36220631.
Abstract
BACKGROUND The pathological phenotype of early-stage cervical cancer (CC) is similar to that of cervical intraepithelial neoplasia (CIN), which poses a challenge for the diagnosis of cervical precancerous lesions. Meanwhile, existing diagnostic methods have a certain subjectivity and limitations, leaving the possibility of misdiagnosis or missed diagnosis. Hence, methods are needed to assist the diagnosis of CC and CIN. METHODS Based on CIN and CC data from the Gene Expression Omnibus (GEO) database, the eXtreme Gradient Boosting (XGBoost) algorithm was used to screen feature genes distinguishing CIN from CC for constructing the classifier. An incremental feature selection (IFS) curve was used for further screening. The classifier's reliability was validated using principal component analysis (PCA) dimensionality reduction and heat maps of gene expression. Then, the differentially expressed genes of CIN and CC were intersected with the classifier genes. Genes in the intersection were used as seeds for protein-protein interaction network construction and random walk with restart analysis, and the genes with the top 50 affinity coefficients were selected for Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses to identify biological functions that differ between CIN and CC. RESULTS The peripheral blood gene expression data of CIN and CC were analyzed, and seven feature genes were screened. Using these genes for classifier construction, IFS curve screening revealed that a three-feature-gene classifier built with the random forest model performed best. PCA dimensionality reduction and gene expression heat maps showed that the three-gene classifier could effectively distinguish CIN from CC. CONCLUSION A three-gene diagnostic classifier can effectively distinguish CIN patients from CC patients and provides a reference for the clinical diagnosis of early CC.
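A minimal sketch of the XGBoost-ranking plus incremental-feature-selection pipeline with a random forest, on synthetic data (assumes the `xgboost` package; not the study's genes or parameters):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# 60 samples x 50 "genes", a few of them informative.
X, y = make_classification(n_samples=60, n_features=50, n_informative=5,
                           random_state=0)
# Rank features by XGBoost importance, most important first.
rank = np.argsort(XGBClassifier(n_estimators=100, verbosity=0)
                  .fit(X, y).feature_importances_)[::-1]

best_k, best_score = 0, 0.0
for k in range(1, 11):                       # incremental feature selection
    score = cross_val_score(RandomForestClassifier(random_state=0),
                            X[:, rank[:k]], y, cv=5).mean()
    if score > best_score:
        best_k, best_score = k, score
print(best_k, best_score)                    # feature count at the IFS peak
```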
Affiliation(s)
- Jing Zhang, Xiuqing Yang, Jia Chen, Jing Han, Xiaofeng Chen, Yueping Fan, Hui Zheng: Department of Gynaecology and Obstetrics, Jiangsu Xiangshui Hospital of Chinese Medicine, Yancheng, Jiangsu, China
13. Ovarian tumor diagnosis using deep convolutional neural networks and a denoising convolutional autoencoder. Sci Rep 2022;12:17024. doi: 10.1038/s41598-022-20653-2; PMID: 36220853; PMCID: PMC9554195.
Abstract
Discrimination of ovarian tumors is necessary for proper treatment. In this study, we developed a convolutional neural network model with a convolutional autoencoder (CNN-CAE) to classify ovarian tumors. A total of 1613 ultrasound images of ovaries with known pathological diagnoses were pre-processed and augmented for deep learning analysis. We designed a CNN-CAE model that removes unnecessary information (e.g., calipers and annotations) from ultrasound images and classifies ovaries into five classes. We used fivefold cross-validation to evaluate the performance of the CNN-CAE model in terms of accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC). Gradient-weighted class activation mapping (Grad-CAM) was applied to visualize and qualitatively verify the CNN-CAE model's results. In classifying normal versus ovarian tumors, the CNN-CAE model showed 97.2% accuracy, 97.2% sensitivity, and 0.9936 AUC with the DenseNet121 CNN architecture. In distinguishing malignant ovarian tumors, the CNN-CAE model showed 90.12% accuracy, 86.67% sensitivity, and 0.9406 AUC with the DenseNet161 CNN architecture. Grad-CAM showed that the CNN-CAE model recognizes valid texture and morphology features in the ultrasound images and classifies ovarian tumors from these features. CNN-CAE is a feasible diagnostic tool capable of robustly classifying ovarian tumors by eliminating marks on ultrasound images, and it demonstrates important application value in clinical settings.
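The mark-removal component can be sketched as a small convolutional autoencoder trained on marked/unmarked image pairs; the architecture below is illustrative, not the authors', and the tensors are random stand-ins:

```python
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    """Toy denoising autoencoder: encode a marked frame, decode a clean one."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

ae = ConvAE()
marked = torch.rand(2, 1, 128, 128)        # frames with caliper marks (fake)
clean_target = torch.rand(2, 1, 128, 128)  # paired unmarked frames (fake)
loss = nn.functional.mse_loss(ae(marked), clean_target)
loss.backward()                            # the cleaned output then feeds the CNN classifier
```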
14. An integrated framework for breast mass classification and diagnosis using stacked ensemble of residual neural networks. Sci Rep 2022;12:12259. doi: 10.1038/s41598-022-15632-6; PMID: 35851592; PMCID: PMC9293883.
Abstract
A computer-aided diagnosis (CAD) system requires automated stages of tumor detection, segmentation, and classification, integrated sequentially into one framework to assist radiologists with a final diagnosis decision. In this paper, we introduce the final step of breast mass classification and diagnosis using a stacked ensemble of residual neural network (ResNet) models (i.e., ResNet50V2, ResNet101V2, and ResNet152V2). The work addresses the tasks of classifying the detected and segmented breast masses as malignant or benign, diagnosing the Breast Imaging Reporting and Data System (BI-RADS) assessment category with a score from 2 to 6, and classifying the shape as oval, round, lobulated, or irregular. The proposed methodology was evaluated on two publicly available datasets, the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) and INbreast, and additionally on a private dataset. Comparative experiments were conducted on the individual models and an average ensemble of models with an XGBoost classifier. Qualitative and quantitative results show that the proposed model achieved better performance for (1) pathology classification, with accuracies of 95.13%, 99.20%, and 95.88%; (2) BI-RADS category classification, with accuracies of 85.38%, 99%, and 96.08%, respectively, on CBIS-DDSM, INbreast, and the private dataset; and (3) shape classification, with 90.02% accuracy on the CBIS-DDSM dataset. Our results demonstrate that the proposed integrated framework can benefit from all automated stages to outperform the latest deep learning methodologies.
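A sketch of the stacking step: the three base models' malignancy probabilities (simulated here) are concatenated and fed to an XGBoost meta-classifier (assumes the `xgboost` package; in practice the columns would come from ResNet50V2, ResNet101V2, and ResNet152V2 inference):

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                         # benign / malignant labels
# Simulated per-model malignancy probabilities, weakly informative:
base_probs = np.stack([np.clip(y + rng.normal(0, 0.6, 200), 0, 1)
                       for _ in range(3)], axis=1)  # shape (200, 3)

meta = XGBClassifier(n_estimators=50, max_depth=2, verbosity=0)
meta.fit(base_probs[:150], y[:150])                 # train the meta-learner
print("holdout acc:", (meta.predict(base_probs[150:]) == y[150:]).mean())
```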
15. Dumortier L, Guépin F, Delignette-Muller ML, Boulocher C, Grenier T. Deep learning in veterinary medicine, an approach based on CNN to detect pulmonary abnormalities from lateral thoracic radiographs in cats. Sci Rep 2022;12:11418. doi: 10.1038/s41598-022-14993-2; PMID: 35794167; PMCID: PMC9258008.
Abstract
Thoracic radiography (TR) is a complementary exam widely used in small animal medicine that requires sharp analysis to take full advantage of Radiographic Pulmonary Patterns (RPP). Although promising advances have been made in deep learning for veterinary imaging, the development of a Convolutional Neural Network (CNN) to detect RPP specifically from feline TR images has not been investigated. Here, a CNN based on ResNet50V2 and pre-trained on ImageNet is first fine-tuned on human chest X-rays and then fine-tuned again on 500 annotated TR images from the veterinary campus of VetAgro Sup (Lyon, France). The impact of manual segmentation of the TR's intrathoracic area and of a contrast-enhancement method on the CNN's performance was compared. To improve classification performance, 200 networks were trained on random shuffles of the training and validation sets. A voting approach over these 200 networks trained on segmented TR images produced the best classification performance, achieving mean Accuracy, F1-Score, Specificity, Positive Predictive Value and Sensitivity of 82%, 85%, 75%, 81% and 88%, respectively, on the test set. Finally, the classification schemes were discussed in the light of an ensemble method of class activation maps, confirming that the proposed approach is helpful for veterinarians.
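The voting step can be sketched as a majority vote over many networks' predictions (five simulated networks stand in for the study's 200; the predictions are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)
# Each row holds one trained network's binary predictions on the test set.
preds = rng.integers(0, 2, size=(5, 100))        # 5 networks x 100 images
final = (preds.mean(axis=0) >= 0.5).astype(int)  # majority label per image
print(final[:10])
```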
16. Ai Z, Huang X, Feng J, Wang H, Tao Y, Zeng F, Lu Y. FN-OCT: Disease Detection Algorithm for Retinal Optical Coherence Tomography Based on a Fusion Network. Front Neuroinform 2022;16:876927. doi: 10.3389/fninf.2022.876927; PMID: 35784186; PMCID: PMC9243322.
Abstract
Optical coherence tomography (OCT) is a tomographic technique that has developed rapidly and shown great potential in recent years, playing an increasingly important role in retinopathy diagnosis. At present, owing to the uneven distribution of medical resources across regions, the uneven proficiency of doctors in grassroots and remote areas, and the development needs of rare disease diagnosis and precision medicine, artificial intelligence technology based on deep learning can provide fast, accurate, and effective solutions for the recognition and diagnosis of retinal OCT images. To prevent vision damage and blindness caused by the delayed discovery of retinopathy, a fusion network (FN)-based retinal OCT classification algorithm (FN-OCT) is proposed in this paper to improve upon the adaptability and accuracy of traditional classification algorithms. The InceptionV3, Inception-ResNet, and Xception deep learning algorithms are used as base classifiers, a convolutional block attention module (CBAM) is added after each base classifier, and three different fusion strategies are used to merge the base classifiers' predictions into the final output (choroidal neovascularization (CNV), diabetic macular oedema (DME), drusen, or normal). The results show that on the UCSD common retinal OCT dataset (108,312 OCT images from 4,686 patients), the prediction accuracy of FN-OCT is 5.3% higher than that of the InceptionV3 network model (accuracy = 98.7%, area under the curve (AUC) = 99.1%). The predictive accuracy and AUC achieved on an external dataset for the classification of retinal OCT diseases are 92% and 94.5%, respectively, and gradient-weighted class activation mapping (Grad-CAM) is used as a visualization tool to verify the effectiveness of the proposed FNs. These findings indicate that the developed fusion algorithm can significantly improve classifier performance while providing a powerful tool and theoretical support for assisting the diagnosis of retinal OCT.
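The fusion strategies can be illustrated with placeholder per-class probabilities from three base models; a simple average and a validation-weighted average (weights assumed) are shown as two of the possible merging schemes:

```python
import numpy as np

# Placeholder softmax outputs over (CNV, DME, drusen, normal).
p_inception = np.array([[0.7, 0.1, 0.1, 0.1]])
p_incresnet = np.array([[0.6, 0.2, 0.1, 0.1]])
p_xception  = np.array([[0.5, 0.3, 0.1, 0.1]])

# Strategy 1: simple average of base-classifier probabilities.
avg = (p_inception + p_incresnet + p_xception) / 3
# Strategy 2: weighted average, e.g. weights from validation performance.
weights = np.array([0.4, 0.35, 0.25])
weighted = np.tensordot(weights, np.stack([p_inception, p_incresnet,
                                           p_xception]), axes=1)
print(avg.argmax(1), weighted.argmax(1))   # fused class decisions
```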
Affiliation(s)
- Zhuang Ai, Yaping Lu: Department of Research and Development, Sinopharm Genomics Technology Co., Ltd., Jiangsu, China
- Xuan Huang: Department of Ophthalmology, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China; Medical Research Center, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Jing Feng, Hui Wang, Yong Tao: Department of Ophthalmology, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Fanxin Zeng: Department of Clinical Research Center, Dazhou Central Hospital, Sichuan, China
17. Zhang C, Zhao J, Zhu Z, Li Y, Li K, Wang Y, Zheng Y. Applications of Artificial Intelligence in Myopia: Current and Future Directions. Front Med (Lausanne) 2022;9:840498. doi: 10.3389/fmed.2022.840498; PMID: 35360739; PMCID: PMC8962670.
Abstract
With the continuous development of computer technology, big data acquisition, and imaging methods, the application of artificial intelligence (AI) in medical fields is expanding. The use of machine learning and deep learning in the diagnosis and treatment of ophthalmic diseases is becoming more widespread. As one of the main causes of visual impairment, myopia has a high global prevalence. Early screening or diagnosis of myopia, combined with other effective therapeutic interventions, is very important for maintaining a patient's visual function and quality of life. Through training on fundus photographs, optical coherence tomography images, and slit-lamp images, and through platforms provided by telemedicine, AI shows great application potential in the detection, diagnosis, progression prediction, and treatment of myopia. In addition, AI models and wearable devices based on other forms of data also perform well in the behavioral intervention of myopia patients. Admittedly, there are still challenges in the practical application of AI in myopia, such as the standardization of datasets, users' acceptance attitudes, and ethical, legal, and regulatory issues. This paper reviews the clinical application status, potential challenges, and future directions of AI in myopia and proposes that the establishment of an AI-integrated telemedicine platform will be a new direction for myopia management in the post-COVID-19 period.
18. Advancements in Oncology with Artificial Intelligence—A Review Article. Cancers (Basel) 2022;14:1349. doi: 10.3390/cancers14051349; PMID: 35267657; PMCID: PMC8909088.
Abstract
Simple Summary: With the advancement of artificial intelligence, including machine learning, the field of oncology has seen promising results in cancer detection and classification, epigenetics, drug discovery, and prognostication. In this review, we describe what artificial intelligence is and how it functions, and comprehensively summarize its evolution and role in breast, colorectal, and central nervous system cancers. Understanding its origin and current accomplishments is essential to improving the quality, accuracy, generalizability, cost-effectiveness, and reliability of artificial intelligence models for worldwide clinical practice. Students and researchers in the medical field will benefit from a deeper understanding of how to use integrative AI in oncology for innovation and research.

Abstract: Well-trained machine learning (ML) and artificial intelligence (AI) systems can provide clinicians with therapeutic assistance, potentially increasing efficiency and improving efficacy. ML has demonstrated high accuracy in oncology-related diagnostic imaging, including screening mammography interpretation, colon polyp detection, and glioma classification and grading. By utilizing ML techniques, the manual steps of detecting and segmenting lesions are greatly reduced. ML-based tumor imaging analysis is independent of the experience level of the evaluating physicians, and the results are expected to be more standardized and accurate. One of the biggest challenges is generalizability worldwide. The current detection and screening methods for colon polyps and breast cancer have a vast amount of data, so they are ideal areas for studying the global standardization of artificial intelligence. Central nervous system (CNS) cancers are rare and have poor prognoses under current management standards. ML offers the prospect of unraveling undiscovered features from routinely acquired neuroimaging to improve treatment planning, prognostication, monitoring, and response assessment of CNS tumors such as gliomas. By studying AI in such rare cancer types, standard management methods may be improved by augmenting personalized/precision medicine. This review aims to provide clinicians and medical researchers with a basic understanding of how ML works and its role in oncology, especially in breast cancer, colorectal cancer, and primary and metastatic brain cancer. Understanding AI basics, current achievements, and future challenges is crucial to advancing the use of AI in oncology.
19. Wang Y, Wang Z, Feng Y, Zhang L. WDCCNet: Weighted Double-Classifier Constraint Neural Network for Mammographic Image Classification. IEEE Trans Med Imaging 2022;41:559-570. doi: 10.1109/TMI.2021.3117272; PMID: 34606448.
Abstract
The early detection and timely treatment of breast cancer can save lives. Mammography is one of the most efficient approaches to screening for early breast cancer, and an automatic mammographic image classification method could improve the work efficiency of radiologists. Current deep learning-based methods typically use the traditional softmax loss to optimize the feature extraction part, which aims to learn the features of mammographic images. However, previous studies have shown that the feature extraction part cannot learn discriminative features from complex data using the standard softmax loss. In this paper, we design a new architecture and propose corresponding loss functions. Specifically, we develop a double-classifier network architecture that constrains the distribution of the extracted features by changing the classifiers' decision boundaries. We then propose a double-classifier constraint loss function to constrain the decision boundaries so that the feature extraction part can learn discriminative features. Furthermore, by taking advantage of the two-classifier architecture, the network can detect difficult-to-classify samples, and we propose a weighted double-classifier constraint method that makes the feature extraction part pay more attention to learning the features of such samples. Our proposed method can easily be applied to an existing convolutional neural network to improve mammographic image classification performance. We conducted extensive experiments evaluating our methods on three public benchmark mammographic image datasets. The results showed that our methods outperformed many similar and state-of-the-art methods on these benchmarks. Our code and weights can be found on GitHub.
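The double-classifier constraint idea lends itself to a compact illustration. Below is a minimal PyTorch sketch of the general pattern only: a shared extractor feeds two heads, and their disagreement up-weights hard samples. The names (DoubleClassifierNet, weighted_double_classifier_loss), the toy backbone, and the exact constraint term are illustrative assumptions, not the authors' published architecture or loss.

```python
# Minimal sketch of a double-classifier constraint: a shared feature extractor
# feeds two classifier heads whose disagreement flags difficult samples.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoubleClassifierNet(nn.Module):
    def __init__(self, feat_dim=128, num_classes=2):
        super().__init__()
        # Stand-in feature extractor; a real system would use a CNN backbone.
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, feat_dim), nn.ReLU())
        self.head_a = nn.Linear(feat_dim, num_classes)
        self.head_b = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        feats = self.backbone(x)
        return self.head_a(feats), self.head_b(feats)

def weighted_double_classifier_loss(logits_a, logits_b, targets, lam=0.1):
    # Per-sample disagreement between the two heads marks hard samples;
    # up-weighting their cross-entropy makes the extractor focus on them.
    disagreement = F.kl_div(F.log_softmax(logits_a, dim=1),
                            F.softmax(logits_b, dim=1), reduction="none").sum(dim=1)
    weights = 1.0 + disagreement.detach()  # heavier weight for hard samples
    ce_a = F.cross_entropy(logits_a, targets, reduction="none")
    ce_b = F.cross_entropy(logits_b, targets, reduction="none")
    return (weights * (ce_a + ce_b)).mean() + lam * disagreement.mean()

model = DoubleClassifierNet()
x = torch.randn(4, 1, 64, 64)          # toy batch of 64x64 single-channel images
y = torch.randint(0, 2, (4,))
la, lb = model(x)
weighted_double_classifier_loss(la, lb, y).backward()
```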
20
Arora G, Dubey AK, Jaffery ZA, Rocha A. A comparative study of fourteen deep learning networks for multi skin lesion classification (MSLC) on unbalanced data. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-06922-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
21
Li H, Chen D, Nailon WH, Davies ME, Laurenson DI. Dual Convolutional Neural Networks for Breast Mass Segmentation and Diagnosis in Mammography. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:3-13. [PMID: 34351855 DOI: 10.1109/tmi.2021.3102622] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Deep convolutional neural networks (CNNs) have emerged as a new paradigm for mammogram diagnosis. Contemporary CNN-based computer-aided diagnosis systems (CADs) for breast cancer directly extract latent features from the input mammogram image and ignore the importance of morphological features. In this paper, we introduce DualCoreNet, a novel end-to-end deep learning framework for mammogram image processing that computes mass segmentation and simultaneously predicts diagnosis results. Specifically, our method is constructed in a dual-path architecture that solves the mapping in a dual-problem manner, with additional consideration of important shape and boundary knowledge. One path, called the Locality Preserving Learner (LPL), is devoted to hierarchically extracting and exploiting intrinsic features of the input, whereas the other path, called the Conditional Graph Learner (CGL), focuses on generating geometrical features by modeling pixel-wise image-to-mask correlations. By integrating the two learners, both cancer semantics and cancer representations are well learned, and the component learning paths in turn complement each other, improving mass segmentation and cancer classification at the same time. In addition, by integrating an automatic detection set-up, DualCoreNet achieves fully automatic breast cancer diagnosis in practice. Experimental results show that on the benchmark DDSM dataset, DualCoreNet outperformed other related works in both segmentation and classification tasks, achieving a 92.27% DI (Dice) coefficient and a 0.85 AUC score. On the benchmark INbreast dataset, DualCoreNet achieved the best mammography segmentation (93.69% DI coefficient) and competitive classification performance (0.93 AUC score).
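The shared-encoder, two-head pattern behind such dual-path designs can be sketched briefly. The following PyTorch toy model shows how one backbone can drive a segmentation decoder and a classification head trained jointly; DualTaskNet is a hypothetical name and the layer sizes are illustrative assumptions, not the paper's LPL/CGL modules.

```python
# Compact sketch of joint mass segmentation and classification from one shared
# encoder, in the spirit of a dual-path, dual-task design.
import torch
import torch.nn as nn

class DualTaskNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # Segmentation path: upsample back to input resolution, 1-channel mask logits.
        self.seg_head = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, 1),
        )
        # Classification path: pooled features -> benign/malignant logits.
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(32, num_classes))

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.cls_head(feats)

model = DualTaskNet()
x = torch.randn(2, 1, 128, 128)
mask_logits, class_logits = model(x)
seg_loss = nn.functional.binary_cross_entropy_with_logits(
    mask_logits, torch.randint(0, 2, (2, 1, 128, 128)).float())
cls_loss = nn.functional.cross_entropy(class_logits, torch.randint(0, 2, (2,)))
(seg_loss + cls_loss).backward()   # the two paths regularize each other
```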
22
Shen D, Pathrose A, Sarnari R, Blake A, Berhane H, Baraboo JJ, Carr JC, Markl M, Kim D. Automated segmentation of biventricular contours in tissue phase mapping using deep learning. NMR IN BIOMEDICINE 2021; 34:e4606. [PMID: 34476863 PMCID: PMC8795858 DOI: 10.1002/nbm.4606] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2021] [Revised: 07/27/2021] [Accepted: 08/02/2021] [Indexed: 06/13/2023]
Abstract
Tissue phase mapping (TPM) is an MRI technique for quantification of regional biventricular myocardial velocities. Despite its potential, clinical use is limited by the requisite labor-intensive manual segmentation of cardiac contours for all time frames. The purpose of this study was to develop a deep learning (DL) network for automated segmentation of TPM images without significant loss in segmentation and myocardial velocity quantification accuracy compared with manual segmentation. We implemented a multi-channel 3D (2D + time) dense U-Net that was trained on magnitude and phase images and combined cross-entropy, Dice, and Hausdorff distance loss terms to improve segmentation accuracy and suppress unnatural boundaries. The dense U-Net was trained and tested with 150 multi-slice, multi-phase TPM scans (114 scans for training, 36 for testing) from 99 heart transplant patients (44 females, 1-4 scans/patient), where the magnitude and velocity-encoded (Vx, Vy, Vz) images were used as input and the corresponding manual segmentation masks were used as reference. The accuracy of DL segmentation was evaluated using quantitative metrics (Dice scores, Hausdorff distance) and linear regression and Bland-Altman analyses of the resulting peak radial and longitudinal velocities (Vr and Vz). The mean segmentation time was about 2 h per patient for manual analysis and 1.9 ± 0.3 s for DL. Our network produced good accuracy (median Dice = 0.85 for the left ventricle (LV), 0.64 for the right ventricle (RV); Hausdorff distance = 3.17 pixels) compared with manual segmentation. Peak Vr and Vz measured from manual and DL segmentations were strongly correlated (R ≥ 0.88) and in good agreement with manual analysis (mean difference and limits of agreement for Vz and Vr were -0.05 ± 0.98 cm/s and -0.06 ± 1.18 cm/s for the LV, and -0.21 ± 2.33 cm/s and 0.46 ± 4.00 cm/s for the RV, respectively). The proposed multi-channel 3D dense U-Net reduced the segmentation time roughly 3,600-fold without significant loss in accuracy of tissue velocity measurements.
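A combined segmentation objective of this kind is easy to sketch. The snippet below shows a minimal cross-entropy + soft-Dice loss for binary masks; the Hausdorff distance term the authors add is omitted for brevity, and the weighting factors are illustrative assumptions rather than the paper's values.

```python
# Sketch of a combined segmentation loss: cross-entropy + soft Dice.
import torch
import torch.nn.functional as F

def soft_dice_loss(logits, targets, eps=1e-6):
    # Soft Dice over a batch of binary masks; logits are unnormalized scores.
    probs = torch.sigmoid(logits)
    inter = (probs * targets).sum(dim=(1, 2, 3))
    denom = probs.sum(dim=(1, 2, 3)) + targets.sum(dim=(1, 2, 3))
    return (1.0 - (2.0 * inter + eps) / (denom + eps)).mean()

def combined_loss(logits, targets, w_ce=1.0, w_dice=1.0):
    ce = F.binary_cross_entropy_with_logits(logits, targets)
    return w_ce * ce + w_dice * soft_dice_loss(logits, targets)

logits = torch.randn(2, 1, 64, 64, requires_grad=True)
masks = torch.randint(0, 2, (2, 1, 64, 64)).float()
combined_loss(logits, masks).backward()
```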
Affiliation(s)
- Daming Shen: Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA; Biomedical Engineering, Northwestern University McCormick School of Engineering and Applied Science, Evanston, USA
- Ashitha Pathrose: Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA
- Roberto Sarnari: Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA
- Allison Blake: Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA
- Haben Berhane: Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA; Biomedical Engineering, Northwestern University McCormick School of Engineering and Applied Science, Evanston, USA
- Justin J Baraboo: Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA; Biomedical Engineering, Northwestern University McCormick School of Engineering and Applied Science, Evanston, USA
- James C Carr: Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA
- Michael Markl: Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA; Biomedical Engineering, Northwestern University McCormick School of Engineering and Applied Science, Evanston, USA
- Daniel Kim: Department of Radiology, Northwestern University Feinberg School of Medicine, Chicago, USA; Biomedical Engineering, Northwestern University McCormick School of Engineering and Applied Science, Evanston, USA
23
Singh A, Bhat V, Sudhakar S, Namachivayam A, Gangadharan C, Pulchan C, Sigamani A. Multicentric study to evaluate the effectiveness of Thermalytix as compared with standard screening modalities in subjects who show possible symptoms of suspected breast cancer. BMJ Open 2021; 11:e052098. [PMID: 34667011 PMCID: PMC8527152 DOI: 10.1136/bmjopen-2021-052098] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/14/2022] Open
Abstract
INTRODUCTION Machine learning in computer-assisted diagnostics improves the sensitivity of image analysis and reduces the time and effort needed for interpretation. Compared with standard mammograms, a thermal scan is easily scalable and is a safer screening tool. We evaluate the performance of Thermalytix (an automated thermographic screening algorithm) compared with other standard breast cancer screening modalities. METHODS A prospective multicentre study was conducted to assess the non-inferiority of the sensitivity of Thermalytix (test device) to that of standard modalities in detecting malignancy in subjects who show possible symptoms of suspected breast cancer. Standard screening modalities and Thermalytix were obtained and interpreted independently in a blinded fashion. A receiver operating characteristic (ROC) curve was constructed to identify the best cut-off point, with a non-inferiority margin of ≥10% used to demonstrate non-inferiority. RESULTS We recruited 258 symptomatic women who first underwent a thermal scan, followed by mammogram and/or ultrasound. At the Youden's index of the ROC curve, the test device had a sensitivity of 82.5% (95% CI 73.2 to 91.9) and specificity of 80.5% (95% CI 75.0 to 86.1), compared with diagnostic mammogram, which had a sensitivity of 92% (95% CI 80.7 to 97.8) and specificity of 45.9% (95% CI 34.3 to 57.9) when BI-RADS 3 (Breast Imaging-Reporting and Data System) was considered test-positive. The overall area under the curve (AUC) was 0.845. For women aged <45 years, the test device had a sensitivity and specificity of 87.0% (95% CI 66.4 to 97.2) and 80.6% (95% CI 72.9 to 86.9), respectively. For women aged ≥45 years, the sensitivity and specificity were 80.5% (95% CI 65.1 to 91.2) and 86.5% (95% CI 78.0 to 92.6), respectively. CONCLUSION We evaluated Thermalytix, a new AI-based modality for detecting breast cancer. The high AUC in women both under and over 45 years shows the potential of Thermalytix as a supplemental diagnostic modality for all ages. Further evaluation on a larger sample size is needed. TRIAL REGISTRATION NUMBER CTRI/2017/10/010115.
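Choosing the operating point by Youden's index (J = sensitivity + specificity - 1), as done with the ROC curve in this study, is a small self-contained computation. The sketch below uses synthetic scores (all values are made up) purely to show the selection procedure with scikit-learn's roc_curve:

```python
# Illustration of picking an ROC operating point by Youden's index.
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
y_true = np.concatenate([np.ones(50), np.zeros(150)])      # 50 malignant, 150 benign
y_score = np.concatenate([rng.normal(0.7, 0.15, 50),       # higher scores for positives
                          rng.normal(0.4, 0.15, 150)])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
j = tpr - fpr                                              # Youden's J at each threshold
best = np.argmax(j)
print(f"AUC = {auc(fpr, tpr):.3f}")
print(f"best threshold = {thresholds[best]:.3f}, "
      f"sensitivity = {tpr[best]:.3f}, specificity = {1 - fpr[best]:.3f}")
```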
Affiliation(s)
- Akshita Singh: Department of Surgical Breast Oncology, Mazumdar Shaw Medical Centre, Narayana Hrudayalaya Limited, Narayana Hrudayalaya Health City, Bangalore, Karnataka, India
- Venkatraman Bhat: Department of Radiology, Mazumdar Shaw Medical Centre, Narayana Hrudayalaya Limited, Narayana Hrudayalaya Health City, Bangalore, Karnataka, India
- S Sudhakar: Department of Radiology, HCG Cancer Hospital, HealthCare Global Enterprises Ltd, Bangalore, Karnataka, India
- Charitha Gangadharan: Department of Clinical Research, Narayana Hrudayalaya Limited, Narayana Hrudayalaya Health City, Bangalore, Karnataka, India
- Candice Pulchan: Department of Radiology (Ultrasonographer III (Ag)), South-West Regional Health Authority, San Fernando, Trinidad and Tobago
- Alben Sigamani: Department of Clinical Research, Narayana Hrudayalaya Health City, Bangalore, Karnataka, India
24
Mahmood T, Li J, Pei Y, Akhtar F. An Automated In-Depth Feature Learning Algorithm for Breast Abnormality Prognosis and Robust Characterization from Mammography Images Using Deep Transfer Learning. BIOLOGY 2021; 10:859. [PMID: 34571736 PMCID: PMC8468800 DOI: 10.3390/biology10090859] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/28/2021] [Revised: 08/25/2021] [Accepted: 08/27/2021] [Indexed: 01/17/2023]
Abstract
BACKGROUND Diagnosing breast cancer masses and calcification clusters has paramount significance in mammography, aiding in mitigating the disease's complexities and curing it at early stages. However, a wrong mammogram interpretation may lead to unnecessary biopsy of false-positive findings, which reduces the patient's survival chances. Consequently, approaches that learn to discern breast masses can reduce the number of misinterpretations and incorrect diagnoses. Conventionally used classification models focus on feature extraction techniques specific to a particular problem based on domain information. Deep learning strategies are becoming promising alternatives that overcome many challenges of feature-based approaches. METHODS This study introduces a convolutional neural network (ConvNet)-based deep learning method to extract features at varying densities and discern normal and suspected regions in mammography. Two experiments were carried out for accurate diagnosis and classification. The first experiment consisted of five end-to-end pre-trained and fine-tuned deep convolutional neural networks (DCNNs), drawn from the most frequently used image interpretation and classification families: VGGNet, GoogLeNet, MobileNet, ResNet, and DenseNet. In the second experiment, the in-depth features extracted from the ConvNet were used to train a support vector machine (SVM), achieving excellent performance. The study also covers data cleaning, preprocessing, and data augmentation to improve mass recognition accuracy. The efficacy of all models was evaluated by training and testing on three mammography datasets and exhibited remarkable results. RESULTS Our deep learning ConvNet+SVM model obtained a discriminative training accuracy of 97.7% and a validation accuracy of 97.8%; by contrast, VGGNet16 yielded 90.2%, VGGNet19 93.5%, GoogLeNet 63.4%, MobileNetV2 82.9%, ResNet50 75.1%, and DenseNet121 72.9%. CONCLUSIONS The proposed model's improvement and validation are appropriate for conventional pathological practice and could conceivably reduce the pathologist's strain in predicting clinical outcomes from patients' mammography images.
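The "deep features + SVM" pattern from the second experiment can be sketched in a few lines. Below, a tiny random CNN stands in for a pre-trained backbone (an assumption made for self-containedness; the study used networks such as VGGNet and ResNet), and scikit-learn's SVC is fit on the extracted features:

```python
# Sketch of using a CNN as a fixed feature extractor and training an SVM on top.
import torch
import torch.nn as nn
from sklearn.svm import SVC

extractor = nn.Sequential(                       # stand-in for a pre-trained backbone
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4), nn.Flatten()
)
extractor.eval()

images = torch.randn(40, 1, 64, 64)              # toy mammogram patches
labels = torch.randint(0, 2, (40,)).numpy()      # 0 = normal, 1 = suspicious

with torch.no_grad():                            # features only; no fine-tuning
    feats = extractor(images).numpy()            # shape (40, 8*4*4) = (40, 128)

svm = SVC(kernel="rbf").fit(feats, labels)
print("training accuracy:", svm.score(feats, labels))
```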
Affiliation(s)
- Tariq Mahmood: School of Software Engineering, Beijing University of Technology, Beijing 100024, China; Division of Science and Technology, University of Education, Lahore 54000, Pakistan
- Jianqiang Li: School of Software Engineering, Beijing University of Technology, Beijing 100024, China; Beijing Engineering Research Center for IoT Software and Systems, Beijing 100124, China
- Yan Pei: Computer Science Division, University of Aizu, Aizuwakamatsu 965-8580, Japan
- Faheem Akhtar: Department of Computer Science, Sukkur IBA University, Sukkur 65200, Pakistan
25
Kazemimoghadam M, Chi W, Rahimi A, Kim N, Alluri P, Nwachukwu C, Lu W, Gu X. Saliency-guided deep learning network for automatic tumor bed volume delineation in post-operative breast irradiation. Phys Med Biol 2021; 66:10.1088/1361-6560/ac176d. [PMID: 34298539 PMCID: PMC8639319 DOI: 10.1088/1361-6560/ac176d] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2021] [Accepted: 07/23/2021] [Indexed: 11/12/2022]
Abstract
Efficient, reliable and reproducible target volume delineation is a key step in the effective planning of breast radiotherapy. However, post-operative breast target delineation is challenging, as the contrast between the tumor bed volume (TBV) and normal breast tissue is relatively low in CT images. In this study, we propose to mimic the marker-guidance procedure used in manual target delineation. We developed a saliency-based deep learning segmentation (SDL-Seg) algorithm for accurate TBV segmentation in post-operative breast irradiation. The SDL-Seg algorithm incorporates saliency information, in the form of markers' location cues, into a U-Net model. This design forces the model to encode location-related features, underscoring regions with high saliency levels and suppressing low-saliency regions. The saliency maps were generated by identifying markers on CT images. Marker locations were then converted to probability maps using a distance transformation coupled with a Gaussian filter. Subsequently, the CT images and the corresponding saliency maps formed a multi-channel input to the SDL-Seg network. Our in-house dataset comprised 145 prone CT images from 29 post-operative breast cancer patients who received a 5-fraction partial breast irradiation (PBI) regimen on GammaPod. The 29 patients were randomly split into training (19), validation (5) and test (5) sets. The performance of the proposed method was compared against a basic U-Net. Our model achieved mean (standard deviation) values of 76.4 (±2.7)%, 6.76 (±1.83) mm, and 1.9 (±0.66) mm for the Dice similarity coefficient, 95th-percentile Hausdorff distance, and average symmetric surface distance, respectively, on the test set, with a computation time below 11 seconds per CT volume. SDL-Seg showed superior performance relative to the basic U-Net on all evaluation metrics while preserving low computation cost. The findings demonstrate that SDL-Seg is a promising approach for improving the efficiency and accuracy of the online treatment planning procedure of PBI, such as GammaPod-based PBI.
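The marker-to-saliency conversion is a standard image-processing step and can be sketched directly. The snippet below builds a smoothed, probability-like saliency map with SciPy's Euclidean distance transform and Gaussian filter; the marker coordinates and the exponential decay constant are illustrative assumptions, not the paper's parameters.

```python
# Sketch: turn marker locations into a saliency map via distance transform + smoothing.
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

shape = (256, 256)
markers = [(80, 100), (150, 170)]            # hypothetical marker pixel locations

marker_mask = np.zeros(shape, dtype=bool)
for r, c in markers:
    marker_mask[r, c] = True

# Distance from every pixel to the nearest marker (EDT of the background).
dist = distance_transform_edt(~marker_mask)

# Convert distance to a probability-like saliency (1 at markers, decaying outward),
# then smooth; the result becomes an extra input channel alongside the CT image.
saliency = np.exp(-dist / 20.0)
saliency = gaussian_filter(saliency, sigma=3.0)
saliency /= saliency.max()
print(saliency.shape, round(float(saliency[80, 100]), 3))
```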
Affiliation(s)
- Mahdieh Kazemimoghadam: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Weicheng Chi: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America; School of Software Engineering, South China University of Technology, Guangzhou, Guangdong 510006, People's Republic of China
- Asal Rahimi: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Nathan Kim: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Prasanna Alluri: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Chika Nwachukwu: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Xuejun Gu: Stanford University, Palo Alto, CA, United States of America
26
Breast Cancer Segmentation Methods: Current Status and Future Potentials. BIOMED RESEARCH INTERNATIONAL 2021; 2021:9962109. [PMID: 34337066 PMCID: PMC8321730 DOI: 10.1155/2021/9962109] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Revised: 05/14/2021] [Accepted: 06/11/2021] [Indexed: 12/24/2022]
Abstract
Early breast cancer detection is one of the most important issues that need to be addressed worldwide, as it can help increase the survival rate of patients. Mammograms have been used to detect breast cancer in its early stages; detection at an early stage can drastically reduce treatment costs. The detection of tumours in the breast depends on segmentation techniques. Segmentation plays a significant role in image analysis and includes detection, feature extraction, classification, and treatment; it helps physicians quantify the volume of tissue in the breast for treatment planning. In this work, we group segmentation methods into three categories: classical segmentation, which includes region-, threshold-, and edge-based segmentation; machine learning segmentation, both supervised and unsupervised; and deep learning segmentation. The findings of our study revealed that region-based segmentation, and region growing in particular, is the most frequently used classical technique, and that the median filter is a robust tool for removing noise. Moreover, the MIAS database is frequently used with classical segmentation methods. In machine learning segmentation, unsupervised methods are used more often. Among deep learning models, U-Net is the most frequently used for mammogram image segmentation, both because it does not require many annotated images compared with other deep learning models and because high-performance GPU computing makes it easy to train networks with more layers; the reviewed papers also showed that a deep learning model can be trained without any preprocessing or postprocessing. Additionally, we identified widely used mammogram databases, of which 3 are public and 28 are private.
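Region growing, the classical technique this review finds most common, is simple enough to show in full. Below is a minimal NumPy sketch under stated assumptions (a toy image and a fixed tolerance): pixels join the region while their intensity stays within the tolerance of the running region mean.

```python
# Minimal region-growing sketch: grow a region from a seed by intensity similarity.
import numpy as np
from collections import deque

def region_grow(image, seed, tol=0.1):
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    region_sum, region_n = float(image[seed]), 1
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-connected neighbors
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(image[nr, nc] - region_sum / region_n) <= tol:
                    mask[nr, nc] = True
                    region_sum += float(image[nr, nc])
                    region_n += 1
                    queue.append((nr, nc))
    return mask

img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0       # bright "mass" on dark field
print(region_grow(img, seed=(30, 30), tol=0.2).sum())   # grows the 400-pixel block
```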
27
Ou WC, Polat D, Dogan BE. Deep learning in breast radiology: current progress and future directions. Eur Radiol 2021; 31:4872-4885. [PMID: 33449174 DOI: 10.1007/s00330-020-07640-9] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2020] [Revised: 10/30/2020] [Accepted: 12/17/2020] [Indexed: 12/13/2022]
Abstract
This review provides an overview of current applications of deep learning methods within breast radiology. The diagnostic capabilities of deep learning in breast radiology continue to improve, giving rise to the prospect that these methods may be integrated not only into detection and classification of breast lesions, but also into areas such as risk estimation and prediction of tumor responses to therapy. Remaining challenges include limited availability of high-quality data with expert annotations and ground truth determinations, the need for further validation of initial results, and unresolved medicolegal considerations. KEY POINTS:
• Deep learning (DL) continues to push the boundaries of what can be accomplished by artificial intelligence (AI) in breast imaging with distinct advantages over conventional computer-aided detection.
• DL-based AI has the potential to augment the capabilities of breast radiologists by improving diagnostic accuracy, increasing efficiency, and supporting clinical decision-making through prediction of prognosis and therapeutic response.
• Remaining challenges to DL implementation include a paucity of prospective data on DL utilization and yet unresolved medicolegal questions regarding increasing AI utilization.
Affiliation(s)
- William C Ou: Department of Radiology, Seay Biomedical Building, University of Texas Southwestern Medical Center, 2201 Inwood Road, Dallas, TX 75390, USA
- Dogan Polat: Department of Radiology, Seay Biomedical Building, University of Texas Southwestern Medical Center, 2201 Inwood Road, Dallas, TX 75390, USA
- Basak E Dogan: Department of Radiology, Seay Biomedical Building, University of Texas Southwestern Medical Center, 2201 Inwood Road, Dallas, TX 75390, USA
28
Al-Antari MA, Hua CH, Bang J, Lee S. "Fast deep learning computer-aided diagnosis of COVID-19 based on digital chest x-ray images". APPL INTELL 2020; 51:2890-2907. [PMID: 34764573 PMCID: PMC7695589 DOI: 10.1007/s10489-020-02076-6] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/09/2020] [Indexed: 11/28/2022]
Abstract
Coronavirus disease 2019 (COVID-19) is a novel, harmful respiratory disease that has rapidly spread worldwide, having emerged at the end of 2019 as a previously unknown respiratory disease in Wuhan, Hubei Province, China. The World Health Organization (WHO) declared the coronavirus outbreak a pandemic in the second week of March 2020. Simultaneous deep learning detection and classification of COVID-19 from full-resolution digital X-ray images is key to efficiently assisting patients by enabling physicians to reach fast and accurate diagnostic decisions. In this paper, a simultaneous deep learning computer-aided diagnosis (CAD) system based on the YOLO predictor is proposed that can detect and diagnose COVID-19, differentiating it from eight other respiratory diseases: atelectasis, infiltration, pneumothorax, masses, effusion, pneumonia, cardiomegaly, and nodules. The proposed CAD system was assessed via five-fold tests for the multi-class prediction problem using two different databases of chest X-ray images: COVID-19 and ChestX-ray8. The system was trained with an annotated training set of 50,490 chest X-ray images. Regions of the X-ray images with lesions suspected of being due to COVID-19 were simultaneously detected and classified end-to-end by the proposed CAD predictor, achieving overall detection and classification accuracies of 96.31% and 97.40%, respectively. Most test images from patients with confirmed COVID-19 and other respiratory diseases were correctly predicted, achieving an average intersection over union (IoU) greater than 90%. Applying the deep learning regularizers of data balancing and augmentation improved COVID-19 diagnostic performance by 6.64% and 12.17% in terms of overall accuracy and F1-score, respectively. A diagnosis from an individual chest X-ray image is feasible with the proposed CAD system within 0.0093 s; the system can thus make predictions at a rate of 108 frames/s (FPS), which is close to real time. The proposed deep learning CAD system can reliably differentiate COVID-19 from other respiratory diseases and appears to be a tool that can practically assist health care systems, patients, and physicians.
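The IoU criterion used to judge the predicted lesion regions against annotations is a one-function computation. A quick sketch for axis-aligned boxes given as (x1, y1, x2, y2) corners follows; the coordinate convention is an assumption made for illustration.

```python
# Intersection over union (IoU) for two axis-aligned boxes (x1, y1, x2, y2).
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)   # top-left of the overlap
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)   # bottom-right of the overlap
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((10, 10, 60, 60), (30, 30, 80, 80)))  # ~0.22 overlap
```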
Affiliation(s)
- Mugahed A Al-Antari: Department of Computer Science and Engineering, College of Software, Kyung Hee University, 1732 Deogyeong-daero, Giheung-gu, Yongin-si, Gyeonggi-do 17104, Republic of Korea; Department of Biomedical Engineering, Sana'a Community College, Sana'a, Republic of Yemen
- Cam-Hao Hua: Department of Computer Science and Engineering, College of Software, Kyung Hee University, 1732 Deogyeong-daero, Giheung-gu, Yongin-si, Gyeonggi-do 17104, Republic of Korea
- Jaehun Bang: Department of Computer Science and Engineering, College of Software, Kyung Hee University, 1732 Deogyeong-daero, Giheung-gu, Yongin-si, Gyeonggi-do 17104, Republic of Korea
- Sungyoung Lee: Department of Computer Science and Engineering, College of Software, Kyung Hee University, 1732 Deogyeong-daero, Giheung-gu, Yongin-si, Gyeonggi-do 17104, Republic of Korea
29
Liu C, Hu SC, Wang C, Lafata K, Yin FF. Automatic detection of pulmonary nodules on CT images with YOLOv3: development and evaluation using simulated and patient data. Quant Imaging Med Surg 2020; 10:1917-1929. [PMID: 33014725 PMCID: PMC7495314 DOI: 10.21037/qims-19-883] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2019] [Accepted: 06/29/2020] [Indexed: 12/15/2022]
Abstract
BACKGROUND To develop a high-efficiency pulmonary nodule computer-aided detection (CAD) method for localization and diameter estimation. METHODS The developed CAD method centers on a novel convolutional neural network (CNN) algorithm, You Only Look Once (YOLO) v3, as its deep learning approach. The method is characterized by two distinct properties: (I) an automatic multi-scale feature extractor for nodule feature screening, and (II) a feature-based bounding box generator for nodule localization and diameter estimation. Two independent studies were performed to train and evaluate this CAD method. The first was a computer simulation study that utilized computer-based ground truth: 300 CT scans were simulated with the extended cardiac-torso (XCAT) digital phantom, and spherical nodules of various sizes (3-10 mm in diameter) were randomly implanted within the lung region of the simulated images. The second study utilized human-based ground truth in patients; here the CAD method was developed using CT scans sourced from the LIDC-IDRI database, with scans of slice thickness above 2.5 mm excluded, leaving 888 CT images for analysis. A 10-fold cross-validation procedure was implemented in both studies to evaluate network hyper-parameterization and generalization. The overall accuracy of the CAD method was evaluated by the detection sensitivities in response to average false positives (FPs) per image. In the patient study, the detection accuracy was further compared against 9 recently published CAD studies using free-response receiver operating characteristic (FROC) curve analysis. Localization and diameter estimation accuracies were quantified by the mean and standard error between the predicted value and the ground truth. RESULTS The average results among the 10 cross-validation folds in both studies demonstrated that the CAD method achieved high detection accuracy. The sensitivity was 99.3% (at 1 FP per image), improving to 100% (at 4 FPs per image) in the simulation study. The corresponding sensitivities were 90.0% and 95.4% in the patient study, displaying superiority over several conventional and CNN-based lung nodule CAD methods in the FROC curve analysis. Nodule localization and diameter estimation errors were less than 1 mm in both studies. The developed CAD method achieved high computational efficiency, yielding nodule-specific quantitative values (number, existence confidence, central coordinates, and diameter) within 0.1 s for 2D CT slice inputs. CONCLUSIONS The reported results suggest that the developed lung nodule CAD method offers accurate nodule localization and diameter estimation, and its high computational efficiency enables potential clinical application in the future.
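The diameter-estimation step follows naturally from the predicted bounding box once pixel spacing is known. A one-function sketch of that idea (the 0.7 mm spacing is an illustrative assumption; real scans carry it in the DICOM header, and the paper's exact conversion may differ):

```python
# Sketch: estimate a nodule's diameter from a detector's bounding box.
def nodule_diameter_mm(box, pixel_spacing_mm=0.7):
    """box = (x1, y1, x2, y2) in pixels; returns estimated diameter in mm."""
    x1, y1, x2, y2 = box
    # Average the box's width and height, then scale by the in-plane pixel spacing.
    return 0.5 * ((x2 - x1) + (y2 - y1)) * pixel_spacing_mm

print(nodule_diameter_mm((100, 120, 109, 131)))  # ~7 mm nodule
```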
Affiliation(s)
- Chenyang Liu: Medical Physics Graduate Program, Duke Kunshan University, Kunshan, China
- Shen-Chiang Hu: Medical Physics Graduate Program, Duke Kunshan University, Kunshan, China
- Chunhao Wang: Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Kyle Lafata: Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Fang-Fang Yin: Medical Physics Graduate Program, Duke Kunshan University, Kunshan, China; Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA