1
Mathur A, Arya N, Pasupa K, Saha S, Roy Dey S, Saha S. Breast cancer prognosis through the use of multi-modal classifiers: current state of the art and the way forward. Brief Funct Genomics 2024; 23:561-569. [PMID: 38688724] [DOI: 10.1093/bfgp/elae015]
Abstract
We present a survey of the current state of the art in breast cancer detection and prognosis. We analyze the evolution of Artificial Intelligence-based approaches from uni-modal to multi-modal information for detection, and how this paradigm shift improves detection efficacy, consistent with clinical observations. We conclude that interpretable AI-based predictions and the ability to handle class imbalance should be considered priorities.
Affiliation(s)
- Archana Mathur
- Department of Information Science and Engineering, Nitte Meenakshi Institute of Technology, Yelahanka, 560064, Karnataka, India
- Nikhilanand Arya
- School of Computer Engineering, Kalinga Institute of Industrial Technology, Deemed to be University, Bhubaneshwar, 751024, Odisha, India
- Kitsuchart Pasupa
- School of Information Technology, King Mongkut's Institute of Technology Ladkrabang, 1 Soi Chalongkrung 1, 10520, Bangkok, Thailand
- Sriparna Saha
- Computer Science and Engineering, Indian Institute of Technology Patna, Bihta, 801106, Bihar, India
- Sudeepa Roy Dey
- Department of Computer Science and Engineering, PES University, Hosur Road, 560100, Karnataka, India
- Snehanshu Saha
- CSIS and APPCAIR, BITS Pilani K.K. Birla Goa Campus, 403726, Goa, India
- Div of AI Research, HappyMonk AI, Bangalore, 560078, Karnataka, India
2
Cai L, Hou K, Zhou S. Intelligent skin lesion segmentation using deformable attention Transformer U-Net with bidirectional attention mechanism in skin cancer images. Skin Res Technol 2024; 30:e13783. [PMID: 39113617] [PMCID: PMC11306920] [DOI: 10.1111/srt.13783]
Abstract
BACKGROUND In recent years, the increasing prevalence of skin cancers, particularly malignant melanoma, has become a major public health concern. Accurate automated segmentation of skin lesions holds immense potential for alleviating the burden on medical professionals and is of substantial clinical importance for the early identification and treatment of skin cancer. Nevertheless, the irregular shape, uneven color, and noise interference of skin lesions present significant challenges to precise segmentation. It is therefore crucial to develop a high-precision, intelligent skin lesion segmentation framework for clinical use. METHODS A precision-driven segmentation model for skin cancer images, called BiADATU-Net, is proposed based on the Transformer U-Net; it integrates a deformable attention Transformer and bidirectional attention blocks into the U-Net. The encoder utilizes a deformable attention Transformer with a dual attention block, allowing adaptive learning of global and local features. The decoder incorporates tailored scSE attention modules within the skip-connection layers to capture image-specific context information for strong feature fusion. Additionally, deformable convolution is integrated into the two attention blocks to learn irregular lesion features for high-precision prediction. RESULTS Experiments are conducted on four skin cancer image datasets (ISIC2016, ISIC2017, ISIC2018, and PH2). The proposed model exhibits satisfactory segmentation performance, achieving an accuracy of over 96% on all four datasets. CONCLUSION The experimental results show that the proposed BiADATU-Net achieves performance competitive with, or superior to, state-of-the-art methods, and is a promising approach for skin lesion segmentation.
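The scSE ("concurrent spatial and channel squeeze-and-excitation") gating mentioned in the decoder description can be illustrated with a generic NumPy sketch. This is not the paper's implementation: the `scse_block` function, its weight shapes, and the random inputs below are illustrative assumptions showing only the mechanism of combining a per-channel gate with a per-pixel gate.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def scse_block(x, w1, w2, w_sp):
    """Generic scSE-style gating on a (C, H, W) feature map.

    w1, w2 : weights of the two channel-excitation layers
    w_sp   : 1x1-conv weights producing the spatial gate
    """
    # channel squeeze: global average pool -> small MLP -> per-channel gate
    z = x.mean(axis=(1, 2))                                 # (C,)
    g_ch = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))            # (C,)
    # spatial squeeze: 1x1 conv across channels -> per-pixel gate
    g_sp = sigmoid(np.tensordot(w_sp, x, axes=([0], [0])))  # (H, W)
    # combine the two gates elementwise and rescale the features
    gate = np.maximum(g_ch[:, None, None], g_sp[None, :, :])
    return x * gate

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8, 8))   # toy feature map: 4 channels, 8x8
w1 = rng.normal(size=(2, 4))     # channel reduction (ratio 2)
w2 = rng.normal(size=(4, 2))
w_sp = rng.normal(size=(4,))
y = scse_block(x, w1, w2, w_sp)
```

Because both gates lie in (0, 1), the block can only attenuate features, never amplify them, which is how such modules suppress redundant information in skip connections.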
Affiliation(s)
- Lili Cai
- School of Biomedical Engineering, Guangzhou Xinhua University, Guangzhou, China
- Keke Hou
- School of Health Sciences, Guangzhou Xinhua University, Guangzhou, China
- Su Zhou
- School of Biomedical Engineering, Guangzhou Xinhua University, Guangzhou, China
3
Bai J, Jin A, Adams M, Yang C, Nabavi S. Unsupervised feature correlation model to predict breast abnormal variation maps in longitudinal mammograms. Comput Med Imaging Graph 2024; 113:102341. [PMID: 38277769] [DOI: 10.1016/j.compmedimag.2024.102341]
Abstract
Breast cancer continues to be a significant cause of mortality among women globally, and timely identification and precise diagnosis of breast abnormalities are critical for improving patient outcomes and reducing mortality. To address the limitations of traditional screening methods, a novel unsupervised feature correlation network was developed to predict maps of abnormal breast variations from longitudinal 2D mammograms. The proposed model utilizes the reconstruction of current-year and prior-year mammograms to extract tissue from different areas and analyze the differences between them, identifying abnormal variations that may indicate the presence of cancer. The model incorporates a feature correlation module, an attention suppression gate, and a breast abnormality detection module, which work together to improve prediction accuracy. It not only produces breast abnormal variation maps but also distinguishes normal from cancerous mammograms, going beyond the state-of-the-art baseline models. The results show that the proposed model outperforms the baselines in terms of Accuracy, Sensitivity, Specificity, Dice score, and cancer detection rate.
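The evaluation metrics named above (Accuracy, Sensitivity, Specificity, Dice score) are standard and follow directly from the confusion matrix of a binary prediction mask against a ground-truth mask. The `seg_metrics` helper below is an illustrative sketch of those definitions, not code from the paper.

```python
import numpy as np

def seg_metrics(pred, gt):
    """Accuracy, sensitivity, specificity, and Dice for boolean masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    tp = np.sum(pred & gt)    # abnormal pixels correctly flagged
    tn = np.sum(~pred & ~gt)  # normal pixels correctly left out
    fp = np.sum(pred & ~gt)   # false alarms
    fn = np.sum(~pred & gt)   # missed abnormal pixels
    accuracy = (tp + tn) / pred.size
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    dice = 2 * tp / (2 * tp + fp + fn)
    return accuracy, sensitivity, specificity, dice

# toy 2x2 example: one true positive, one false positive, two true negatives
acc, sens, spec, dice = seg_metrics([[1, 1], [0, 0]], [[1, 0], [0, 0]])
# acc = 0.75, sens = 1.0, spec = 2/3, dice = 2/3
```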
Affiliation(s)
- Jun Bai
- Department of Computer Science and Engineering, University of Connecticut, 371 Fairfield Way, Storrs, CT 06269, USA
- Annie Jin
- University of Connecticut School of Medicine, 263 Farmington Ave., Farmington, CT 06030, USA
- Madison Adams
- University of Connecticut School of Medicine, 263 Farmington Ave., Farmington, CT 06030, USA
- Clifford Yang
- University of Connecticut School of Medicine, 263 Farmington Ave., Farmington, CT 06030, USA; Department of Radiology, UConn Health, 263 Farmington Ave., Farmington, CT 06030, USA
- Sheida Nabavi
- Department of Computer Science and Engineering, University of Connecticut, 371 Fairfield Way, Storrs, CT 06269, USA
4
Rokhshad R, Salehi SN, Yavari A, Shobeiri P, Esmaeili M, Manila N, Motamedian SR, Mohammad-Rahimi H. Deep learning for diagnosis of head and neck cancers through radiographic data: a systematic review and meta-analysis. Oral Radiol 2024; 40:1-20. [PMID: 37855976] [DOI: 10.1007/s11282-023-00715-5]
Abstract
PURPOSE This study reviews deep learning applications for detecting head and neck cancer (HNC) using magnetic resonance imaging (MRI) and other radiographic data. METHODS Searches of PubMed, Scopus, Embase, Google Scholar, IEEE, and arXiv were carried out through January 2023. The inclusion criteria were studies applying segmentation, object detection, or classification deep learning models to head and neck medical images (computed tomography (CT), positron emission tomography (PET), MRI, planar scans, and panoramic X-ray) of human subjects. The risk of bias was rated with the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. For the meta-analysis, the diagnostic odds ratio (DOR) was calculated. Deeks' funnel plot was used to assess publication bias. The MIDAS and Metandi packages were used to analyze diagnostic test accuracy in STATA. RESULTS Of 1967 studies, 32 were found eligible after the search and screening procedures. According to the QUADAS-2 tool, 7 included studies had a low risk of bias in all domains. Across the included studies, accuracy varied from 82.6 to 100%, specificity ranged from 66.6 to 90.1%, and sensitivity from 74 to 99.68%. Fourteen studies that provided sufficient data were included in the meta-analysis. The pooled sensitivity was 90% (95% CI 0.82-0.94), and the pooled specificity was 92% (95% CI 0.87-0.96). The DOR was 103 (27-251). Publication bias was not detected (p = 0.75). CONCLUSION Deep learning models can enhance head and neck cancer screening with high sensitivity and specificity.
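As a quick consistency check (my own arithmetic, not part of the review), the pooled diagnostic odds ratio can be recomputed from the pooled sensitivity and specificity; with sensitivity 0.90 and specificity 0.92 it comes out near the reported DOR of 103.

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR: odds of a positive test in the diseased group divided by
    the odds of a positive test in the non-diseased group."""
    positive_odds = sensitivity / (1.0 - sensitivity)
    negative_odds = (1.0 - specificity) / specificity
    return positive_odds / negative_odds

# pooled values reported in the meta-analysis above
dor = diagnostic_odds_ratio(0.90, 0.92)
# dor is about 103.5, consistent with the reported DOR of 103
```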
Affiliation(s)
- Rata Rokhshad
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group, AI On Health, Berlin, Germany
- Seyyede Niloufar Salehi
- Executive Secretary of Research Committee, Board Director of Scientific Society, Dental Faculty, Azad University, Tehran, Iran
- Amirmohammad Yavari
- Student Research Committee, School of Dentistry, Isfahan University of Medical Sciences, Isfahan, Iran
- Parnian Shobeiri
- School of Medicine, Tehran University of Medical Science, Tehran, Iran
- Mahdieh Esmaeili
- Faculty of Dentistry, Tehran Medical Sciences, Islamic Azad University, Tehran, Iran
- Nisha Manila
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group, AI On Health, Berlin, Germany
- Department of Diagnostic Sciences, Louisiana State University Health Science Center School of Dentistry, Louisiana, USA
- Saeed Reza Motamedian
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group, AI On Health, Berlin, Germany
- Dentofacial Deformities Research Center, Research Institute of Dental Sciences & Department of Orthodontics, School of Dentistry, Shahid Beheshti University of Medical Sciences, Daneshjou Blvd, Tehran, Iran
- Hossein Mohammad-Rahimi
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group, AI On Health, Berlin, Germany
5
Wu D, Ni J, Fan W, Jiang Q, Wang L, Sun L, Cai Z. Opportunities and challenges of computer aided diagnosis in new millennium: A bibliometric analysis from 2000 to 2023. Medicine (Baltimore) 2023; 102:e36703. [PMID: 38134105] [PMCID: PMC10735127] [DOI: 10.1097/md.0000000000036703]
Abstract
BACKGROUND Since the start of the new millennium, computer-aided diagnosis (CAD) has been developing rapidly as an emerging technology worldwide. Nevertheless, bibliometric studies in this area have not yet been reported. This study aimed to explore the hotspots and frontiers of CAD research from 2000 to 2023, which may provide a reference for researchers in this field. METHODS We used bibliometrics to analyze CAD-related literature in the Web of Science database between 2000 and 2023. The scientometric software packages VOSviewer and CiteSpace were used to visually analyze the countries, institutions, authors, journals, references, and keywords involved in the literature. Keyword burst analysis was utilized to further explore the current state and development trends of CAD research. RESULTS A total of 13,970 publications were included in this study, with a noticeably rising annual publication trend. China and the United States are the major contributors, with the United States holding the dominant position in CAD research. American research institutions, led by the University of Chicago, are pioneers of CAD. Acharya UR, Zheng B, and Chan HP are the most prolific authors. IEEE Transactions on Medical Imaging focuses on CAD and publishes the most articles. New computer technologies related to CAD are at the forefront of attention. Currently, CAD is used extensively for breast, pulmonary, and brain diseases. CONCLUSION Expanding the spectrum of CAD-related diseases is a possible future research trend. Overcoming the lack of large sample datasets and establishing a universally accepted standard for evaluating CAD system performance are urgent issues for CAD development and validation. In conclusion, this paper provides valuable information on the current state of CAD research and future developments.
Affiliation(s)
- Di Wu
- Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
- Department of Proctology, Bishan Hospital of Traditional Chinese Medicine, Chongqing, China
- Chongqing College of Traditional Chinese Medicine, Chongqing, China
- Jiachun Ni
- Department of Coloproctology, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Wenbin Fan
- Department of Proctology, Bishan Hospital of Traditional Chinese Medicine, Chongqing, China
- Chongqing College of Traditional Chinese Medicine, Chongqing, China
- Qiong Jiang
- Chongqing College of Traditional Chinese Medicine, Chongqing, China
- Ling Wang
- Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
- Li Sun
- Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
- Zengjin Cai
- Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
6
Saha PK, Nadeem SA, Comellas AP. A survey on artificial intelligence in pulmonary imaging. Wiley Interdiscip Rev Data Min Knowl Discov 2023; 13:e1510. [PMID: 38249785] [PMCID: PMC10796150] [DOI: 10.1002/widm.1510]
Abstract
Over the last decade, deep learning (DL) has driven a paradigm shift in computer vision and image recognition, creating widespread opportunities for artificial intelligence in research as well as industrial applications. DL has been extensively studied in medical imaging, including applications related to pulmonary diseases. Chronic obstructive pulmonary disease, asthma, lung cancer, pneumonia, and, more recently, COVID-19 are common lung diseases affecting nearly 7.4% of the world population. Pulmonary imaging has been widely investigated to improve our understanding of disease etiologies, support early diagnosis, and assess disease progression and clinical outcomes. DL has been broadly applied to pulmonary image processing challenges including classification, recognition, registration, and segmentation. This paper presents a survey of pulmonary diseases, the roles of imaging in translational and clinical pulmonary research, and applications of different DL architectures and methods in pulmonary imaging, with emphasis on DL-based segmentation of major pulmonary anatomies such as lung volumes, lung lobes, pulmonary vessels, and airways, as well as thoracic musculoskeletal anatomies related to pulmonary diseases.
Affiliation(s)
- Punam K Saha
- Departments of Radiology and Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242, USA
7
Garg P, Mohanty A, Ramisetty S, Kulkarni P, Horne D, Pisick E, Salgia R, Singhal SS. Artificial intelligence and allied subsets in early detection and preclusion of gynecological cancers. Biochim Biophys Acta Rev Cancer 2023; 1878:189026. [PMID: 37980945] [DOI: 10.1016/j.bbcan.2023.189026]
Abstract
Gynecological cancers, including breast, cervical, ovarian, uterine, and vaginal cancers, pose a major threat to global health, and early identification is crucial to patient outcomes and survival rates. The application of machine learning (ML) and artificial intelligence (AI) approaches to gynecological cancer has shown potential to revolutionize cancer detection and diagnosis. The current review outlines the significant advancements, obstacles, and prospects brought about by AI and ML technologies in the timely identification and accurate diagnosis of different types of gynecological cancers. AI-powered technologies can use genomic data to discover genetic alterations and biomarkers linked to a particular form of gynecologic cancer, assisting in the creation of targeted treatments. Furthermore, AI and ML technologies have been shown to greatly increase the accuracy and efficacy of cancer diagnosis, reduce diagnostic delays, and possibly eliminate the need for needless invasive procedures. In conclusion, the review focuses on the integration of AI- and ML-based tools and techniques in the early detection and preclusion of various cancer types, together with the collaborative coordination between research clinicians, data scientists, and regulatory authorities that is needed to realize the full potential of AI and ML in gynecologic cancer care.
Affiliation(s)
- Pankaj Garg
- Department of Chemistry, GLA University, Mathura, Uttar Pradesh 281406, India
- Atish Mohanty
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- Sravani Ramisetty
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- Prakash Kulkarni
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- David Horne
- Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- Evan Pisick
- Department of Medical Oncology, City of Hope, Chicago, IL 60099, USA
- Ravi Salgia
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
- Sharad S Singhal
- Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA
8
He Z, Li X, Chen Y, Lv N, Cai Y. Attention-based dual-path feature fusion network for automatic skin lesion segmentation. BioData Min 2023; 16:28. [PMID: 37807076] [PMCID: PMC10561442] [DOI: 10.1186/s13040-023-00345-x]
Abstract
Automatic segmentation of skin lesions is a critical step in computer-aided diagnosis (CAD) of melanoma. However, blurred lesion boundaries, uneven color distribution, and low image contrast often lead to poor segmentation results. To address this difficulty, this paper proposes an Attention-based Dual-path Feature Fusion Network (ADFFNet) for automatic skin lesion segmentation. First, in the spatial path, a Boundary Refinement (BR) module is designed for the low-level feature output to filter out irrelevant background information and retain more boundary details of the lesion area. Second, in the context path, a Multi-scale Feature Selection (MFS) module is constructed for the high-level feature output to capture multi-scale context information, using an attention mechanism to filter out redundant semantic information. Finally, we design a Dual-path Feature Fusion (DFF) module, which uses high-level global attention information to guide the step-by-step fusion of high-level semantic features and low-level detail features; this helps restore image detail and further improves the pixel-level segmentation accuracy of skin lesions. In experiments on the ISIC 2018 and PH2 datasets, the method achieves 0.890/0.925 and 0.933/0.954 on the F1-score and SE metrics, respectively. Comparative analysis with state-of-the-art segmentation methods shows that the ADFFNet algorithm exhibits superior segmentation performance.
Affiliation(s)
- Zhenxiang He
- School of Information Engineering, Southwest University of Science and Technology, Mianyang, China
- Tianfu College of Southwest University of Finance and Economics, Mianyang, China
- Xiaoxia Li
- School of Information Engineering, Southwest University of Science and Technology, Mianyang, China
- Robot Technology Used for Special Environment Key Laboratory of Sichuan Province, Mianyang, China
- Yuling Chen
- School of Information Engineering, Southwest University of Science and Technology, Mianyang, China
- Robot Technology Used for Special Environment Key Laboratory of Sichuan Province, Mianyang, China
- Nianzu Lv
- School of Information Engineering, Southwest University of Science and Technology, Mianyang, China
- Robot Technology Used for Special Environment Key Laboratory of Sichuan Province, Mianyang, China
- Yong Cai
- School of Manufacturing Science and Engineering, Southwest University of Science and Technology, Mianyang, China
9
Doppala BP, Al Bataineh A, Vamsi B. An efficient, lightweight, tiny 2D-CNN ensemble model to detect cardiomegaly in heart CT images. J Pers Med 2023; 13:1338. [PMID: 37763106] [PMCID: PMC10532522] [DOI: 10.3390/jpm13091338]
Abstract
Cardiomegaly is a significant global health concern, especially in developing nations. Although advanced clinical care is available for newly diagnosed patients, many patients in resource-limited regions face late diagnoses and consequently increased mortality, a challenge accentuated by a scarcity of radiography equipment and radiologists. Hence, we propose a computer-aided diagnostic (CAD) system, specifically a lightweight, tiny 2D-CNN ensemble model, to facilitate early detection and, potentially, reduce mortality rates. Convolutional neural networks (CNNs), a subset of deep learning, have shown potential in visual applications, especially medical image diagnosis. However, traditional deep CNNs often face compatibility issues with object-oriented human factor technology; our proposed model aims to bridge this gap. Using CT scan images sourced from the Mendeley data center, our tiny 2D-CNN ensemble learning model achieved an accuracy of 96.32%, offering a promising tool for efficient and accurate cardiomegaly detection.
Affiliation(s)
- Ali Al Bataineh
- Artificial Intelligence Center, Norwich University, Northfield, VT 05663, USA
- Bandi Vamsi
- Department of Computer Science—Artificial Intelligence & Data Science, Madanapalle Institute of Technology & Science, Madanapalle 517325, India
10
Iqbal S, Qureshi AN, Li J, Mahmood T. On the analyses of medical images using traditional machine learning techniques and convolutional neural networks. Arch Computat Methods Eng 2023; 30:3173-3233. [PMID: 37260910] [PMCID: PMC10071480] [DOI: 10.1007/s11831-023-09899-9]
Abstract
Convolutional neural networks (CNNs) have shown impressive performance in many areas, including object detection, segmentation, 2D and 3D reconstruction, information retrieval, medical image registration, multi-lingual translation, natural language processing, anomaly detection in video, and speech recognition. A CNN is a special type of neural network with a compelling and effective ability to learn features at several stages of data processing. Recently, ideas from deep learning (DL) such as new activation functions, hyperparameter optimization, regularization, momentum, and loss functions have improved the performance and execution of CNNs, and innovations in internal architecture and representational style have further improved performance significantly. This survey focuses on the internal taxonomy of deep learning and different convolutional neural network models, especially the depth and width of models, along with CNN components, applications, and the current challenges of deep learning.
Affiliation(s)
- Saeed Iqbal
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000, Pakistan
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Adnan N. Qureshi
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Punjab 54000, Pakistan
- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Engineering Research Center for IoT Software and Systems, Beijing University of Technology, Beijing 100124, China
- Tariq Mahmood
- Artificial Intelligence and Data Analytics (AIDA) Lab, College of Computer & Information Sciences (CCIS), Prince Sultan University, Riyadh 11586, Kingdom of Saudi Arabia
11
Li J, Chen J, Tang Y, Wang C, Landman BA, Zhou SK. Transforming medical imaging with Transformers? A comparative review of key properties, current progresses, and future perspectives. Med Image Anal 2023; 85:102762. [PMID: 36738650] [PMCID: PMC10010286] [DOI: 10.1016/j.media.2023.102762]
Abstract
The Transformer, one of the latest technological advances in deep learning, has gained prevalence in natural language processing and computer vision. Since medical imaging bears some resemblance to computer vision, it is natural to inquire about the status quo of Transformers in medical imaging and ask: can Transformer models transform medical imaging? In this paper, we attempt to answer this question. After a brief introduction to the fundamentals of Transformers, especially in comparison with convolutional neural networks (CNNs), and a highlight of the key defining properties that characterize Transformers, we offer a comprehensive review of state-of-the-art Transformer-based approaches for medical imaging and the current research progress in medical image segmentation, recognition, detection, registration, reconstruction, enhancement, and related areas. In particular, our review is distinguished by its organization based on the Transformer's key defining properties, mostly derived from comparing the Transformer and the CNN, and on the type of architecture, which specifies the manner in which the Transformer and CNN are combined, helping readers understand the rationale behind the reviewed approaches. We conclude with discussions of future perspectives.
Affiliation(s)
- Jun Li
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
- Junyu Chen
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutes, Baltimore, MD, USA
- Yucheng Tang
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- Ce Wang
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China
- Bennett A Landman
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- S Kevin Zhou
- Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China; School of Biomedical Engineering & Suzhou Institute for Advanced Research, Center for Medical Imaging, Robotics, and Analytic Computing & Learning (MIRACLE), University of Science and Technology of China, Suzhou 215123, China
12
Karagoz MA, Akay B, Basturk A, Karaboga D, Nalbantoglu OU. An unsupervised transfer learning model based on convolutional auto encoder for non-alcoholic steatohepatitis activity scoring and fibrosis staging of liver histopathological images. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08252-2]
13
Jonsson T. Micro-CT and deep learning: Modern techniques and applications in insect morphology and neuroscience. Front Insect Sci 2023; 3:1016277. [PMID: 38469492] [PMCID: PMC10926430] [DOI: 10.3389/finsc.2023.1016277]
Abstract
Advances in modern imaging and computer technologies have led to a steady rise in the use of micro-computed tomography (µCT) in many biological areas. In zoological research, this fast and non-destructive method for producing high-resolution two- and three-dimensional images is increasingly used for functional analysis of the external and internal anatomy of animals. µCT is no longer limited to the analysis of specific biological tissues in a medical or preclinical context; it can be combined with a variety of contrast agents to study form and function in all kinds of tissues and species, from mammals and reptiles to fish and microscopic invertebrates. Concurrently, advances in artificial intelligence, especially deep learning, have revolutionised computer vision and facilitated the automatic, fast, and ever more accurate analysis of two- and three-dimensional image datasets. Here, I give a brief overview of both micro-computed tomography and deep learning and present their recent applications, especially within insect science. Furthermore, I discuss the combination of both approaches to investigate neural tissues and the resulting potential for the analysis of insect sensory systems, from receptor structures via neuronal pathways to the brain.
Affiliation(s)
- Thorin Jonsson, Institute of Biology, Karl-Franzens-University Graz, Graz, Austria
14. Zheng D, He X, Jing J. Overview of Artificial Intelligence in Breast Cancer Medical Imaging. J Clin Med 2023; 12:419. PMID: 36675348; PMCID: PMC9864608; DOI: 10.3390/jcm12020419.
Abstract
The heavy global burden and mortality of breast cancer emphasize the importance of early diagnosis and treatment. Imaging detection is one of the main tools used in clinical practice for screening, diagnosis, and treatment efficacy evaluation, and can visualize changes in tumor size and texture before and after treatment. The overwhelming number of images, which leads to a heavy workload for radiologists and sluggish reporting periods, underscores the need for computer-aided detection techniques and platforms. In addition, complex and changeable image features, heterogeneous image quality, and inconsistent interpretation by different radiologists and medical institutions constitute the primary difficulties in breast cancer screening and imaging diagnosis. The advancement of imaging-based artificial intelligence (AI)-assisted tumor diagnosis is an ideal strategy for improving the efficiency and accuracy of imaging diagnosis. By learning from image data and constructing algorithmic models, AI is able to recognize, segment, and diagnose tumor lesions automatically, showing promising application prospects. Furthermore, the rapid advancement of "omics" promotes a deeper and more comprehensive understanding of the nature of cancer. The fascinating relationship between tumor images and molecular characteristics has drawn attention to radiomics and radiogenomics, which allow analysis and detection at the molecular level without invasive procedures. In this review, we integrate the current developments in AI-assisted imaging diagnosis and discuss the advances of AI-based precise breast cancer diagnosis from a clinical point of view. Although AI-assisted breast cancer screening and detection is an emerging field that draws much attention, the clinical application of AI in tumor lesion recognition, segmentation, and diagnosis is still limited to research settings or small patient cohorts. Randomized clinical trials based on large, high-quality cohorts are lacking. This review aims to describe the progress of imaging-based AI applications in breast cancer screening and diagnosis for clinicians.
Affiliation(s)
- Jing Jing, Laboratory of Integrative Medicine, Clinical Research Center for Breast, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University and Collaborative Innovation Center, Chengdu 610041, China
15. Mohamed SEL, Mohamed WA, Abdelhalim MB, Ahmed KEL. Advanced Enhancement Techniques for Breast Cancer Classification in Mammographic Images. Open Biomed Eng J 2022. DOI: 10.2174/18741207-v16-e2209200.
Abstract
Background:
Breast cancer is one of the most significant health problems in the world. Early diagnosis of breast cancer is very important for treatment. Image enhancement techniques have been used to improve captured images for quick and accurate diagnosis. These techniques include median filtering, edge enhancement, dilation, erosion, and contrast-limited adaptive histogram equalization. Although these techniques have been used in many studies, their results have not reached optimal values, as they depend on image properties and on the methods used for feature extraction and classification.
Methods:
In this study, enhancement techniques were implemented to guarantee the best image enhancement. They were applied to 319 images collected from the Mammographic Image Analysis Society (MIAS) database. The Gabor filter and local binary pattern were used as feature extraction methods, together with support vector machine (SVM), linear discriminant analysis (LDA), and k-nearest neighbor (KNN) classifiers.
Results:
The experimental work indicates that by merging the features of the Gabor filter and local binary pattern, the results were 97.8%, 100%, and 94.6% for normal/abnormal and 85.1%, 88.7%, and 81.9% for benign/malignant using the SVM, LDA, and KNN classifiers, respectively.
Conclusion:
The best results were obtained by combining the features of the two tested strategies and using LDA as a classifier.
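Entry 15's feature pipeline pairs Gabor responses with local binary patterns (LBP). As a hedged illustration of the LBP half only, here is a minimal sketch of the 8-neighbour LBP code for one 3×3 patch; the patch values, neighbour ordering, and the `>=` comparison rule below are illustrative choices, not details taken from the paper:

```python
def lbp_code(patch):
    """8-neighbour local binary pattern code for the centre of a 3x3 patch."""
    centre = patch[1][1]
    # clockwise neighbour order starting at the top-left corner
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(coords):
        if patch[r][c] >= centre:  # neighbour at least as bright as the centre
            code |= 1 << bit
    return code

# neighbours above/below are brighter (bits set), left/right darker (bits clear)
print(lbp_code([[9, 9, 9], [1, 5, 1], [9, 9, 9]]))  # → 119
```

In practice the per-pixel codes are histogrammed over the image (or over sub-regions) to form the texture feature vector fed to the classifier.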
16. Hossain MB, Nishikawa RM, Lee J. Developing breast lesion detection algorithms for digital breast tomosynthesis: Leveraging false positive findings. Med Phys 2022; 49:7596-7608. PMID: 35916103; PMCID: PMC10156088; DOI: 10.1002/mp.15883.
Abstract
BACKGROUND: Due to the complex nature of digital breast tomosynthesis (DBT) imaging, reading times are longer than for 2D mammograms. A robust computer-aided diagnosis system for DBT could help radiologists reduce their workload and reading times.
PURPOSE: The purpose of this study was to develop algorithms for detecting biopsy-proven breast lesions on DBT using multi-depth level convolutional models and leveraging non-biopsied samples. As biopsied positive samples in a lesion dataset are limited, we hypothesized that false positive (FP) findings by detection algorithms on non-biopsied benign lesions could improve detection algorithms when used as data augmentation.
APPROACH: We first extracted 2D slices from DBT volumes with biopsy-proven breast lesions (cancer and benign), with non-biopsied benign lesions (actionable), and for controls. Then, to provide lesion continuity along the z-direction, we combined a lesion slice with its immediate adjacent slices to synthesize 2.5-dimensional (2.5D) images of the lesion by assigning them to R, G, and B color channels. We used 224 biopsy-proven lesions from 39 cancer and 62 benign patients from a DBTex challenge dataset of 1000 scans. We included the 2.5D images of immediate neighboring slices from the lesion's center to increase the number of training samples. For lesion detection, we used the YOLOv5 algorithm as our base network. We trained a baseline algorithm (medium depth) using biopsied samples to detect actionable FPs in non-biopsied images. Afterward, we fine-tuned the baseline model on the augmented image set (actionable FPs added). For inference, we processed the DBT volume slice by slice to estimate bounding boxes in each slice, and then combined them by connecting bounding boxes along the depth via volumetric morphological closing. We trained an additional model (large) with deeper depth levels by repeating the above process. Finally, we developed an ensemble algorithm by combining the medium and large detection models. We used the free-response operating characteristic curve to evaluate our algorithms. We report mean sensitivity over FPs per DBT volume for biopsied views only, and sensitivity at two false positives per image (2FPI) for all views. Because access to the ground truth of the challenge validation and test datasets was limited, we used sensitivity at 2FPI for statistical evaluation.
RESULTS: On the DBTex independent validation set, the medium baseline model achieved a mean sensitivity of 0.627 over FPs per DBT volume and a sensitivity of 0.640 at 2FPI. After adding actionable FP lesions, the model improved to a 2FPI sensitivity of 0.769 over the baseline (p-value = 0.013). Our multi-depth ensemble (medium + large) achieved a mean sensitivity of 0.815 over FPs per DBT volume and an improved 2FPI sensitivity of 0.80 over the baseline (p-value < 0.001) on the validation set. Finally, our ensemble model achieved a mean sensitivity of 0.786 over FPs per DBT volume and a sensitivity of 0.743 at 2FPI on the DBTex independent test set.
CONCLUSIONS: Our results show that actionable FP findings hold useful information for lesion detection algorithms, and our multi-depth ensemble detection model improves lesion detection performance.
Affiliation(s)
- Juhun Lee, Department of Radiology, University of Pittsburgh, Pittsburgh, PA, USA
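The 2.5D construction described in entry 16 (a lesion slice and its immediate neighbours mapped to the R, G, and B channels) can be sketched as follows. This is a minimal illustration only: the random array stands in for a reconstructed DBT volume, and the border-clamping rule for the first and last slices is an assumption rather than a detail from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a reconstructed DBT volume: (depth, height, width).
volume = rng.random((40, 64, 64), dtype=np.float32)

def slices_to_25d(volume, center):
    """Stack a slice with its two immediate neighbours into R, G, B channels."""
    lo = max(center - 1, 0)                   # clamp at the volume borders so the
    hi = min(center + 1, volume.shape[0] - 1) # first/last slices reuse themselves
    return np.stack([volume[lo], volume[center], volume[hi]], axis=-1)

img = slices_to_25d(volume, 20)
print(img.shape)  # → (64, 64, 3)
```

The resulting three-channel images can then be fed to an off-the-shelf 2D detector (YOLOv5 in the paper) without modifying its input layer.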
17. Wei Y, Su Y. Using machine learning and RNA to enhance the efficacy of anti-tumor immunotherapy. Evol Intell 2022. DOI: 10.1007/s12065-022-00781-4.
18. Baughan N, Douglas L, Giger ML. Past, Present, and Future of Machine Learning and Artificial Intelligence for Breast Cancer Screening. J Breast Imaging 2022; 4:451-459. PMID: 38416954; DOI: 10.1093/jbi/wbac052.
Abstract
Breast cancer screening has evolved substantially over the past few decades because of advancements in new image acquisition systems and novel artificial intelligence (AI) algorithms. This review provides a brief overview of the history, current state, and future of AI in breast cancer screening and diagnosis, along with challenges involved in the development of AI systems. Although AI has been developed for interpretation tasks associated with breast cancer screening for decades, its potential to combat the subjective nature of human image interpretation and to improve its efficiency continues to expand. Rapid advances in computational power and deep learning have greatly accelerated AI research, with promising performance in detection and classification tasks across imaging modalities. Most AI systems, based on human-engineered or deep learning methods, serve as concurrent or secondary readers, that is, as aids to radiologists for a specific, well-defined task. In the future, AI may be able to perform multiple integrated tasks, making decisions at or beyond the level of human ability. Artificial intelligence may also serve as a partial primary reader to streamline ancillary tasks, triaging cases or ruling out obviously normal cases. However, before AI is used as an independent, autonomous reader, various challenges need to be addressed, including explainability and interpretability, in addition to repeatability and generalizability, to ensure that AI will provide a significant clinical benefit to breast cancer screening across all populations.
Affiliation(s)
- Natalie Baughan, University of Chicago, Department of Radiology Committee on Medical Physics, Chicago, IL, USA
- Lindsay Douglas, University of Chicago, Department of Radiology Committee on Medical Physics, Chicago, IL, USA
- Maryellen L Giger, University of Chicago, Department of Radiology Committee on Medical Physics, Chicago, IL, USA
19. Malliori A, Pallikarakis N. Breast cancer detection using machine learning in digital mammography and breast tomosynthesis: A systematic review. Health Technol 2022. DOI: 10.1007/s12553-022-00693-4.
20. Bai J, Jin A, Wang T, Yang C, Nabavi S. Feature fusion siamese network for breast cancer detection comparing current and prior mammograms. Med Phys 2022; 49:3654-3669. PMID: 35271746; DOI: 10.1002/mp.15598.
Abstract
PURPOSE: Automatic detection of very small and non-mass abnormalities from mammogram images has remained challenging. In clinical practice, radiologists commonly not only screen the mammogram images obtained during the examination but also compare them with previous mammograms to make a clinical decision. To design an AI system that mimics radiologists for better cancer detection, we propose an end-to-end enhanced Siamese convolutional neural network that detects breast cancer using previous-year and current-year mammogram images.
METHODS: The proposed Siamese-based network uses high-resolution mammogram images and fuses features of pairs of previous-year and current-year mammogram images to predict cancer probabilities. The approach builds on the concept of one-shot learning, which learns the abnormal differences between current and prior images instead of abnormal objects, and as a result can perform better on small data sets. We developed two variants of the proposed network. In the first model, to fuse the features of current and previous images, we designed an enhanced distance learning network that considers not only the overall distance but also the pixel-wise distances between the features. In the other model, we concatenated the features of current and previous images to fuse them.
RESULTS: We compared the performance of the proposed models with baseline models that use current images only (ResNet and VGG) and that use both current and prior images (LSTM and vanilla Siamese) in terms of accuracy, sensitivity, precision, F1 score, and AUC. Results show that the proposed models outperform the baselines and that the model with the distance learning network performs best (accuracy: 0.92, sensitivity: 0.93, precision: 0.91, specificity: 0.91, F1: 0.92, AUC: 0.95).
CONCLUSIONS: Integrating prior mammogram images improves automatic cancer classification, especially for very small and non-mass abnormalities. For classification models that integrate current and prior mammogram images, an enhanced and effective distance learning network can advance model performance.
Affiliation(s)
- Jun Bai, Annie Jin, Tianyu Wang, Clifford Yang, Sheida Nabavi: Department of Computer Science and Engineering, University of Connecticut, 371 Fairfield Way, Storrs, CT 06269, USA; University of Connecticut School of Medicine, 263 Farmington Ave, Farmington, CT 06030, USA; Department of Radiology, UConn Health, 263 Farmington Ave, Farmington, CT 06030, USA
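A minimal sketch of the distance-based fusion idea in entry 20: pixel-wise distances between current and prior feature maps, summarised by an overall distance. The toy feature maps and the simple L1/L2 distances below are illustrative assumptions; in the paper these comparisons are learned end to end inside the network:

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(feat_current, feat_prior):
    """Fuse current/prior feature maps via pixel-wise and overall distances."""
    pixel_dist = np.abs(feat_current - feat_prior)  # pixel-wise (element-wise) distances
    overall = float(np.linalg.norm(pixel_dist))     # single overall (Frobenius) distance
    return pixel_dist, overall

# Stand-ins for backbone feature maps of a prior and a current mammogram.
prior = rng.random((8, 8))
current_normal = prior + rng.normal(0.0, 0.01, prior.shape)   # little interval change
current_abnormal = prior + rng.normal(0.0, 0.5, prior.shape)  # large interval change

_, d_normal = fuse(current_normal, prior)
_, d_abnormal = fuse(current_abnormal, prior)
assert d_normal < d_abnormal  # a developing abnormality yields a larger distance
```

The point of keeping the pixel-wise map (not just the scalar) is that a small, localized change can be large in a few pixels yet nearly invisible in the overall distance.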
21. Koyuncu B, Melek A, Yilmaz D, Tuzer M, Unlu MB. Chemotherapy response prediction with diffuser elapser network. Sci Rep 2022; 12:1628. PMID: 35102179; PMCID: PMC8803972; DOI: 10.1038/s41598-022-05460-z.
Abstract
In solid tumors, elevated fluid pressure and inadequate blood perfusion resulting from unbalanced angiogenesis are the prominent reasons for ineffective drug delivery inside tumors. Antiangiogenic treatment is an effective approach to normalizing the heterogeneous and tortuous tumor vessel structure. Additionally, combined therapy with antiangiogenic agents and chemotherapy drugs has shown promising effects on enhanced drug delivery. However, finding appropriate schedules and dosages for the combination therapy remains one of the main problems in anticancer therapy. Our study aims to generate a realistic response to the treatment schedule, making it possible for future work to use these patient-specific responses to decide on the optimal starting time and dosages of cytotoxic drug treatment. Our dataset is based on our previous in-silico model with a framework for the tumor microenvironment, consisting of a tumor layer, vasculature network, interstitial fluid pressure, and drug diffusion maps. In this regard, the chemotherapy response prediction problem is discussed in the study, putting forth a proof of concept for deep learning models to capture tumor growth and drug response behaviors simultaneously. The proposed model utilizes multiple convolutional neural network submodels to predict future tumor microenvironment maps considering the effects of ongoing treatment. Since the model predicts future tumor microenvironment maps, we use two image quality evaluation metrics, structural similarity and peak signal-to-noise ratio, to evaluate model performance. We track tumor cell density values of ground-truth and predicted tumor microenvironments. The model predicts tumor microenvironment maps seven days ahead with an average structural similarity score of 0.973 and an average peak signal-to-noise ratio of 35.41 on the test set. It also predicts tumor cell density on day 7 with a mean absolute percentage error of [Formula: see text].
Affiliation(s)
- Batuhan Koyuncu, Department of Computer Engineering, Bogazici University, Istanbul, 34342, Turkey; Center for Life Sciences and Technologies, Bogazici University, Istanbul, 34342, Turkey
- Ahmet Melek, Department of Management, Bogazici University, Istanbul, 34342, Turkey; Center for Life Sciences and Technologies, Bogazici University, Istanbul, 34342, Turkey
- Defne Yilmaz, Department of Physics, Bogazici University, Istanbul, 34342, Turkey; Center for Life Sciences and Technologies, Bogazici University, Istanbul, 34342, Turkey
- Mert Tuzer, Department of Physics, Bogazici University, Istanbul, 34342, Turkey; Center for Life Sciences and Technologies, Bogazici University, Istanbul, 34342, Turkey
- Mehmet Burcin Unlu, Department of Physics, Bogazici University, Istanbul, 34342, Turkey; Center for Life Sciences and Technologies, Bogazici University, Istanbul, 34342, Turkey; Global Station for Quantum Medical Science and Engineering, Global Institution for Collaborative Research and Education (GI-CoRE), Hokkaido University, Sapporo, 060-8648, Japan
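Entry 21 scores predicted tumor microenvironment maps with structural similarity and peak signal-to-noise ratio (PSNR). A dependency-light sketch of the PSNR part is below; the random maps are stand-ins for predicted/ground-truth microenvironment maps, and for SSIM one would typically reach for `skimage.metrics.structural_similarity` rather than re-implement it:

```python
import numpy as np

def psnr(truth, pred, data_range=1.0):
    """Peak signal-to-noise ratio: 10*log10(data_range^2 / MSE)."""
    mse = np.mean((truth - pred) ** 2)
    if mse == 0:
        return float("inf")  # identical images: infinite PSNR by convention
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(1)
truth = rng.random((64, 64))                       # stand-in ground-truth map
pred = truth + rng.normal(0.0, 0.01, truth.shape)  # a close prediction
print(round(psnr(truth, pred), 1))
```

With noise of standard deviation 0.01 on a unit data range, the MSE is about 1e-4, so the PSNR lands near 40 dB, comfortably above the paper's reported test-set average of 35.41.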
22. Zhou K, Li W, Zhao D. Deep learning-based breast region extraction of mammographic images combining pre-processing methods and semantic segmentation supported by Deeplab v3. Technol Health Care 2022; 30:173-190. PMID: 35124595; PMCID: PMC9028646; DOI: 10.3233/thc-228017.
Abstract
BACKGROUND: Breast cancer has long been one of the major global life-threatening illnesses among women. Surgery and adjuvant therapy, coupled with early detection, could save many lives. This underscores the importance of mammography, a cost-effective and accurate method for early detection. Because poor contrast, noise, and artifacts make diagnosis difficult for radiologists, Computer-Aided Diagnosis (CAD) systems have been developed. The extraction of the breast region is a fundamental and crucial preparation step for further development of CAD systems.
OBJECTIVE: The proposed method aims to extract the breast region accurately from mammographic images, with noise suppressed, contrast enhanced, and the pectoral muscle region removed.
METHODS: This paper presents a new deep learning-based breast region extraction method that combines pre-processing (noise suppression using a median filter and contrast enhancement using CLAHE) with semantic segmentation using the Deeplab v3+ model.
RESULTS: The method is trained and evaluated on the mini-MIAS dataset and has also been evaluated on the INbreast dataset. The results outperform those of other recent studies and indicate the model's capacity to retain its accuracy and runtime advantage across databases with different image resolutions.
CONCLUSIONS: The proposed method shows state-of-the-art performance in extracting the breast region from mammographic images. Extensive evaluation on two commonly used mammography datasets demonstrates the ability and adaptability of the method.
Affiliation(s)
- Kuochen Zhou, Wei Li, Dazhe Zhao: School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
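The pre-processing chain in entry 22 (median filtering, then CLAHE, then Deeplab v3+ segmentation) begins with simple noise suppression. In OpenCV this would usually be `cv2.medianBlur` followed by `cv2.createCLAHE`; below is a dependency-free sketch of the median step alone, on a toy array with a single salt-noise pixel:

```python
import numpy as np

def median3x3(img):
    """3x3 median filter (the noise-suppression step), with edge padding."""
    padded = np.pad(img, 1, mode="edge")
    # gather the 9 shifted views covering each pixel's 3x3 neighbourhood
    stack = [padded[r:r + img.shape[0], c:c + img.shape[1]]
             for r in range(3) for c in range(3)]
    return np.median(np.stack(stack), axis=0)

noisy = np.zeros((8, 8))
noisy[4, 4] = 1.0                      # an isolated "salt" pixel
assert median3x3(noisy)[4, 4] == 0.0   # the outlier is removed
```

Median filtering is preferred over Gaussian smoothing here because it removes impulse noise without blurring the breast/background boundary that the segmentation network must learn.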
23. Suganyadevi S, Seethalakshmi V, Balasamy K. A review on deep learning in medical image analysis. Int J Multimed Inf Retr 2022; 11:19-38. PMID: 34513553; PMCID: PMC8417661; DOI: 10.1007/s13735-021-00218-1.
Abstract
Ongoing improvements in AI, particularly in deep learning techniques, are helping to identify, classify, and quantify patterns in clinical images. Deep learning is the fastest-developing field in artificial intelligence and has lately been applied effectively in numerous areas, including medicine. A brief outline is given of studies in each application area: neurological, brain, retinal, pulmonary, digital pathology, breast, cardiac, bone, abdominal, and musculoskeletal imaging. For information exploration, knowledge deployment, and knowledge-based prediction, deep learning networks can be successfully applied to big data. In the field of medical image processing methods and analysis, this paper presents fundamental information and state-of-the-art deep learning approaches. The primary goals of this paper are to present research on medical image processing and to define and implement the key guidelines that are identified and addressed.
Affiliation(s)
- S. Suganyadevi, Department of ECE, KPR Institute of Engineering and Technology, Coimbatore, India
- V. Seethalakshmi, Department of ECE, KPR Institute of Engineering and Technology, Coimbatore, India
- K. Balasamy, Department of IT, Dr. Mahalingam College of Engineering and Technology, Coimbatore, India
24. Morelli R, Clissa L, Amici R, Cerri M, Hitrec T, Luppi M, Rinaldi L, Squarcio F, Zoccoli A. Automating cell counting in fluorescent microscopy through deep learning with c-ResUnet. Sci Rep 2021; 11:22920. PMID: 34824294; PMCID: PMC8617067; DOI: 10.1038/s41598-021-01929-5.
Abstract
Counting cells in fluorescent microscopy is a tedious, time-consuming task that researchers must perform to assess the effects of different experimental conditions on biological structures of interest. Although such objects are generally easy to identify, the process of manually annotating cells is sometimes subject to fatigue errors and suffers from arbitrariness due to the operator's interpretation of borderline cases. We propose a Deep Learning approach that exploits a fully-convolutional network in a binary segmentation fashion to localize the objects of interest. Counts are then retrieved as the number of detected items. Specifically, we introduce a Unet-like architecture, cell ResUnet (c-ResUnet), and compare its performance against three similar architectures. In addition, we evaluate through ablation studies the impact of two design choices: (i) artifact oversampling and (ii) weight maps that penalize errors on cell boundaries increasingly with overcrowding. In summary, c-ResUnet outperforms the competitors with respect to both detection and counting metrics (F1 score = 0.81 and MAE = 3.09, respectively). The introduction of weight maps also enhances performance, especially in the presence of clumping cells, artifacts, and confounding biological structures. Posterior qualitative assessment by domain experts corroborates these results, suggesting human-level performance inasmuch as even erroneous predictions seem to fall within the limits of operator interpretation. Finally, we release the pre-trained model and the annotated dataset to foster research in this and related fields.
Affiliation(s)
- Roberto Morelli, Luca Clissa, Lorenzo Rinaldi, Antonio Zoccoli: National Institute for Nuclear Physics, Bologna, Italy; Department of Physics and Astronomy, University of Bologna, Bologna, Italy
- Roberto Amici, Matteo Cerri, Timna Hitrec, Marco Luppi, Fabio Squarcio: Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
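Entry 24 retrieves counts as the number of detected items in a binary segmentation mask. A minimal sketch of that final counting step follows, as 4-connected component labelling on a toy mask; the mask itself is illustrative (the paper produces it with the c-ResUnet):

```python
def count_components(mask):
    """Count 4-connected foreground blobs in a binary mask (blobs = detections)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                count += 1                     # found a new blob: flood-fill it
                stack = [(i, j)]
                seen[i][j] = True
                while stack:
                    r, c = stack.pop()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < h and 0 <= nc < w and mask[nr][nc] and not seen[nr][nc]:
                            seen[nr][nc] = True
                            stack.append((nr, nc))
    return count

mask = [
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 1, 0, 1],
]
print(count_components(mask))  # → 3
```

This is also where the paper's weight maps matter: clumped cells that touch in the mask merge into one component, so penalizing boundary errors during training keeps the blobs, and hence the counts, separable.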
25. Zhang YN, Xia KR, Li CY, Wei BL, Zhang B. Review of Breast Cancer Pathological Image Processing. Biomed Res Int 2021; 2021:1994764. PMID: 34595234; PMCID: PMC8478535; DOI: 10.1155/2021/1994764.
Abstract
Breast cancer is one of the most common malignancies. Pathological image processing of the breast has become an important means for the early diagnosis of breast cancer. Using medical image processing to help doctors detect potential breast cancer as early as possible has always been a hot topic in the field of medical image diagnosis. In this paper, a breast cancer recognition method based on image processing is systematically expounded from four aspects: breast cancer detection, image segmentation, image registration, and image fusion. The achievements and scope of application of supervised learning, unsupervised learning, deep learning, CNNs, and related techniques in breast cancer examination are reviewed, and the prospects of unsupervised learning and transfer learning for breast cancer diagnosis are discussed. Finally, the privacy protection of breast cancer patients is addressed.
Affiliation(s)
- Ya-nan Zhang, School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China; HRG International Institute (Hefei) of Research and Innovation, Hefei 230000, China
- Ke-rui Xia, HRG International Institute (Hefei) of Research and Innovation, Hefei 230000, China
- Chang-yi Li, Ben-li Wei, Bing Zhang: School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China
26. Lan K, Li G, Jie Y, Tang R, Liu L, Fong S. Convolutional neural network with group theory and random selection particle swarm optimizer for enhancing cancer image classification. Math Biosci Eng 2021; 18:5573-5591. PMID: 34517501; DOI: 10.3934/mbe.2021281.
Abstract
As an epitome of deep learning, the convolutional neural network (CNN) has shown its advantages in solving many real-world problems. Successful CNN applications in medical prognosis and diagnosis have been achieved in recent years. Their common goal is to extract insights from the subtle details of medical images by building a suitable CNN model with maximum accuracy and minimum error. CNN performance is extremely sensitive to parameter tuning for any given network structure. To address this concern, a novel self-tuning CNN model is proposed, characterized by a metaheuristic-based optimizer. A near-optimal set of parameters is found via the proposed method, namely group theory and random selection-based particle swarm optimization (GTRS-PSO). Insights into the symmetric essentials of the model structure and parameter correlations are extracted, followed by hierarchical partitioning of the parameter space; four operators on those partitions are designed for moving neighborhoods and formulating the swarm topology accordingly. The parameters are updated by a random selection strategy at each interval of partitions during the search process. Preliminary experiments on two radiology image datasets, breast cancer and lung cancer, are conducted for a comprehensive comparison of GTRS-PSO versus other optimization algorithms. The results show that a CNN with the GTRS-PSO optimizer achieves the best performance for cancer image classification, especially when there are symmetric components inside the data properties and model structures.
Collapse
Affiliation(s)
- Kun Lan
- Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, Macau 999078, China
- DACC Laboratory, Zhuhai Institutes of Advanced Technology of the Chinese Academy of Sciences, Zhuhai 519080, China
| | - Gloria Li
- Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, Macau 999078, China
- DACC Laboratory, Zhuhai Institutes of Advanced Technology of the Chinese Academy of Sciences, Zhuhai 519080, China
| | - Yang Jie
- Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, Macau 999078, China
- DACC Laboratory, Zhuhai Institutes of Advanced Technology of the Chinese Academy of Sciences, Zhuhai 519080, China
| | - Rui Tang
- Department of Management and Science and Information System, Faculty of Management and Economics, Kunming University of Science and Technology, Kunming 650093, China
| | - Liansheng Liu
- Department of Medical Imaging, First Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou 510405, China
| | - Simon Fong
- Department of Computer and Information Science, Faculty of Science and Technology, University of Macau, Macau 999078, China
- DACC Laboratory, Zhuhai Institutes of Advanced Technology of the Chinese Academy of Sciences, Zhuhai 519080, China
| |
Collapse
|
27
|
Samala RK, Chan HP, Hadjiiski L, Helvie MA. Risks of feature leakage and sample size dependencies in deep feature extraction for breast mass classification. Med Phys 2021; 48:2827-2837. [PMID: 33368376 PMCID: PMC8601676 DOI: 10.1002/mp.14678] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2020] [Revised: 11/27/2020] [Accepted: 12/06/2020] [Indexed: 12/20/2022] Open
Abstract
PURPOSE Transfer learning is commonly used in deep learning for medical imaging to alleviate the problem of limited available data. In this work, we studied the risk of feature leakage and its dependence on sample size when using a pretrained deep convolutional neural network (DCNN) as a feature extractor for the classification of breast masses in mammography. METHODS Feature leakage occurs when the training set is used for feature selection and classifier modeling while the cost function is guided by the validation performance or informed by the test performance. The high-dimensional feature space extracted from a pretrained DCNN suffers from the curse of dimensionality; feature subsets that provide excessively optimistic performance can be found for the validation set or the test set if the latter is allowed unlimited reuse during algorithm development. We designed a simulation study to examine feature leakage when using a DCNN as a feature extractor for mass classification in mammography. Four thousand five hundred and seventy-seven unique mass lesions were partitioned by patient into three sets: 3222 for training, 508 for validation, and 847 for independent testing. Three pretrained DCNNs, AlexNet, GoogLeNet, and VGG16, were first compared using the training set in fourfold cross-validation, and one was selected as the feature extractor. To assess generalization errors, the independent test set was sequestered as truly unseen cases. Training sets of sizes ranging from 10% to 75% of the available training set were simulated by random drawing, in addition to 100% of the training set. Three commonly used feature classifiers, the linear discriminant, the support vector machine, and the random forest, were evaluated. A sequential feature selection method was used to find feature subsets that could achieve high classification performance in terms of the area under the receiver operating characteristic curve (AUC) in the validation set.
The extent of feature leakage and the impact of training set size were analyzed by comparison to the performance in the unseen test set. RESULTS All three classifiers showed large generalization error between the validation set and the independent sequestered test set at all sample sizes. The generalization error decreased as the sample size increased. At 100% of the sample size, one classifier achieved an AUC as high as 0.91 on the validation set while the corresponding performance on the unseen test set only reached an AUC of 0.72. CONCLUSIONS Our results demonstrate that large generalization errors can occur in AI tools due to feature leakage. Without evaluation on unseen test cases, optimistically biased performance may be reported inadvertently, and can lead to unrealistic expectations and reduce confidence for clinical implementation.
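The leakage pattern this study warns about, scoring a feature search on the validation set and reporting that score, can be reproduced on pure noise. The sketch below is illustrative only (synthetic Gaussian features and random labels, not the study's mammography data): greedy forward selection driven by validation AUC climbs well above chance even though no real signal exists, while a sequestered test set stays near 0.5:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_feat = 100
sizes = {"train": 300, "val": 100, "test": 400}
X = {s: rng.normal(size=(n, n_feat)) for s, n in sizes.items()}
y = {s: rng.integers(0, 2, size=n) for s, n in sizes.items()}  # labels: pure noise

# Greedy forward selection scored on the *validation* set -- the leakage pattern.
chosen, best_auc = [], 0.0
for _ in range(5):
    best_f = None
    for f in range(n_feat):
        if f in chosen:
            continue
        cols = chosen + [f]
        clf = LogisticRegression().fit(X["train"][:, cols], y["train"])
        auc = roc_auc_score(y["val"], clf.predict_proba(X["val"][:, cols])[:, 1])
        if auc > best_auc:
            best_auc, best_f = auc, f
    if best_f is None:
        break
    chosen.append(best_f)

# A truly sequestered test set exposes the optimism: performance near chance.
clf = LogisticRegression().fit(X["train"][:, chosen], y["train"])
test_auc = roc_auc_score(y["test"], clf.predict_proba(X["test"][:, chosen])[:, 1])
```

`best_auc` (the leaked validation score) ends up well above `test_auc`, mirroring the 0.91-versus-0.72 gap the authors report at full sample size.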
Collapse
Affiliation(s)
- Ravi K Samala
- Department of Radiology, University of Michigan, Ann Arbor, MI, USA
| | - Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, MI, USA
| | | | - Mark A Helvie
- Department of Radiology, University of Michigan, Ann Arbor, MI, USA
| |
Collapse
|
28
|
Abstract
Because of the uncertainty of weather and the complexity of atmospheric movement, extreme weather has always been an important and difficult meteorological problem. Extreme weather events are also called high-impact weather; 'extreme' here means that the probability of occurrence is very small. Deep learning can automatically learn and train from large amounts of sample data to obtain excellent feature representations, which effectively improves the performance of various machine learning tasks and is widely used in computer vision, natural language processing, and other fields. After an introduction to deep learning, this article gives a preliminary summary of existing extreme weather prediction methods, including the use of recurrent neural networks and convolutional neural networks to predict weather phenomena. These approaches can automatically extract image features of extreme weather phenomena and, using a deep learning framework, predict the likelihood of extreme weather at a given location.
Collapse
|
29
|
Díaz O, Rodríguez-Ruiz A, Gubern-Mérida A, Martí R, Chevalier M. Are artificial intelligence systems useful in breast cancer screening programmes? RADIOLOGIA 2021. [DOI: 10.1016/j.rxeng.2020.11.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
30
|
Puttagunta M, Ravi S. Medical image analysis based on deep learning approach. MULTIMEDIA TOOLS AND APPLICATIONS 2021; 80:24365-24398. [PMID: 33841033 PMCID: PMC8023554 DOI: 10.1007/s11042-021-10707-4] [Citation(s) in RCA: 48] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/25/2020] [Revised: 11/28/2020] [Accepted: 02/10/2021] [Indexed: 05/05/2023]
Abstract
Medical imaging plays a significant role in clinical applications such as the early detection, monitoring, diagnosis, and treatment evaluation of various medical conditions. The basics of the principles and implementations of artificial neural networks and deep learning are essential for understanding medical image analysis in computer vision. The deep learning approach (DLA) to medical image analysis is emerging as a fast-growing research field. DLA has been widely used in medical imaging to detect the presence or absence of disease. This paper presents the development of artificial neural networks and a comprehensive analysis of DLA, which delivers promising medical imaging applications. Most DLA implementations concentrate on X-ray images, computerized tomography, mammography images, and digital histopathology images. The paper provides a systematic review of articles on the classification, detection, and segmentation of medical images based on DLA. This review guides researchers toward appropriate changes in medical image analysis based on DLA.
Collapse
Affiliation(s)
- Muralikrishna Puttagunta
- Department of Computer Science, School of Engineering and Technology, Pondicherry University, Pondicherry, India
| | - S. Ravi
- Department of Computer Science, School of Engineering and Technology, Pondicherry University, Pondicherry, India
| |
Collapse
|
31
|
Sobhani F, Robinson R, Hamidinekoo A, Roxanis I, Somaiah N, Yuan Y. Artificial intelligence and digital pathology: Opportunities and implications for immuno-oncology. Biochim Biophys Acta Rev Cancer 2021; 1875:188520. [PMID: 33561505 PMCID: PMC9062980 DOI: 10.1016/j.bbcan.2021.188520] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2020] [Revised: 01/04/2021] [Accepted: 01/30/2021] [Indexed: 02/08/2023]
Abstract
The field of immuno-oncology has expanded rapidly over the past decade, but key questions remain. How does tumour-immune interaction regulate disease progression? How can we prospectively identify patients who will benefit from immunotherapy? Identifying measurable features of the tumour immune-microenvironment which have prognostic or predictive value will be key to making meaningful gains in these areas. Recent developments in deep learning enable big-data analysis of pathological samples. Digital approaches allow data to be acquired, integrated and analysed far beyond what is possible with conventional techniques, and to do so efficiently and at scale. This has the potential to reshape what can be achieved in terms of volume, precision and reliability of output, enabling data for large cohorts to be summarised and compared. This review examines applications of artificial intelligence (AI) to important questions in immuno-oncology (IO). We discuss general considerations that need to be taken into account before AI can be applied in any clinical setting. We describe AI methods that have been applied to the field of IO to date and present several examples of their use.
Collapse
Affiliation(s)
- Faranak Sobhani
- Division of Molecular Pathology, The Institute of Cancer Research, London, UK; Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK.
| | - Ruth Robinson
- Division of Radiotherapy and Imaging, Institute of Cancer Research, The Royal Marsden NHS Foundation Trust, London, UK.
| | - Azam Hamidinekoo
- Division of Molecular Pathology, The Institute of Cancer Research, London, UK; Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK.
| | - Ioannis Roxanis
- The Breast Cancer Now Toby Robins Research Centre, The Institute of Cancer Research, London, UK.
| | - Navita Somaiah
- Division of Radiotherapy and Imaging, Institute of Cancer Research, The Royal Marsden NHS Foundation Trust, London, UK.
| | - Yinyin Yuan
- Division of Molecular Pathology, The Institute of Cancer Research, London, UK; Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK.
| |
Collapse
|
32
|
Chan HP, Hadjiiski LM, Samala RK. Computer-aided diagnosis in the era of deep learning. Med Phys 2021; 47:e218-e227. [PMID: 32418340 DOI: 10.1002/mp.13764] [Citation(s) in RCA: 99] [Impact Index Per Article: 33.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2019] [Revised: 05/13/2019] [Accepted: 05/13/2019] [Indexed: 12/15/2022] Open
Abstract
Computer-aided diagnosis (CAD) has been a major field of research for the past few decades. CAD uses machine learning methods to analyze imaging and/or nonimaging patient data and makes assessment of the patient's condition, which can then be used to assist clinicians in their decision-making process. The recent success of the deep learning technology in machine learning spurs new research and development efforts to improve CAD performance and to develop CAD for many other complex clinical tasks. In this paper, we discuss the potential and challenges in developing CAD tools using deep learning technology or artificial intelligence (AI) in general, the pitfalls and lessons learned from CAD in screening mammography and considerations needed for future implementation of CAD or AI in clinical use. It is hoped that the past experiences and the deep learning technology will lead to successful advancement and lasting growth in this new era of CAD, thereby enabling CAD to deliver intelligent aids to improve health care.
Collapse
Affiliation(s)
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, MI, 48109-5842, USA
| | - Lubomir M Hadjiiski
- Department of Radiology, University of Michigan, Ann Arbor, MI, 48109-5842, USA
| | - Ravi K Samala
- Department of Radiology, University of Michigan, Ann Arbor, MI, 48109-5842, USA
| |
Collapse
|
33
|
Díaz O, Rodríguez-Ruiz A, Gubern-Mérida A, Martí R, Chevalier M. Are artificial intelligence systems useful in breast cancer screening programs? RADIOLOGIA 2021; 63:236-244. [PMID: 33461750 DOI: 10.1016/j.rx.2020.11.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2020] [Revised: 11/03/2020] [Accepted: 11/16/2020] [Indexed: 12/24/2022]
Abstract
Population-based breast cancer screening programs are efficacious in reducing the mortality due to breast cancer. These programs use mammography to screen the women who are invited to participate. Digital mammography makes it possible to develop computer-assisted diagnosis (CAD) systems that promise to reduce the workload of radiologists participating in screening programs. However, various studies have shown that CAD results in a high rate of false positive diagnoses. Systems based on artificial intelligence are being more widely implemented, and studies have shown that these systems have better diagnostic performance than traditional CAD systems. This article explains the fundamentals of artificial intelligence systems and an overview of possible applications of these systems within the framework of breast cancer screening programs.
Collapse
Affiliation(s)
- O Díaz
- Departamento de Matemáticas e Informática, Universidad de Barcelona, Barcelona, España
| | | | | | - R Martí
- Instituto de Visión Artificial y Robótica (VICOROB), Universitat de Girona, Girona, España
| | - M Chevalier
- Física Médica, Departamento de Radiología, Rehabilitación y Fisioterapia, Universidad Complutense de Madrid, Madrid, España; Instituto de Investigación Sanitaria, Hospital Clínico San Carlos, Madrid, España.
| |
Collapse
|
34
|
ElOuassif B, Idri A, Hosni M, Abran A. Classification techniques in breast cancer diagnosis: A systematic literature review. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING: IMAGING & VISUALIZATION 2021. [DOI: 10.1080/21681163.2020.1811159] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Affiliation(s)
- Bouchra ElOuassif
- Department of Web and Mobile Engineering, Software Project Management Research Team, ENSIAS, Mohammed V University, Rabat, Morocco
| | - Ali Idri
- Department of Web and Mobile Engineering, Software Project Management Research Team, ENSIAS, Mohammed V University, Rabat, Morocco
| | - Mohamed Hosni
- Department of Web and Mobile Engineering, Software Project Management Research Team, ENSIAS, Mohammed V University, Rabat, Morocco
| | - Alain Abran
- Department of Software Engineering and Information Technology, École de Technologie Supérieure, University of Québec, Montreal, Canada
| |
Collapse
|
35
|
Xiao D, Li H, Sun X. Coal Classification Method Based on Improved Local Receptive Field-Based Extreme Learning Machine Algorithm and Visible-Infrared Spectroscopy. ACS OMEGA 2020; 5:25772-25783. [PMID: 33073102 PMCID: PMC7557221 DOI: 10.1021/acsomega.0c03069] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/25/2020] [Accepted: 09/02/2020] [Indexed: 06/02/2023]
Abstract
In the process of using coal, inaccurate determination of the coal type has a significant impact on production efficiency, environmental pollution, and economic loss. At present, the traditional classification of coal relies mainly on technicians' experience, which requires considerable manpower and time and is difficult to automate. This paper studies the application of visible-infrared spectroscopy and machine learning methods to coal identification and analysis, to provide guidance for coal mining and production, and explores a fast, high-precision method for coal identification. Because spectral data are high-dimensional, strongly correlated, and highly redundant, a local receptive field (LRF) is used to extract high-level features from the spectra, combined with an extreme learning machine (ELM). We also improve the coyote optimization algorithm (COA); the improved coyote optimization algorithm (I-COA) and the local receptive field-based extreme learning machine (ELM-LRF) are used to optimize the structure and training parameters of the extreme learning machine network. The experimental results show that the resulting coal classification model, based on this network and visible-infrared spectroscopy, can effectively identify coal types from spectral data. Compared with convolutional neural networks (CNN) and principal component analysis (PCA), the LRF extracts the spectral characteristics of coal more effectively.
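Setting aside the LRF front end and the I-COA tuner, the extreme learning machine at the core of this paper is simple enough to sketch: input-to-hidden weights are drawn at random and never trained, and only the output weights are solved in closed form by least squares. A minimal, illustrative version on a toy 1-D regression task (not the paper's spectral data):

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Extreme learning machine: random hidden layer, least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # input-to-hidden weights, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # random nonlinear features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # only these weights are solved for
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy target: 50 random tanh features are enough to fit sin(3x) closely.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0])
W, b, beta = elm_fit(X, y)
mse = float(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

The appeal for spectroscopy is that training reduces to one linear solve, which is what makes wrapping the model in a metaheuristic search over its structure affordable.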
Collapse
Affiliation(s)
- Dong Xiao
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China
- Liaoning Key Laboratory of Intelligent Diagnosis and Safety for Metallurgical Industry, Northeastern University, Shenyang 110819, China
| | - Hongzong Li
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China
| | - Xiaoyu Sun
- College of Resources and Civil Engineering, Northeastern University, Shenyang 110819, China
| |
Collapse
|
36
|
Bruno A, Ardizzone E, Vitabile S, Midiri M. A Novel Solution Based on Scale Invariant Feature Transform Descriptors and Deep Learning for the Detection of Suspicious Regions in Mammogram Images. JOURNAL OF MEDICAL SIGNALS & SENSORS 2020; 10:158-173. [PMID: 33062608 PMCID: PMC7528986 DOI: 10.4103/jmss.jmss_31_19] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2019] [Revised: 10/01/2019] [Accepted: 05/06/2020] [Indexed: 12/24/2022]
Abstract
BACKGROUND Deep learning methods have become popular for their high performance in the classification and detection of events in computer vision tasks. The transfer learning paradigm is widely adopted to apply pretrained convolutional neural networks (CNNs) to medical domains, overcoming the scarcity of public datasets. Investigations into the knowledge-inference abilities of transfer learning in the context of mammogram screening, and its possible combinations with unsupervised techniques, are in progress. METHODS We propose a novel technique for the detection of suspicious regions in mammograms that combines two approaches based on scale invariant feature transform (SIFT) keypoints and transfer learning with pretrained CNNs, such as PyramidNet and AlexNet, fine-tuned on digital mammograms generated by different mammography devices. Preprocessing, feature extraction, and selection steps characterize the SIFT-based method, while the deep learning network validates the candidate suspicious regions detected by the SIFT method. RESULTS The experiments conducted on both the mini-MIAS dataset and our new public dataset, Suspicious Region Detection on Mammogram from PP (SuReMaPP), of 384 digital mammograms exhibit high performance compared to several state-of-the-art methods. Our solution reaches 98% sensitivity and 90% specificity on SuReMaPP and 94% sensitivity and 91% specificity on mini-MIAS. CONCLUSIONS The experimental sessions conducted so far prompt us to investigate further the power of transfer learning over different CNNs and possible combinations with unsupervised techniques. Transfer learning accuracy may decrease when the training and testing images come from mammography devices with different properties.
Collapse
Affiliation(s)
- Alessandro Bruno
- Faculty of Media and Communication, Department - NCCA (National Centre for Computer Animation) at Bournemouth University, Poole, Dorset, United Kingdom
| | | | - Salvatore Vitabile
- Department of Biomedicine, Neuroscience and Advanced Diagnostic at Palermo University, Palermo, Italy
| | - Massimo Midiri
- Department of Biomedicine, Neuroscience and Advanced Diagnostic at Palermo University, Palermo, Italy
| |
Collapse
|
37
|
Trusculescu AA, Manolescu D, Tudorache E, Oancea C. Deep learning in interstitial lung disease-how long until daily practice. Eur Radiol 2020; 30:6285-6292. [PMID: 32537728 PMCID: PMC7554005 DOI: 10.1007/s00330-020-06986-4] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2020] [Revised: 03/28/2020] [Accepted: 05/27/2020] [Indexed: 12/19/2022]
Abstract
Interstitial lung diseases are a diverse group of disorders that involve inflammation and fibrosis of the interstitium, with overlapping clinical, radiological, and pathological features. They are an important cause of morbidity and mortality among lung diseases. This review describes computer-aided diagnosis systems centered on deep learning approaches that improve the diagnosis of interstitial lung diseases. We highlight the challenges to implementation in daily practice, especially in the early diagnosis of idiopathic pulmonary fibrosis (IPF). Developing a convolutional neural network (CNN) that could be deployed on any computer station and be accessible to non-academic centers is the next frontier that needs to be crossed. In the future, early diagnosis of IPF should be possible. CNNs might not only spare human resources but also reduce the costs spent on all the social and healthcare aspects of this deadly disease. Key Points • Deep learning algorithms are used in pattern recognition of different interstitial lung diseases. • High-resolution computed tomography plays a central role in the diagnosis and management of all interstitial lung diseases, especially fibrotic lung disease. • Developing an accessible algorithm that could be deployed on any computer station and be used in non-academic centers is the next frontier in the early diagnosis of idiopathic pulmonary fibrosis.
Collapse
Affiliation(s)
- Ana Adriana Trusculescu
- Department of Pulmonology, University of Medicine and Pharmacy "Victor Babes", Timisoara, Romania
| | - Diana Manolescu
- Department of Radiology, University of Medicine and Pharmacy "Victor Babes", Eftimie Murgu Square, Number 2, Timisoara, Romania.
| | - Emanuela Tudorache
- Department of Pulmonology, University of Medicine and Pharmacy "Victor Babes", Timisoara, Romania
| | - Cristian Oancea
- Department of Pulmonology, University of Medicine and Pharmacy "Victor Babes", Timisoara, Romania
| |
Collapse
|
38
|
Samala RK, Chan HP, Hadjiiski LM, Helvie MA, Richter CD. Generalization error analysis for deep convolutional neural network with transfer learning in breast cancer diagnosis. Phys Med Biol 2020; 65:105002. [PMID: 32208369 DOI: 10.1088/1361-6560/ab82e8] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Deep convolutional neural networks (DCNNs), now popularly called artificial intelligence (AI), have shown the potential to improve over the computer-assisted tools in medical imaging developed in past decades. A DCNN has millions of free parameters that need to be trained, but the training sample set is limited in size for most medical imaging tasks, so transfer learning is typically used. Automatic data mining may be an efficient way to enlarge the collected data set, but the data can be noisy, containing incorrect labels or even the wrong type of image. In this work we studied the generalization error of DCNNs with transfer learning in medical imaging for the task of classifying malignant and benign masses on mammograms. With a finite available data set, we simulated a training set containing corrupted data or noisy labels. The balance between learning and memorization of the DCNN was manipulated by varying the proportion of corrupted data in the training set. The generalization error of the DCNN was analyzed via the area under the receiver operating characteristic curve for the training and test sets and the weight changes after transfer learning. The study demonstrates that the transfer learning strategy of a DCNN for such tasks needs to be designed properly, taking into consideration the constraints of an available training set of limited size and quality for the classification task at hand, to minimize memorization and improve generalizability.
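The learning-versus-memorization trade-off described in this abstract can be illustrated without a DCNN: flip a fraction of training labels and watch a high-capacity model's accuracy on clean test data fall. A small sketch with a decision tree standing in for the over-parameterized network (synthetic data, illustrative only, not the paper's mammography experiment):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1200, n_features=20, random_state=0)
X_tr, y_tr, X_te, y_te = X[:800], y[:800], X[800:], y[800:]

def clean_test_accuracy(label_noise_frac):
    """Flip a fraction of training labels, fit a high-capacity model,
    and score it on the untouched test labels."""
    y_noisy = y_tr.copy()
    idx = rng.choice(len(y_noisy), size=int(label_noise_frac * len(y_noisy)),
                     replace=False)
    y_noisy[idx] = 1 - y_noisy[idx]
    tree = DecisionTreeClassifier(random_state=0)  # memorizes the corrupted labels
    return tree.fit(X_tr, y_noisy).score(X_te, y_te)

acc_clean = clean_test_accuracy(0.0)
acc_noisy = clean_test_accuracy(0.4)
# generalization degrades as the proportion of corrupted training labels grows
```

The same mechanism is what the authors probe in a DCNN by varying the corrupted-data proportion and tracking the train/test AUC gap.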
Collapse
Affiliation(s)
- Ravi K Samala
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109-5842, United States of America
| | | | | | | | | |
Collapse
|
39
|
Chan HP, Samala RK, Hadjiiski LM. CAD and AI for breast cancer-recent development and challenges. Br J Radiol 2020; 93:20190580. [PMID: 31742424 PMCID: PMC7362917 DOI: 10.1259/bjr.20190580] [Citation(s) in RCA: 80] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2019] [Revised: 11/13/2019] [Accepted: 11/17/2019] [Indexed: 12/15/2022] Open
Abstract
Computer-aided diagnosis (CAD) has been a popular area of research and development in the past few decades. In CAD, machine learning methods and multidisciplinary knowledge and techniques are used to analyze the patient information and the results can be used to assist clinicians in their decision making process. CAD may analyze imaging information alone or in combination with other clinical data. It may provide the analyzed information directly to the clinician or correlate the analyzed results with the likelihood of certain diseases based on statistical modeling of the past cases in the population. CAD systems can be developed to provide decision support for many applications in the patient care processes, such as lesion detection, characterization, cancer staging, treatment planning and response assessment, recurrence and prognosis prediction. The new state-of-the-art machine learning technique, known as deep learning (DL), has revolutionized speech and text recognition as well as computer vision. The potential of major breakthrough by DL in medical image analysis and other CAD applications for patient care has brought about unprecedented excitement of applying CAD, or artificial intelligence (AI), to medicine in general and to radiology in particular. In this paper, we will provide an overview of the recent developments of CAD using DL in breast imaging and discuss some challenges and practical issues that may impact the advancement of artificial intelligence and its integration into clinical workflow.
Collapse
Affiliation(s)
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, MI, United States
| | - Ravi K. Samala
- Department of Radiology, University of Michigan, Ann Arbor, MI, United States
| | | |
Collapse
|
40
|
Cho SI, Lee D, Jo SJ. Application of Basic Epidemiologic Principles and Electronic Health Records in a Deep Learning Prediction Model. JAMA Dermatol 2020; 156:473-474. [DOI: 10.1001/jamadermatol.2019.4922] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Affiliation(s)
- Soo Ick Cho
- Department of Dermatology, Seoul National University College of Medicine, Seoul, South Korea
| | - Dongheon Lee
- Interdisciplinary Program for Biomedical Engineering, Seoul National University, Seoul, South Korea
| | - Seong Jin Jo
- Department of Dermatology, Seoul National University College of Medicine, Seoul, South Korea
| |
Collapse
|
41
|
Ramadan SZ. Methods Used in Computer-Aided Diagnosis for Breast Cancer Detection Using Mammograms: A Review. JOURNAL OF HEALTHCARE ENGINEERING 2020; 2020:9162464. [PMID: 32300474 PMCID: PMC7091549 DOI: 10.1155/2020/9162464] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/10/2019] [Revised: 12/25/2019] [Accepted: 02/13/2020] [Indexed: 12/28/2022]
Abstract
According to the American Cancer Society's forecasts for 2019, there will be about 268,600 new cases of invasive breast cancer in women in the United States, about 62,930 new noninvasive cases, and about 41,760 deaths from breast cancer. As a result, there is a high demand for breast imaging specialists, as indicated in a recent report for the Institute of Medicine and National Research Council. One way to meet this demand is by developing Computer-Aided Diagnosis (CAD) systems for breast cancer detection and diagnosis using mammograms. This study aims to review recent advancements and developments in CAD systems for breast cancer detection and diagnosis using mammograms and to give an overview of the methods used in its steps, starting from the preprocessing and enhancement step and ending in the classification step. The current level of performance of CAD systems is encouraging but not enough to make them standalone clinical detection and diagnosis systems. Unless the performance of CAD systems is enhanced dramatically from its current level, by improving the existing methods, exploiting promising new methods in pattern recognition such as data augmentation in deep learning, and exploiting the advances in the computational power of computers, CAD systems will continue to be a second-opinion clinical procedure.
Collapse
Affiliation(s)
- Saleem Z. Ramadan
- Department of Industrial Engineering, German Jordanian University, Mushaqar 11180, Amman, Jordan
| |
Collapse
|
42
|
Lan K, Liu L, Li T, Chen Y, Fong S, Marques JAL, Wong RK, Tang R. Multi-view convolutional neural network with leader and long-tail particle swarm optimizer for enhancing heart disease and breast cancer detection. Neural Comput Appl 2020. [DOI: 10.1007/s00521-020-04769-y] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
|
43
|
El Naqa I, Haider MA, Giger ML, Ten Haken RK. Artificial Intelligence: reshaping the practice of radiological sciences in the 21st century. Br J Radiol 2020; 93:20190855. [PMID: 31965813 PMCID: PMC7055429 DOI: 10.1259/bjr.20190855] [Citation(s) in RCA: 46] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2019] [Revised: 01/12/2020] [Accepted: 01/13/2020] [Indexed: 12/15/2022] Open
Abstract
Advances in computing hardware and software platforms have led to the recent resurgence in artificial intelligence (AI) touching almost every aspect of our daily lives by its capability for automating complex tasks or providing superior predictive analytics. AI applications are currently spanning many diverse fields from economics to entertainment, to manufacturing, as well as medicine. Since modern AI's inception decades ago, practitioners in radiological sciences have been pioneering its development and implementation in medicine, particularly in areas related to diagnostic imaging and therapy. In this anniversary article, we embark on a journey to reflect on the learned lessons from past AI's chequered history. We further summarize the current status of AI in radiological sciences, highlighting, with examples, its impressive achievements and effect on re-shaping the practice of medical imaging and radiotherapy in the areas of computer-aided detection, diagnosis, prognosis, and decision support. Moving beyond the commercial hype of AI into reality, we discuss the current challenges to overcome, for AI to achieve its promised hope of providing better precision healthcare for each patient while reducing cost burden on their families and the society at large.
Affiliation(s)
- Issam El Naqa: Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA
- Masoom A Haider: Department of Medical Imaging and Lunenfeld-Tanenbaum Research Institute, University of Toronto, Toronto, ON, Canada
- Randall K Ten Haken: Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA
44. Tiwari S. A blur classification approach using deep convolution neural network. International Journal of Information System Modeling and Design 2020. [DOI: 10.4018/ijismd.2020010106]
Abstract
Computer vision-based gesture identification is designed to recognize human actions with the help of images. During gesture image acquisition, images suffer various degradations. The method of recovering these degraded images is called restoration. In the case of blind restoration of such a degraded image, where blur information is unavailable, it is essential to determine the exact blur type. This article presents a convolution neural network model for blur classification that categorizes the blur found in a hand-gesture image into one of four blur categories: motion, defocus, Gaussian, and box blur. The simulation results demonstrate the improved precision of the CNN model when compared to the MLP model.
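The four blur categories the article distinguishes can be simulated by convolving an image with the corresponding point-spread-function kernels, which is how training data for such a classifier is typically synthesized. A minimal numpy sketch follows; the kernel size, horizontal motion direction, and sigma default are illustrative assumptions, not details taken from the article.

```python
import numpy as np

def blur_kernel(kind: str, size: int = 5, sigma: float = 1.0) -> np.ndarray:
    """Return a normalized 2-D kernel for one of the four blur types."""
    r = size // 2
    if kind == "box":                      # uniform averaging over the window
        k = np.ones((size, size))
    elif kind == "motion":                 # streak along one row (horizontal motion)
        k = np.zeros((size, size))
        k[r, :] = 1.0
    elif kind == "defocus":                # disk model of a circular aperture
        y, x = np.mgrid[-r:r + 1, -r:r + 1]
        k = (x ** 2 + y ** 2 <= r ** 2).astype(float)
    elif kind == "gaussian":               # isotropic Gaussian
        y, x = np.mgrid[-r:r + 1, -r:r + 1]
        k = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    else:
        raise ValueError(f"unknown blur type: {kind}")
    return k / k.sum()                     # normalize so brightness is preserved
```

Convolving a sharp gesture image with each kernel yields the four degraded classes the CNN is trained to tell apart.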
Affiliation(s)
- Shamik Tiwari: School of Computer Science, University of Petroleum and Energy Studies, Dehradun, India
45. Gao F, Wu T, Chu X, Yoon H, Xu Y, Patel B. Deep residual inception encoder–decoder network for medical imaging synthesis. IEEE J Biomed Health Inform 2020; 24:39-49. [DOI: 10.1109/jbhi.2019.2912659]
46. Chan HP, Samala RK, Hadjiiski LM, Zhou C. Deep learning in medical image analysis. Adv Exp Med Biol 2020; 1213:3-21. [PMID: 32030660] [PMCID: PMC7442218] [DOI: 10.1007/978-3-030-33128-3_1]
Abstract
Deep learning is the state-of-the-art machine learning approach. Its success in many pattern recognition applications has brought excitement and high expectations that deep learning, or artificial intelligence (AI), can bring revolutionary changes in health care. Early studies applying deep learning to lesion detection or classification have reported performance superior to conventional techniques, or even to radiologists in some tasks. The potential of applying deep-learning-based medical image analysis to computer-aided diagnosis (CAD), thus providing decision support to clinicians and improving the accuracy and efficiency of various diagnostic and treatment processes, has spurred new research and development efforts in CAD. Despite the optimism in this new era of machine learning, the development and implementation of CAD or AI tools in clinical practice face many challenges. In this chapter, we discuss some of these issues and the efforts needed to develop robust deep-learning-based CAD tools, integrate them into the clinical workflow, and thereby advance towards the goal of providing reliable intelligent aids for patient care.
Affiliation(s)
- Heang-Ping Chan: Department of Radiology, University of Michigan, Ann Arbor, MI, USA
- Ravi K Samala: Department of Radiology, University of Michigan, Ann Arbor, MI, USA
- Chuan Zhou: Department of Radiology, University of Michigan, Ann Arbor, MI, USA
47. Ma J, Song Y, Tian X, Hua Y, Zhang R, Wu J. Survey on deep learning for pulmonary medical imaging. Front Med 2019; 14:450-469. [PMID: 31840200] [DOI: 10.1007/s11684-019-0726-4]
Abstract
As a promising method in artificial intelligence, deep learning has proven successful in domains ranging from acoustics and images to natural language processing. With medical imaging becoming an important part of disease screening and diagnosis, deep learning-based approaches have emerged as powerful techniques for medical image analysis. In this process, feature representations are learned directly and automatically from data, leading to remarkable breakthroughs in the medical field. This paper reviews the major deep learning techniques in this time of rapid evolution and summarizes some of their key contributions and state-of-the-art outcomes. The topics include classification, detection, and segmentation tasks in medical image analysis with respect to pulmonary medical images, datasets, and benchmarks. A comprehensive overview of these methods as applied to various lung diseases, including pulmonary nodule diseases, pulmonary embolism, pneumonia, and interstitial lung disease, is also provided. Lastly, the application of deep learning techniques to medical images and an analysis of their future challenges and potential directions are discussed.
Affiliation(s)
- Yang Song: Dalian Municipal Central Hospital Affiliated to Dalian Medical University, Dalian, 116033, China
- Xi Tian: InferVision, Beijing, 100020, China
- Jianlin Wu: Affiliated Zhongshan Hospital of Dalian University, Dalian, 116001, China
48. Tustison NJ, Avants BB, Gee JC. Learning image-based spatial transformations via convolutional neural networks: a review. Magn Reson Imaging 2019; 64:142-153. [DOI: 10.1016/j.mri.2019.05.037]
49. Convolutional neural network based Alzheimer's disease classification from magnetic resonance brain images. Cogn Syst Res 2019. [DOI: 10.1016/j.cogsys.2018.12.015]
50. Zhang S, Han F, Liang Z, Tan J, Cao W, Gao Y, Pomeroy M, Ng K, Hou W. An investigation of CNN models for differentiating malignant from benign lesions using small pathologically proven datasets. Comput Med Imaging Graph 2019; 77:101645. [PMID: 31454710] [DOI: 10.1016/j.compmedimag.2019.101645]
Abstract
Cancer has been one of the most threatening diseases to human health, and many efforts have been devoted to advancing radiology and transformative tools (e.g. non-invasive computed tomographic, or CT, imaging) to detect cancer in its early stages. One of the major goals is to distinguish malignant from benign lesions. In recent years, deep learning (DL), e.g. the convolutional neural network (CNN), has shown encouraging classification performance on medical images. However, DL algorithms always need large datasets with ground truth, yet in the medical imaging field, especially for cancer imaging, it is difficult to collect such large volumes of images with pathological information. Therefore, strategies are needed to learn effectively from small datasets via CNN models. Toward that goal, this paper explores two CNN models, focusing extensively on expanding the training samples from two small pathologically proven datasets (a colorectal polyp dataset and a lung nodule dataset) and then differentiating malignant from benign lesions. Experimental outcomes indicate that even with very small datasets of fewer than 70 subjects, malignant lesions can be successfully differentiated from benign ones via the proposed CNN models: the average AUCs (area under the receiver operating characteristic curve) for differentiating colorectal polyps and pulmonary nodules are 0.86 and 0.71, respectively. Our experiments further demonstrate that for these two small datasets, instead of studying only the original raw CT images, feeding additional image features, such as the local binary pattern of the lesions, into the CNN models can significantly improve classification performance. In addition, we find that our voxel-level CNN model performs better when facing small and unbalanced datasets.
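The local binary pattern feature that the authors feed into their CNNs alongside the raw CT values can be computed in a few lines of numpy. The sketch below is a minimal 2-D, 8-neighbour variant; the 3×3 neighbourhood, bit ordering, and zero-padded borders are simplifying assumptions, not details taken from the paper.

```python
import numpy as np

def lbp_3x3(img: np.ndarray) -> np.ndarray:
    """8-neighbour local binary pattern over a 2-D image.

    Each interior pixel gets an 8-bit code: one bit per neighbour,
    set when that neighbour is >= the centre value. Border pixels
    are left as zero for simplicity.
    """
    img = img.astype(float)
    out = np.zeros_like(img, dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    # clockwise neighbour offsets starting at the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(centre.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        # shifted view of the image aligned with the centre pixels
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        code |= (neigh >= centre).astype(np.uint8) << bit
    out[1:-1, 1:-1] = code
    return out
```

Stacking such a texture map as an extra input channel next to the raw intensities is one way to realize the "additional image features" idea the abstract describes.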
Affiliation(s)
- Shu Zhang: Department of Radiology, Stony Brook University, Stony Brook, NY, 11794 USA
- Fangfang Han: Northeastern University, Shenyang, Liaoning, 110819 PR China
- Zhengrong Liang: Department of Radiology; Department of Biomedical Engineering; Department of Electrical & Computer Engineering, Stony Brook University, Stony Brook, NY, 11794 USA
- Jiaxing Tan: Department of Computer Science, City University of New York, the Graduate Center, NY, 10016 USA
- Weiguo Cao: Department of Radiology, Stony Brook University, Stony Brook, NY, 11794 USA
- Yongfeng Gao: Department of Radiology, Stony Brook University, Stony Brook, NY, 11794 USA
- Marc Pomeroy: Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY, 11794 USA
- Kenneth Ng: Department of Electrical & Computer Engineering, Stony Brook University, Stony Brook, NY, 11794 USA
- Wei Hou: Department of Preventive Medicine, Stony Brook University, Stony Brook, NY, 11794 USA