1
Nomura Y, Hanaoka S, Hayashi N, Yoshikawa T, Koshino S, Sato C, Tatsuta M, Tanaka Y, Kano S, Nakaya M, Inui S, Kusakabe M, Nakao T, Miki S, Watadani T, Nakaoka R, Shimizu A, Abe O. Performance changes due to differences among annotating radiologists for training data in computerized lesion detection. Int J Comput Assist Radiol Surg 2024; 19:1527-1536. [PMID: 38625446 DOI: 10.1007/s11548-024-03136-9]
Abstract
PURPOSE The quality and bias of annotations by annotators (e.g., radiologists) affect the performance of machine-learning-based computer-aided detection (CAD) software. We hypothesized that differences in years of image-interpretation experience among radiologists contribute to annotation variability. In this study, we focused on how the performance of CAD software changes with retraining by incorporating cases annotated by radiologists with varying experience. METHODS We used two types of CAD software, for lung nodule detection in chest computed tomography images and for cerebral aneurysm detection in magnetic resonance angiography images. Twelve radiologists with different years of experience independently annotated the lesions, and the performance changes were investigated by repeating the retraining of the CAD software twice, with the addition of cases annotated by each radiologist. Additionally, we investigated the effects of retraining using integrated annotations from multiple radiologists. RESULTS The performance of the CAD software after retraining differed among annotating radiologists. In some cases, the performance was degraded compared with that of the initial software. Retraining using integrated annotations showed different performance trends depending on the target CAD software; notably, in cerebral aneurysm detection, performance decreased compared with using annotations from a single radiologist. CONCLUSIONS Although the performance of the CAD software after retraining varied among the annotating radiologists, no direct correlation with their experience was found. When integrated annotations from multiple radiologists were used, the performance trends differed according to the type of CAD software.
Affiliation(s)
- Yukihiro Nomura
- Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, 263-8522, Japan.
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan.
- Shouhei Hanaoka
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Naoto Hayashi
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Saori Koshino
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Chiaki Sato
- Department of Radiology, Tokyo Metropolitan Bokutoh Hospital, Tokyo, Japan
- Momoko Tatsuta
- Department of Diagnostic Radiology, Kitasato University Hospital, Sagamihara, Kanagawa, Japan
- Yuya Tanaka
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Shintaro Kano
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Moto Nakaya
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Shohei Inui
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Takahiro Nakao
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Soichiro Miki
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, Tokyo, Japan
- Takeyuki Watadani
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Ryusuke Nakaoka
- Division of Medical Devices, National Institute of Health Sciences, Kawasaki, Kanagawa, Japan
- Akinobu Shimizu
- Institute of Engineering, Tokyo University of Agriculture and Technology, Tokyo, Japan
- Osamu Abe
- Department of Radiology, The University of Tokyo Hospital, Tokyo, Japan
- Division of Radiology and Biomedical Engineering, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
2
Yamada A, Hanaoka S, Takenaga T, Miki S, Yoshikawa T, Nomura Y. Investigation of distributed learning for automated lesion detection in head MR images. Radiol Phys Technol 2024:10.1007/s12194-024-00827-5. [PMID: 39048847 DOI: 10.1007/s12194-024-00827-5]
Abstract
In this study, we investigated the application of distributed learning, including federated learning and cyclical weight transfer, in the development of computer-aided detection (CADe) software for (1) cerebral aneurysm detection in magnetic resonance (MR) angiography images and (2) brain metastasis detection in brain contrast-enhanced MR images. We used datasets collected from various institutions, scanner vendors, and magnetic field strengths for each target CADe software. We compared the performance of multiple strategies, including a centralized strategy, in which software development is conducted at a development institution after collecting de-identified data from multiple institutions. Our results showed that the performance of CADe software trained through distributed learning was equal to or better than that trained through the centralized strategy. However, the distributed learning strategy that achieved the highest performance depended on the target CADe software. Hence, distributed learning can serve as one of the strategies for CADe software development using data collected from multiple institutions.
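The two distributed-learning strategies named in this abstract, federated averaging and cyclical weight transfer, can be sketched in a few lines of NumPy. This is an illustrative toy under assumed interfaces (`federated_average`, `cyclical_transfer`, and the `train_fn` callback are hypothetical names), not the authors' implementation.

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """FedAvg-style aggregation: average each weight tensor across sites,
    weighted by the size of each site's local dataset."""
    total = sum(site_sizes)
    n_tensors = len(site_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(site_weights, site_sizes))
        for i in range(n_tensors)
    ]

def cyclical_transfer(weights, sites, train_fn):
    """Cyclical weight transfer: pass the model around the sites in a fixed
    order, training locally at each stop and carrying the weights forward."""
    for site in sites:
        weights = train_fn(weights, site)
    return weights
```

In federated learning the sites train in parallel and only weight updates leave each institution; in cyclical transfer a single model visits the institutions in turn. Either way, images never need to be pooled centrally, which is the point of contrast with the centralized strategy described above.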
Affiliation(s)
- Aiki Yamada
- Department of Medical Engineering, Graduate School of Science and Engineering, Chiba University, 1-33 Yayoi-Cho, Inage-Ku, Chiba, 263-8522, Japan.
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan.
- Shouhei Hanaoka
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Tomomi Takenaga
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Soichiro Miki
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Yukihiro Nomura
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoi-Cho, Inage-Ku, Chiba, 263-8522, Japan
3
Saha PK, Nadeem SA, Comellas AP. A Survey on Artificial Intelligence in Pulmonary Imaging. Wiley Interdiscip Rev Data Min Knowl Discov 2023; 13:e1510. [PMID: 38249785 PMCID: PMC10796150 DOI: 10.1002/widm.1510]
Abstract
Over the last decade, deep learning (DL) has brought about a paradigm shift in computer vision and image recognition, creating widespread opportunities for using artificial intelligence in research as well as industrial applications. DL has been extensively studied in medical imaging applications, including those related to pulmonary diseases. Chronic obstructive pulmonary disease, asthma, lung cancer, pneumonia, and, more recently, COVID-19 are common lung diseases affecting nearly 7.4% of the world's population. Pulmonary imaging has been widely investigated toward improving our understanding of disease etiologies, early diagnosis, and assessment of disease progression and clinical outcomes. DL has been broadly applied to solve various pulmonary image processing challenges, including classification, recognition, registration, and segmentation. This paper presents a survey of pulmonary diseases; the roles of imaging in translational and clinical pulmonary research; and the applications of different DL architectures and methods in pulmonary imaging, with emphasis on DL-based segmentation of major pulmonary anatomies such as lung volumes, lung lobes, pulmonary vessels, and airways, as well as thoracic musculoskeletal anatomies related to pulmonary diseases.
Affiliation(s)
- Punam K Saha
- Departments of Radiology and Electrical and Computer Engineering, University of Iowa, Iowa City, IA, 52242
4
Ahn JS, Shin S, Yang SA, Park EK, Kim KH, Cho SI, Ock CY, Kim S. Artificial Intelligence in Breast Cancer Diagnosis and Personalized Medicine. J Breast Cancer 2023; 26:405-435. [PMID: 37926067 PMCID: PMC10625863 DOI: 10.4048/jbc.2023.26.e45]
Abstract
Breast cancer is a significant cause of cancer-related mortality in women worldwide. Early and precise diagnosis is crucial and can markedly enhance clinical outcomes. The rise of artificial intelligence (AI) has ushered in a new era, notably in image analysis, paving the way for major advancements in breast cancer diagnosis and individualized treatment regimens. In the diagnostic workflow for patients with breast cancer, the role of AI encompasses screening, diagnosis, staging, biomarker evaluation, prognostication, and therapeutic response prediction. Although its potential is immense, its complete integration into clinical practice is challenging. In particular, these challenges include the imperatives for extensive clinical validation, model generalizability, navigating the "black-box" conundrum, and pragmatic considerations of embedding AI into everyday clinical environments. In this review, we comprehensively explore the diverse applications of AI in breast cancer care, underlining its transformative promise and existing impediments. In radiology, we specifically address AI in mammography, tomosynthesis, risk prediction models, and supplementary imaging methods, including magnetic resonance imaging and ultrasound. In pathology, our focus is on AI applications for pathologic diagnosis, evaluation of biomarkers, and predictions related to genetic alterations, treatment response, and prognosis in the context of breast cancer diagnosis and treatment. Our discussion underscores the transformative potential of AI in breast cancer management and emphasizes the importance of focused research to realize the full spectrum of benefits of AI in patient care.
Affiliation(s)
- Seokhwi Kim
- Department of Pathology, Ajou University School of Medicine, Suwon, Korea
- Department of Biomedical Sciences, Ajou University Graduate School of Medicine, Suwon, Korea.
5
Sahiner B, Chen W, Samala RK, Petrick N. Data drift in medical machine learning: implications and potential remedies. Br J Radiol 2023; 96:20220878. [PMID: 36971405 PMCID: PMC10546450 DOI: 10.1259/bjr.20220878]
Abstract
Data drift refers to differences between the data used in training a machine learning (ML) model and that applied to the model in real-world operation. Medical ML systems can be exposed to various forms of data drift, including differences between the data sampled for training and used in clinical operation, differences between medical practices or context of use between training and clinical use, and time-related changes in patient populations, disease patterns, and data acquisition, to name a few. In this article, we first review the terminology used in ML literature related to data drift, define distinct types of drift, and discuss in detail potential causes within the context of medical applications with an emphasis on medical imaging. We then review the recent literature regarding the effects of data drift on medical ML systems, which overwhelmingly show that data drift can be a major cause for performance deterioration. We then discuss methods for monitoring data drift and mitigating its effects with an emphasis on pre- and post-deployment techniques. Some of the potential methods for drift detection and issues around model retraining when drift is detected are included. Based on our review, we find that data drift is a major concern in medical ML deployment and that more research is needed so that ML models can identify drift early, incorporate effective mitigation strategies and resist performance decay.
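As a concrete illustration of the post-deployment monitoring discussed in this abstract, one common check compares a feature's production distribution against the training-time reference with a two-sample Kolmogorov-Smirnov test. The feature, threshold, and synthetic data below are assumptions for the sketch, not taken from the article.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(reference, production, alpha=0.01):
    """Flag covariate drift when the KS test rejects the hypothesis that the
    two samples come from the same distribution at significance level alpha."""
    _stat, p_value = ks_2samp(reference, production)
    return p_value < alpha

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5000)  # e.g., a mean-intensity feature at training time
shifted = rng.normal(0.8, 1.0, 5000)    # an acquisition change shifts the feature
print(drift_detected(reference, shifted))  # True: a 0.8-sd mean shift is easily detected
```

In practice such a monitor would run on incoming cases in batches, with the alert threshold and the monitored features (image statistics, metadata, or model outputs) chosen per deployment.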
Affiliation(s)
- Berkman Sahiner
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, MD 20993-0002
- Weijie Chen
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, MD 20993-0002
- Ravi K. Samala
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, MD 20993-0002
- Nicholas Petrick
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, MD 20993-0002
6
Armato SG, Drukker K, Hadjiiski L. AI in medical imaging grand challenges: translation from competition to research benefit and patient care. Br J Radiol 2023; 96:20221152. [PMID: 37698542 PMCID: PMC10546459 DOI: 10.1259/bjr.20221152]
Abstract
Artificial intelligence (AI), in one form or another, has been a part of medical imaging for decades. The recent evolution of AI into approaches such as deep learning has dramatically accelerated the application of AI across a wide range of radiologic settings. Despite the promises of AI, developers and users of AI technology must be fully aware of its potential biases and pitfalls, and this knowledge must be incorporated throughout the AI system development pipeline that involves training, validation, and testing. Grand challenges offer an opportunity to advance the development of AI methods for targeted applications and provide a mechanism for both directing and facilitating the development of AI systems. In the process, a grand challenge centralizes (with the challenge organizers) the burden of providing a valid benchmark test set to assess performance and generalizability of participants' models and the collection and curation of image metadata, clinical/demographic information, and the required reference standard. The most relevant grand challenges are those designed to maximize the open-science nature of the competition, with code and trained models deposited for future public access. The ultimate goal of AI grand challenges is to foster the translation of AI systems from competition to research benefit and patient care. Rather than reference the many medical imaging grand challenges that have been organized by groups such as MICCAI, RSNA, AAPM, and grand-challenge.org, this review assesses the role of grand challenges in promoting AI technologies for research advancement and for eventual clinical implementation, including their promises and limitations.
Affiliation(s)
- Samuel G Armato
- Department of Radiology, The University of Chicago, Chicago, Illinois, USA
- Karen Drukker
- Department of Radiology, The University of Chicago, Chicago, Illinois, USA
- Lubomir Hadjiiski
- Department of Radiology, University of Michigan, Ann Arbor, Michigan, USA
7
Malik M, Yasmin S, Kumar A, Hassan Y, Rizvi Y, Iffat. Can Artificial Intelligence Beat Humans in Detecting Breast Malignancy on Mammograms? Cureus 2023; 15:e46208. [PMID: 37908910 PMCID: PMC10614479 DOI: 10.7759/cureus.46208]
Abstract
BACKGROUND The study aimed to identify how useful computer-aided detection (CAD) could be in reducing false-negative reporting in mammography and in detecting breast cancer at an early stage, as early detection is the best protection. MATERIALS AND METHODS This retrospective study was conducted in a tertiary care setup at Atomic Energy Cancer Hospital, Nuclear Medicine, Oncology and Radiotherapy Institute (AECH-NORI), where 33 patients with suspicious findings on mammography and subsequent biopsy-proven malignancy were included. The findings of mammography, including the lesion type, breast parenchymal density, and sensitivity of CAD detection, as well as the final biopsy results, were recorded. A second group of 40 normal screening mammograms was also included, from women who had no symptoms, had Breast Imaging-Reporting and Data System category I (BI-RADS I) mammograms, and had no pathology identified on correlative sonomammography. RESULTS A total of 35 masses, 11 pleomorphic clusters of microcalcification, five clustered foci of macrocalcification, nine lesions combining a mass with a pleomorphic cluster of microcalcification, and two pleomorphic clusters of microcalcification without a formed mass were included. The CAD system identified 26 masses (74%), eight of the 11 pleomorphic clusters of microcalcification (72%), all five foci of macrocalcification (100%), six of the nine combined lesions (66%), and both pleomorphic clusters of microcalcification without a formed mass (100%). The overall sensitivity of the CAD system was 75.8%. CAD identified 13 out of 16 masses with invasive ductal carcinoma (81.3%), eight out of nine lesions proven as invasive ductal carcinoma with ductal carcinoma in situ (DCIS) (88.9%), two out of five masses with invasive lobular carcinoma (40%), four out of four masses with invasive mammary carcinoma (100%), and zero out of one lesion identified as medullary carcinoma (0%). CONCLUSION CAD performed better with combined lesions, accurately marked pleomorphic clusters of microcalcification, and identified small lesions in predominantly fibrofatty parenchymal density, but it was not reliable in dense breasts, areas of asymmetric increased density, summation artifacts, edematous breast parenchyma, and retroareolar lesions. It also performed poorly with ill-defined lesions of invasive lobular carcinoma. Human intelligence hence beats CAD for the diagnosis of breast malignancy on mammograms, as per our experience.
Affiliation(s)
- Mariam Malik
- Radiology, Atomic Energy Cancer Hospital, Nuclear Medicine, Oncology and Radiotherapy Institute (NORI), Islamabad, PAK
- Saeeda Yasmin
- Internal Medicine, Fatima Jinnah Medical University, Lahore, PAK
- Anish Kumar
- Internal Medicine, Ghulam Muhammad Mahar Medical College and Hospital, Sukkur, PAK
- Yumna Hassan
- Internal Medicine, Insight Hospital and Medical Center Chicago, Chicago, USA
- Yusra Rizvi
- Internal Medicine, Dow University of Health Sciences, Karachi, PAK
- Iffat
- Radiology, Atomic Energy Cancer Hospital, Nuclear Medicine, Oncology and Radiotherapy Institute (NORI), Islamabad, PAK
8
Si T, Patra DK, Mallik S, Bandyopadhyay A, Sarkar A, Qin H. Identification of breast lesion through integrated study of gorilla troops optimization and rotation-based learning from MRI images. Sci Rep 2023; 13:11577. [PMID: 37463919 PMCID: PMC10354050 DOI: 10.1038/s41598-023-36300-3]
Abstract
Breast cancer has emerged as the most life-threatening disease among women around the world. Early detection and treatment of breast cancer are thought to reduce the need for surgery and boost the survival rate. Magnetic Resonance Imaging (MRI) segmentation techniques for breast cancer diagnosis are investigated in this article. Kapur's entropy-based multilevel thresholding is used in this study to determine optimal values for breast DCE-MRI lesion segmentation using Gorilla Troops Optimization (GTO). An improved GTO, called GTORBL, is developed by incorporating rotational opposition-based learning (RBL) into GTO and is applied to the same problem. The proposed approaches are tested on 100 T2-weighted sagittal (T2 WS) DCE-MRI slices from 20 patients. The proposed approaches are compared with the Tunicate Swarm Algorithm (TSA), Particle Swarm Optimization (PSO), the Arithmetic Optimization Algorithm (AOA), the Slime Mould Algorithm (SMA), Multi-verse Optimization (MVO), Hidden Markov Random Field (HMRF), Improved Markov Random Field (IMRF), and Conventional Markov Random Field (CMRF). The Dice Similarity Coefficient (DSC), sensitivity, and accuracy of the proposed GTO-based approach are [Formula: see text], [Formula: see text], and [Formula: see text], respectively. The proposed GTORBL-based segmentation method achieves an accuracy of [Formula: see text], a sensitivity of [Formula: see text], and a DSC of [Formula: see text]. The one-way ANOVA test followed by Tukey HSD and the Wilcoxon Signed Rank Test are used to examine the results. Furthermore, Multi-Criteria Decision Making is used to evaluate overall performance focused on sensitivity, accuracy, false-positive rate, precision, specificity, [Formula: see text]-score, Geometric-Mean, and DSC. According to both quantitative and qualitative findings, the proposed strategies outperform the other compared methodologies.
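Kapur's entropy, the objective this abstract's metaheuristics maximize, is easy to state for a single threshold: split the grayscale histogram at index t and sum the Shannon entropies of the two resulting classes. The sketch below is illustrative only, with an exhaustive search standing in for the GTO metaheuristic; the function names are ours, not the paper's.

```python
import numpy as np

def kapur_entropy(hist, t):
    """Sum of the Shannon entropies of the two classes obtained by splitting
    the histogram at threshold index t."""
    p = hist / hist.sum()
    total = 0.0
    for cls in (p[:t], p[t:]):
        w = cls.sum()                 # probability mass of this class
        if w == 0:
            continue
        q = cls[cls > 0] / w          # class-conditional probabilities
        total += -np.sum(q * np.log(q))
    return total

def best_threshold(hist):
    """Exhaustive stand-in for GTO: pick the split maximizing Kapur's entropy."""
    return max(range(1, len(hist)), key=lambda t: kapur_entropy(hist, t))
```

Multilevel thresholding generalizes this to several thresholds at once, where exhaustive search becomes infeasible and metaheuristics such as GTO (or PSO, TSA, and the other methods compared in the paper) search the threshold space instead.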
Affiliation(s)
- Tapas Si
- Department of Computer Science & Engineering, University of Engineering & Management, Jaipur, GURUKUL, Sikar Road (NH-11), Udaipuria Mod, Jaipur, Rajasthan, 303807, India
- Dipak Kumar Patra
- Department of Computer Science, Hijli College, Kharagpur, West Bengal, 721306, India
- Saurav Mallik
- Department of Environmental Health, Harvard T H Chan School of Public Health, Boston, MA, USA.
- Anjan Bandyopadhyay
- School of Computer Engineering, Kalinga Institute of Industrial Technology (KIIT), Bhubaneswar, Odisha, India
- Achyuth Sarkar
- Department of Computer Science & Engineering, National Institute of Technology Arunachal Pradesh, Arunachal Pradesh, 791113, India
- Hong Qin
- Department of Computer Science and Engineering, University of Tennessee at Chattanooga, Chattanooga, TN, USA.
9
Iqbal S, Qureshi AN, Li J, Choudhry IA, Mahmood T. Dynamic learning for imbalanced data in learning chest X-ray and CT images. Heliyon 2023; 9:e16807. [PMID: 37313141 PMCID: PMC10258426 DOI: 10.1016/j.heliyon.2023.e16807]
Abstract
Massive annotated datasets are necessary for deep learning networks. When a topic is being researched for the first time, as in the case of a novel viral epidemic, working with limited annotated datasets can be difficult. Additionally, the datasets in this situation are quite imbalanced, with limited findings available for significant instances of the novel illness. We offer a technique that allows a class-balancing algorithm to learn and detect lung disease signs from chest X-ray and CT images. Deep learning techniques are used to train and evaluate images, enabling the extraction of basic visual attributes. The training objects' characteristics, instances, categories, and relative data modeling are all represented probabilistically. It is possible to identify a minority category in the classification process by using an imbalance-based sample analyzer. To address the imbalance problem, learning samples from the minority class are examined. A Support Vector Machine (SVM) is used to categorize images after clustering. Physicians and medical professionals can use the CNN model to validate their initial assessments of malignant and benign categorization. The proposed technique for class imbalance, 3-Phase Dynamic Learning (3PDL), and the parallel CNN model, Hybrid Feature Fusion (HFF), for multiple modalities achieve a high F1 score of 96.83 and a precision of 96.87; this outstanding accuracy and generalization suggest that the approach may be utilized to create a pathologist's assistance tool.
Affiliation(s)
- Saeed Iqbal
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Pakistan
- Adnan N. Qureshi
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Pakistan
- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China
- Beijing Engineering Research Center for IoT Software and Systems, 100124, China
- Imran Arshad Choudhry
- Department of Computer Science, Faculty of Information Technology & Computer Science, University of Central Punjab, Lahore, Pakistan
- Tariq Mahmood
- Faculty of Information Sciences, University of Education, Vehari Campus, Vehari, 61100, Pakistan
- Artificial Intelligence and Data Analytics (AIDA) Lab, College of Computer & Information Sciences (CCIS), Prince Sultan University, Riyadh, 11586, Kingdom of Saudi Arabia
10
Arce S, Vijay A, Yim E, Spiguel LR, Hanna M. Evaluation of an Artificial Intelligence System for Detection of Invasive Lobular Carcinoma on Digital Mammography. Cureus 2023; 15:e38770. [PMID: 37303390 PMCID: PMC10249706 DOI: 10.7759/cureus.38770]
Abstract
Introduction Early breast cancer detection with screening mammography has been shown to reduce mortality and improve breast cancer survival. This study aims to evaluate the ability of an artificial intelligence computer-aided detection (AI CAD) system to detect biopsy-proven invasive lobular carcinoma (ILC) on digital mammography. Methods This retrospective study reviewed mammograms of patients who were diagnosed with biopsy-proven ILC between January 1, 2017, and January 1, 2022. All mammograms were analyzed using cmAssist® (CureMetrix, San Diego, California, United States), which is an AI CAD for mammography. The AI CAD sensitivity for detecting ILC on mammography was calculated and further subdivided by lesion type, mass shape, and mass margins. To account for the within-subject correlation, generalized linear mixed models were implemented to investigate the association between age, family history, and breast density and whether the AI detected a false positive or true positive. Odds ratios, 95% confidence intervals, and p-values were also calculated. Results A total of 124 patients with 153 biopsy-proven ILC lesions were included. The AI CAD detected ILC on mammography with a sensitivity of 80%. The AI CAD had the highest sensitivity for detecting calcifications (100%), masses with irregular shape (82%), and masses with spiculated margins (86%). However, 88% of mammograms had at least one false-positive mark, with an average of 3.9 false-positive marks per mammogram. Conclusion The AI CAD system evaluated was successful in marking malignancies on digital mammography. However, the numerous false-positive annotations confounded the assessment of its overall accuracy, which reduces its potential use in real-life practice.
Affiliation(s)
- Sylvia Arce
- Department of Radiology, University of Florida College of Medicine, Gainesville, USA
- Arunima Vijay
- Department of Radiology, University of Florida College of Medicine, Gainesville, USA
- Eunice Yim
- Department of Radiology, University of Florida College of Medicine, Gainesville, USA
- Lisa R Spiguel
- Department of Surgery, University of Florida College of Medicine, Gainesville, USA
- Mariam Hanna
- Department of Radiology, University of Florida College of Medicine, Gainesville, USA
11
Kadhim YA, Khan MU, Mishra A. Deep Learning-Based Computer-Aided Diagnosis (CAD): Applications for Medical Image Datasets. Sensors (Basel) 2022; 22:8999. [PMID: 36433595 PMCID: PMC9692938 DOI: 10.3390/s22228999]
Abstract
Computer-aided diagnosis (CAD) has proved to be an effective and accurate method for diagnostic prediction over the years. This article focuses on the development of an automated CAD system with the intent to perform diagnosis as accurately as possible. Deep learning methods have been able to produce impressive results on medical image datasets. This study employs deep learning methods in conjunction with meta-heuristic algorithms and supervised machine-learning algorithms to perform an accurate diagnosis. Pre-trained convolutional neural networks (CNNs) or auto-encoders are used for feature extraction, whereas feature selection is performed using an ant colony optimization (ACO) algorithm. Ant colony optimization helps to search for the best optimal features while reducing the amount of data. Lastly, diagnosis prediction (classification) is achieved using learnable classifiers. The novel framework for the extraction and selection of features is based on deep learning, auto-encoders, and ACO. The performance of the proposed approach is evaluated using two medical image datasets, chest X-ray (CXR) and magnetic resonance imaging (MRI), for the prediction of the existence of COVID-19 and brain tumors. Accuracy is used as the main measure to compare the performance of the proposed approach with existing state-of-the-art methods. The proposed system achieves average accuracies of 99.61% and 99.18%, outperforming all other methods in diagnosing the presence of COVID-19 and brain tumors, respectively. Based on the achieved results, it can be claimed that physicians or radiologists can confidently utilize the proposed approach for diagnosing COVID-19 patients and patients with specific brain tumors.
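The role ACO plays in such a pipeline can be illustrated with a deliberately small toy: ants sample feature subsets with probability proportional to pheromone times a per-feature score, and pheromone evaporates each iteration while being reinforced on the best subset found so far. Everything here, the scoring, the parameters, and the `aco_select` name, is an assumption for illustration, not the paper's method.

```python
import numpy as np

def aco_select(scores, k, n_ants=20, n_iters=30, evap=0.1, seed=0):
    """Toy ACO feature selection: ants draw k features with probability
    proportional to pheromone * score; pheromone evaporates each iteration
    and is reinforced along the best subset found so far."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    n = len(scores)
    pheromone = np.ones(n)
    best_subset, best_value = None, -np.inf
    for _ in range(n_iters):
        prob = pheromone * (scores + 1e-9)   # tiny floor so no feature is impossible
        prob = prob / prob.sum()
        for _ant in range(n_ants):
            subset = rng.choice(n, size=k, replace=False, p=prob)
            value = scores[subset].sum()
            if value > best_value:
                best_subset, best_value = subset, value
        pheromone *= (1 - evap)              # evaporation
        pheromone[best_subset] += evap       # reinforce the best subset
    return sorted(int(i) for i in best_subset)
```

In the paper's framework the per-feature scores would be derived from CNN or auto-encoder features and the selected subset would feed a learnable classifier; a fixed score vector keeps this sketch self-contained.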
Affiliation(s)
- Yezi Ali Kadhim: Department of Modeling and Design of Engineering Systems (MODES), Atilim University, Ankara 06830, Turkey; Department of Electrical and Electronics Engineering, Atilim University, Incek, Ankara 06830, Turkey
- Muhammad Umer Khan: Department of Mechatronics Engineering, Atilim University, Incek, Ankara 06830, Turkey
- Alok Mishra: Department of Software Engineering, Atilim University, Incek, Ankara 06830, Turkey; Informatics and Digitalization Group, Molde University College—Specialized University in Logistics, 6410 Molde, Norway
|
12
|
Krishnamurthy R, Mummudi N, Goda JS, Chopra S, Heijmen B, Swamidas J. Using Artificial Intelligence for Optimization of the Processes and Resource Utilization in Radiotherapy. JCO Glob Oncol 2022; 8:e2100393. [PMID: 36395438 PMCID: PMC10166445 DOI: 10.1200/go.21.00393] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
The radiotherapy (RT) process from planning to treatment delivery is a multistep, complex operation involving numerous levels of human-machine interaction and requiring high precision. These steps are labor-intensive and time-consuming and require meticulous coordination between professionals with diverse expertise. We reviewed and summarized the current status and prospects of artificial intelligence and machine learning relevant to the various steps in the RT treatment planning and delivery workflow, specifically in low- and middle-income countries (LMICs). We searched the PubMed database using the search terms (Artificial Intelligence OR Machine Learning OR Deep Learning OR Automation OR knowledge-based planning AND Radiotherapy) AND (list of Low- and Middle-Income Countries as defined by the World Bank at the time of writing this review). The search yielded a total of 90 results, of which those with first authors from LMICs were chosen. The reference lists of retrieved articles were also reviewed to identify further studies. No language restrictions were imposed. A total of 20 research items with unique study objectives conducted with the aim of enhancing RT processes were examined in detail. Artificial intelligence and machine learning can improve the overall efficiency of RT processes by reducing human intervention, aiding decision making, and efficiently executing lengthy, repetitive tasks. This improvement could permit the radiation oncologist to redistribute resources and focus on responsibilities such as patient counseling, education, and research, especially in resource-constrained LMICs.
Affiliation(s)
- Revathy Krishnamurthy: Department of Radiation Oncology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, India
- Naveen Mummudi: Department of Radiation Oncology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, India
- Jayant Sastri Goda: Department of Radiation Oncology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, India
- Supriya Chopra: Department of Radiation Oncology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, India
- Ben Heijmen: Division of Medical Physics, Department of Radiation Oncology, Erasmus MC Cancer Institute, Erasmus University Rotterdam, Rotterdam, the Netherlands
- Jamema Swamidas: Department of Radiation Oncology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, India
|
13
|
Hamamoto R, Koyama T, Kouno N, Yasuda T, Yui S, Sudo K, Hirata M, Sunami K, Kubo T, Takasawa K, Takahashi S, Machino H, Kobayashi K, Asada K, Komatsu M, Kaneko S, Yatabe Y, Yamamoto N. Introducing AI to the molecular tumor board: one direction toward the establishment of precision medicine using large-scale cancer clinical and biological information. Exp Hematol Oncol 2022; 11:82. [PMID: 36316731 PMCID: PMC9620610 DOI: 10.1186/s40164-022-00333-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2022] [Accepted: 10/05/2022] [Indexed: 11/10/2022] Open
Abstract
Since U.S. President Barack Obama announced the Precision Medicine Initiative in his State of the Union address in 2015, the establishment of a precision medicine system has been emphasized worldwide, particularly in the field of oncology. With the advent of next-generation sequencers in particular, genome analysis technology has made remarkable progress, and there are active efforts to apply genome information to diagnosis and treatment. Generally, in the process of feeding back the results of next-generation sequencing analysis to patients, a molecular tumor board (MTB), consisting of experts in clinical oncology, genetic medicine, and related fields, is established to discuss the results. On the other hand, an MTB currently involves a large amount of work, with humans searching through vast databases and literature, selecting the best drug candidates, and manually confirming the status of available clinical trials. In addition, as personalized medicine advances, the burden on MTB members is expected to increase. Under these circumstances, introducing cutting-edge artificial intelligence (AI) and information and communication technology to MTBs, reducing the burden on MTB members, and building a platform that enables more accurate and personalized medical care would be of great benefit to patients. In this review, we introduce the latest status of elemental technologies with potential for AI utilization in MTBs, and discuss issues that may arise as AI implementation progresses.
Affiliation(s)
- Ryuji Hamamoto: Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Takafumi Koyama: Department of Experimental Therapeutics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Nobuji Kouno: Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Department of Surgery, Graduate School of Medicine, Kyoto University, Yoshida-konoe-cho, Sakyo-ku, Kyoto 606-8303, Japan
- Tomohiro Yasuda: Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Research and Development Group, Hitachi, Ltd., 1-280 Higashi-koigakubo, Kokubunji, Tokyo 185-8601, Japan
- Shuntaro Yui: Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Research and Development Group, Hitachi, Ltd., 1-280 Higashi-koigakubo, Kokubunji, Tokyo 185-8601, Japan
- Kazuki Sudo: Department of Experimental Therapeutics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Department of Medical Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Makoto Hirata: Department of Genetic Medicine and Services, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Kuniko Sunami: Department of Laboratory Medicine, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Takashi Kubo: Department of Laboratory Medicine, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Ken Takasawa: Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Satoshi Takahashi: Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Hidenori Machino: Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Kazuma Kobayashi: Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Ken Asada: Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Masaaki Komatsu: Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Syuzo Kaneko: Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Yasushi Yatabe: Department of Diagnostic Pathology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; Division of Molecular Pathology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
- Noboru Yamamoto: Department of Experimental Therapeutics, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
|
14
|
Artificial intelligence and machine learning in cancer imaging. COMMUNICATIONS MEDICINE 2022; 2:133. [PMID: 36310650 PMCID: PMC9613681 DOI: 10.1038/s43856-022-00199-0] [Citation(s) in RCA: 59] [Impact Index Per Article: 29.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2020] [Accepted: 10/06/2022] [Indexed: 11/16/2022] Open
Abstract
An increasing array of tools is being developed using artificial intelligence (AI) and machine learning (ML) for cancer imaging. The development of an optimal tool requires multidisciplinary engagement to ensure that the appropriate use case is met, as well as to undertake robust development and testing prior to its adoption into healthcare systems. This multidisciplinary review highlights key developments in the field. We discuss the challenges and opportunities of AI and ML in cancer imaging; considerations for the development of algorithms into tools that can be widely used and disseminated; and the development of the ecosystem needed to promote growth of AI and ML in cancer imaging.
|
15
|
Baughan N, Douglas L, Giger ML. Past, Present, and Future of Machine Learning and Artificial Intelligence for Breast Cancer Screening. JOURNAL OF BREAST IMAGING 2022; 4:451-459. [PMID: 38416954 DOI: 10.1093/jbi/wbac052] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Indexed: 03/01/2024]
Abstract
Breast cancer screening has evolved substantially over the past few decades because of advancements in new image acquisition systems and novel artificial intelligence (AI) algorithms. This review provides a brief overview of the history, current state, and future of AI in breast cancer screening and diagnosis, along with challenges involved in the development of AI systems. Although AI has been developed for interpretation tasks associated with breast cancer screening for decades, its potential to counter the subjective nature, and improve the efficiency, of human image interpretation continues to expand. Rapid advances in computational power and deep learning have greatly accelerated AI research, with promising performance in detection and classification tasks across imaging modalities. Most AI systems, based on human-engineered or deep learning methods, serve as concurrent or secondary readers, that is, as aids to radiologists for a specific, well-defined task. In the future, AI may be able to perform multiple integrated tasks, making decisions at or surpassing the level of human ability. Artificial intelligence may also serve as a partial primary reader to streamline ancillary tasks, triaging cases or ruling out obvious normal cases. However, before AI is used as an independent, autonomous reader, various challenges need to be addressed, including explainability and interpretability, in addition to repeatability and generalizability, to ensure that AI will provide a significant clinical benefit to breast cancer screening across all populations.
Affiliation(s)
- Natalie Baughan: University of Chicago, Department of Radiology Committee on Medical Physics, Chicago, IL, USA
- Lindsay Douglas: University of Chicago, Department of Radiology Committee on Medical Physics, Chicago, IL, USA
- Maryellen L Giger: University of Chicago, Department of Radiology Committee on Medical Physics, Chicago, IL, USA
|
16
|
Syed AH, Khan T. Evolution of research trends in artificial intelligence for breast cancer diagnosis and prognosis over the past two decades: A bibliometric analysis. Front Oncol 2022; 12:854927. [PMID: 36267967 PMCID: PMC9578338 DOI: 10.3389/fonc.2022.854927] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Accepted: 08/30/2022] [Indexed: 01/27/2023] Open
Abstract
Objective In recent years, among the available tools, the concurrent application of Artificial Intelligence (AI) has improved the diagnostic performance of breast cancer screening. In this context, the present study provides a comprehensive overview of the evolution of AI for breast cancer diagnosis and prognosis research using bibliometric analysis. Methodology In the present study, relevant peer-reviewed research articles published from 2000 to 2021 were downloaded from the Scopus and Web of Science (WOS) databases and then quantitatively analyzed and visualized using Bibliometrix (R package). Finally, open challenge areas were identified for future research work. Results The study revealed that the number of published studies on AI for breast cancer detection and survival prediction increased from 12 to 546 between 2000 and 2021. The United States of America (USA), China, and India are the most productive publication-wise in this field. Furthermore, the USA leads in total citations; however, Hungary and Holland hold the lead positions in average citations per year. Wang J is the most productive author, and Zhan J is the most relevant author in this field. Stanford University in the USA is the most relevant affiliation by number of published articles. The top 10 most relevant sources are Q1 journals, with PLOS ONE and Computers in Biology and Medicine the leading journals in this field. The most trending topics related to this study, transfer learning and deep learning, were identified. Conclusion The present findings provide insight and research directions for policymakers and academic researchers for future collaboration and research in AI for breast cancer patients.
Affiliation(s)
- Asif Hassan Syed: Department of Computer Science, Faculty of Computing and Information Technology Rabigh (FCITR), King Abdulaziz University, Jeddah, Saudi Arabia (corresponding author)
- Tabrej Khan: Department of Information Systems, Faculty of Computing and Information Technology Rabigh (FCITR), King Abdulaziz University, Jeddah, Saudi Arabia
|
17
|
A computer aided diagnosis framework for detection and classification of interstitial lung diseases using computed tomography (CT) images. APPLIED NANOSCIENCE 2022. [DOI: 10.1007/s13204-022-02512-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
18
|
CoroNet: Deep Neural Network-Based End-to-End Training for Breast Cancer Diagnosis. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12147080] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
In 2020, according to publications of both the Global Cancer Observatory (GCO) and the World Health Organization (WHO), breast cancer (BC) was one of the most prevalent cancers among women worldwide, diagnosed in roughly 47 of every 100,000 women. Moreover, BC accounts for 38.8% of cancers among Egyptian women. Recent deep learning developments have shown the common usage of deep convolutional neural networks (CNNs) for analyzing medical images. Unlike randomly initialized ones, CNN models pre-trained on a natural image database (ImageNet) may be successfully fine-tuned to obtain improved findings. To conduct automatic detection of BC on the CBIS-DDSM dataset, a CNN model, namely CoroNet, is proposed. It relies on the Xception architecture, which was pre-trained on the ImageNet dataset and then fully trained on whole-image BC mammograms. The convolutional design method is used in this paper, since it performs better than the other methods. CoroNet was trained and tested on the prepared dataset. Experiments show that it attains an overall accuracy of 94.92% in the four-class classification (benign mass, malignant mass, benign calcification, malignant calcification) and a classification accuracy of 88.67% for the two-class case (calcifications vs. masses). The paper concludes that these promising outcomes could be improved further as more training data become available.
|
19
|
Lung Cancer Prediction Using Robust Machine Learning and Image Enhancement Methods on Extracted Gray-Level Co-Occurrence Matrix Features. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12136517] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
In the present era, cancer is the leading cause of death in both men and women worldwide, with low survival rates due to inefficient diagnostic techniques. Recently, researchers have been devising methods to improve prediction performance. In medical image processing, image enhancement can further improve prediction performance. This study aimed to improve lung cancer image quality by employing various image enhancement methods, such as image adjustment, gamma correction, contrast stretching, thresholding, and histogram equalization. We extracted gray-level co-occurrence matrix (GLCM) features from the enhanced images, and applied and optimized robust machine learning classification algorithms, such as the decision tree (DT), naïve Bayes, and support vector machine (SVM) with Gaussian, radial basis function (RBF), and polynomial kernels. Without image enhancement, the highest performance was obtained using SVM with polynomial and RBF kernels, with an accuracy of 99.89%. The image enhancement methods, such as image adjustment, contrast stretching at threshold (0.02, 0.98), and gamma correction at a gamma value of 0.9, improved the prediction performance of our analysis on 945 images provided by the Lung Cancer Alliance MRI dataset, yielding 100% accuracy and an AUC of 1.00 using SVM with RBF and polynomial kernels. The results revealed that the proposed methodology can be very helpful for improving lung cancer prediction for further diagnosis and prognosis by expert radiologists, decreasing the mortality rate.
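The GLCM texture features this abstract relies on can be computed in a few lines. The sketch below is illustrative only: a single horizontal pixel offset, 8 gray levels, and two of the standard GLCM statistics (contrast and homogeneity), not the paper's full feature set or datasets.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Contrast and homogeneity from a horizontal (dx=1, dy=0) GLCM."""
    q = (img.astype(int) * levels) // 256           # quantize to gray-level bins
    src, dst = q[:, :-1].ravel(), q[:, 1:].ravel()  # horizontally adjacent pairs
    m = np.zeros((levels, levels))
    np.add.at(m, (src, dst), 1)                     # co-occurrence counts
    m /= m.sum()                                    # normalize to probabilities
    i, j = np.indices(m.shape)
    contrast = float((m * (i - j) ** 2).sum())
    homogeneity = float((m / (1.0 + (i - j) ** 2)).sum())
    return contrast, homogeneity

# A uniform patch vs. a noisy patch: noise raises contrast, lowers homogeneity.
rng = np.random.default_rng(1)
flat = np.full((32, 32), 128, dtype=np.uint8)
noisy = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)
c_flat, h_flat = glcm_features(flat)
c_noisy, h_noisy = glcm_features(noisy)
print((c_flat, h_flat), (round(c_noisy, 2), round(h_noisy, 2)))
```

These feature vectors (typically over several offsets and angles) are what the SVM, DT, and naïve Bayes classifiers in the study would consume.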
|
20
|
Wang T, Endo M, Ohno Y, Okada S, Makikawa M. Convolutional neural network-based computer-aided diagnosis in Hiesho (cold sensation). Comput Biol Med 2022; 145:105411. [DOI: 10.1016/j.compbiomed.2022.105411] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Revised: 02/24/2022] [Accepted: 03/13/2022] [Indexed: 11/30/2022]
|
21
|
Thiagarajan P, Khairnar P, Ghosh S. Explanation and Use of Uncertainty Quantified by Bayesian Neural Network Classifiers for Breast Histopathology Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:815-825. [PMID: 34699354 DOI: 10.1109/tmi.2021.3123300] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Despite the promise of convolutional neural network (CNN)-based classification models for histopathological images, it is infeasible to quantify their uncertainties. Moreover, CNNs may suffer from overfitting when the data are biased. We show that a Bayesian-CNN can overcome these limitations by regularizing automatically and by quantifying the uncertainty. We have developed a novel technique to utilize the uncertainties provided by the Bayesian-CNN that significantly improves the performance on a large fraction of the test data (about 6% improvement in accuracy on 77% of the test data). Further, we provide a novel explanation for the uncertainty by projecting the data into a low-dimensional space through a nonlinear dimensionality reduction technique. This dimensionality reduction enables interpretation of the test data through visualization and reveals the structure of the data in a low-dimensional feature space. We show that the Bayesian-CNN can perform much better than the state-of-the-art transfer learning CNN (TL-CNN), reducing false negatives and false positives by 11% and 7.7%, respectively, for the present data set. It achieves this performance with only 1.86 million parameters, compared to 134.33 million for the TL-CNN. Besides, we modify the Bayesian-CNN by introducing a stochastic adaptive activation function. The modified Bayesian-CNN performs slightly better than the Bayesian-CNN on all performance metrics and significantly reduces the numbers of false negatives and false positives (3% reduction for both). We also show that these results are statistically significant by performing McNemar's test. This work shows the advantages of the Bayesian-CNN over the state of the art, and explains and utilizes the uncertainties for histopathological images. It should find applications in various medical image classifications.
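The core mechanic of "quantifying the uncertainty" is that a stochastic network produces many predictions per image, and their spread is the uncertainty. The sketch below is a generic Monte Carlo illustration, not the authors' Bayesian-CNN: the arrays stand in for T stochastic softmax outputs, and the entropy-of-the-mean statistic is one common summary.

```python
import numpy as np

rng = np.random.default_rng(0)

def predictive_entropy(mc_probs):
    """Entropy of the mean softmax over T stochastic forward passes.

    mc_probs has shape (T, n_classes): one row per pass of a network whose
    weights (or dropout masks) are resampled each time, as in a Bayesian-CNN.
    """
    mean = mc_probs.mean(axis=0)
    return float(-(mean * np.log(mean + 1e-12)).sum())

T = 30
# Case 1: all passes agree on "benign" -> low uncertainty.
confident = np.tile([0.97, 0.03], (T, 1))
# Case 2: passes disagree wildly -> high uncertainty.
uncertain = rng.dirichlet([1.0, 1.0], size=T)

h_conf = predictive_entropy(confident)
h_unc = predictive_entropy(uncertain)
print(round(h_conf, 3), round(h_unc, 3))
```

Images whose entropy exceeds a chosen cutoff can then be routed to a pathologist, which is the kind of selective use of uncertainty that drives the accuracy gain reported above.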
|
22
|
Deep Learning–Based Brain Computed Tomography Image Classification with Hyperparameter Optimization through Transfer Learning for Stroke. Diagnostics (Basel) 2022; 12:diagnostics12040807. [PMID: 35453855 PMCID: PMC9026481 DOI: 10.3390/diagnostics12040807] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2022] [Revised: 03/21/2022] [Accepted: 03/24/2022] [Indexed: 02/04/2023] Open
Abstract
Brain computed tomography (CT) is commonly used for evaluating the cerebral condition, but immediately and accurately interpreting emergent brain CT images is tedious, even for skilled neuroradiologists. Deep learning networks are commonly employed for medical image analysis because they enable efficient computer-aided diagnosis. This study proposed the use of convolutional neural network (CNN)-based deep learning models for efficient classification of strokes based on unenhanced brain CT image findings into normal, hemorrhage, infarction, and other categories. The included CNN models were CNN-2, VGG-16, and ResNet-50, all of which were pretrained through transfer learning with various data sizes, mini-batch sizes, and optimizers. Their performance in classifying unenhanced brain CT images was tested thereafter. This performance was then compared with the outcomes in other studies on deep learning-based hemorrhagic or ischemic stroke diagnosis. The results revealed that among CNN-2, VGG-16, and ResNet-50, analyzed across several hyperparameters and environments, CNN-2 and ResNet-50 outperformed VGG-16, with an accuracy of 0.9872; however, ResNet-50 required a longer time to produce its output than the other networks. Moreover, our models performed much better than those reported previously. In conclusion, after appropriate hyperparameter optimization, our deep learning-based models can be applied to clinical scenarios where a neurologist or radiologist needs to verify whether a patient has a hemorrhagic stroke, an infarction, or another finding.
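The hyperparameter optimization described above (models crossed with mini-batch sizes and optimizers) amounts to a grid search. In the sketch below the `validate` stub and its accuracy table are hypothetical stand-ins for actually training each configuration; only the 0.9872 figure comes from the abstract.

```python
from itertools import product

def validate(model, batch_size, optimizer):
    """Stand-in for "train one configuration, return validation accuracy".

    In the study this would fit CNN-2 / VGG-16 / ResNet-50 with the given
    mini-batch size and optimizer; the numbers below are hypothetical.
    """
    table = {
        ("cnn2", 32, "adam"): 0.9872, ("cnn2", 32, "sgd"): 0.961,
        ("cnn2", 64, "adam"): 0.975, ("cnn2", 64, "sgd"): 0.958,
        ("vgg16", 32, "adam"): 0.952, ("vgg16", 32, "sgd"): 0.944,
        ("vgg16", 64, "adam"): 0.949, ("vgg16", 64, "sgd"): 0.940,
        ("resnet50", 32, "adam"): 0.9872, ("resnet50", 32, "sgd"): 0.969,
        ("resnet50", 64, "adam"): 0.980, ("resnet50", 64, "sgd"): 0.966,
    }
    return table[(model, batch_size, optimizer)]

# Exhaustive sweep over the hyperparameter grid; ties keep the first hit.
grid = list(product(["cnn2", "vgg16", "resnet50"], [32, 64], ["adam", "sgd"]))
best = max(grid, key=lambda cfg: validate(*cfg))
best_acc = validate(*best)
print(best, best_acc)
```

A tie between configurations (as between CNN-2 and ResNet-50 here) is then broken by secondary criteria such as inference time, which is how the study ends up preferring the faster network.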
|
23
|
Kizildag Yirgin I, Koyluoglu YO, Seker ME, Ozkan Gurdal S, Ozaydin AN, Ozcinar B, Cabioğlu N, Ozmen V, Aribal E. Diagnostic Performance of AI for Cancers Registered in A Mammography Screening Program: A Retrospective Analysis. Technol Cancer Res Treat 2022; 21:15330338221075172. [PMID: 35060413 PMCID: PMC8796113 DOI: 10.1177/15330338221075172] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Purpose: To evaluate the performance of an artificial intelligence (AI) algorithm in a simulated screening setting and its effectiveness in detecting missed and interval cancers. Methods: Digital mammograms were collected from the Bahcesehir Mammographic Screening Program, the first organized, population-based, 10-year (2009-2019) screening program in Turkey. In total, 211 mammograms were extracted from the archive of the screening program in this retrospective study. One hundred ten of them were diagnosed as breast cancer (74 screen-detected, 27 interval, 9 missed); 101 were negative mammograms with follow-up of at least 24 months. Cancer detection rates of radiologists in the screening program were compared with an AI system. Three different mammography assessment methods were used: (1) assessment by 2 radiologists at the screening center, (2) AI assessment based on the established risk-score threshold, (3) a hypothetical radiologist-and-AI team-up in which AI was considered the third reader. Results: The area under the curve was 0.853 (95% CI = 0.801-0.905), and the cut-off value for the risk score was 34.5%, with a sensitivity of 72.8% and a specificity of 88.3% for AI cancer detection in ROC analysis. Cancer detection rates were 67.3% for radiologists, 72.7% for AI, and 83.6% for the radiologist-and-AI team-up. AI detected 72.7% of all cancers on its own, of which 77.5% were screen-detected, 15% were interval cancers, and 7.5% were missed cancers. Conclusion: AI may potentially enhance the capacity of breast cancer screening programs by increasing cancer detection rates and decreasing false-negative evaluations.
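The sensitivity/specificity-at-threshold evaluation in the Results can be sketched as follows. The scores and labels are made-up toy numbers; only the 34.5% operating point is taken from the abstract.

```python
import numpy as np

def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity when every score >= threshold is recalled."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    pred = scores >= threshold
    sensitivity = pred[labels == 1].mean()     # detected cancers / all cancers
    specificity = (~pred)[labels == 0].mean()  # true negatives / all negatives
    return sensitivity, specificity

# Toy risk scores (percent) for 6 cancers (label 1) and 6 negatives (label 0).
scores = [80, 60, 41, 35, 30, 20, 5, 10, 33, 36, 15, 25]
labels = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
sens, spec = sens_spec(scores, labels, threshold=34.5)
print(round(sens, 3), round(spec, 3))
```

Sweeping the threshold over all observed scores and plotting sensitivity against 1 − specificity yields the ROC curve whose area is the reported 0.853.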
Affiliation(s)
- Beyza Ozcinar: Istanbul University, School of Medicine, Istanbul, Turkey
- Vahit Ozmen: Istanbul University, School of Medicine, Istanbul, Turkey
- Erkin Aribal: Acibadem M.A.A University School of Medicine, Istanbul, Turkey
|
24
|
Editorial Comment: Artificial Intelligence in Mammography-Our New Reality. AJR Am J Roentgenol 2022; 219:381. [PMID: 35018799 DOI: 10.2214/ajr.22.27345] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
25
|
Artificial Intelligence (AI) for Screening Mammography, From the AI Special Series on AI Applications. AJR Am J Roentgenol 2022; 219:369-380. [PMID: 35018795 DOI: 10.2214/ajr.21.27071] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Artificial intelligence (AI) applications for screening mammography are being marketed for clinical use in the interpretative domains of lesion detection and diagnosis, triage, and breast density assessment, and in the noninterpretive domains of breast cancer risk assessment, image quality control, image acquisition, and dose reduction. Evidence in support of these nascent applications, particularly for lesion detection and diagnosis, is largely based on multireader studies with cancer-enriched datasets rather than rigorous clinical evaluation aligned with the application's specific intended clinical use. This article reviews commercial AI algorithms for screening mammography that are currently available for clinical practice, their use, and evidence supporting their performance. Clinical implementation considerations, such as workflow integration, governance, and ethical issues, are also described. In addition, the future of AI for screening mammography is discussed, including the development of interpretive and noninterpretive AI applications and strategic priorities for research and development.
|
26
|
Agrawal T, Choudhary P. Segmentation and classification on chest radiography: a systematic survey. THE VISUAL COMPUTER 2022; 39:875-913. [PMID: 35035008 PMCID: PMC8741572 DOI: 10.1007/s00371-021-02352-7] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 11/01/2021] [Indexed: 06/14/2023]
Abstract
Chest radiography (X-ray) is the most common diagnostic method for pulmonary disorders. A trained radiologist is required to interpret the radiographs, but sometimes even experienced radiologists misinterpret the findings, which motivates computer-aided detection and diagnosis. For decades, researchers automatically detected pulmonary disorders using traditional computer vision (CV) methods; now, the availability of large annotated datasets and computing hardware has allowed deep learning to dominate the area. It is now the modus operandi for feature extraction, segmentation, detection, and classification tasks in medical imaging analysis. This paper focuses on research that uses chest X-rays for lung segmentation and the detection/classification of pulmonary disorders on publicly available datasets. Studies using Generative Adversarial Network (GAN) models for segmentation and classification on chest X-rays are also included, as GAN has gained the interest of the CV community for its ability to mitigate medical data scarcity. We also include research conducted before the popularity of deep learning models to give a clear picture of the field. Many surveys have been published, but none of them is dedicated to chest X-rays. This study will help readers learn about existing techniques, approaches, and their significance.
Affiliation(s)
- Tarun Agrawal
- Department of Computer Science and Engineering, National Institute of Technology Hamirpur, Hamirpur, Himachal Pradesh 177005, India
- Prakash Choudhary
- Department of Computer Science and Engineering, National Institute of Technology Hamirpur, Hamirpur, Himachal Pradesh 177005, India
27
Chang YW, An JK, Choi N, Ko KH, Kim KH, Han K, Ryu JK. Artificial Intelligence for Breast Cancer Screening in Mammography (AI-STREAM): A Prospective Multicenter Study Design in Korea Using AI-based CADe/x. J Breast Cancer 2022; 25:57-68. [PMID: 35133093 PMCID: PMC8876543 DOI: 10.4048/jbc.2022.25.e4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Revised: 11/18/2021] [Accepted: 12/05/2021] [Indexed: 11/30/2022] Open
Abstract
Purpose Artificial intelligence (AI)-based computer-aided detection/diagnosis (CADe/x) has helped improve radiologists’ performance and provides results equivalent or superior to those of radiologists alone. This prospective multicenter cohort study aims to generate real-world evidence on the overall benefits and disadvantages of using AI-based CADe/x for breast cancer detection in a population-based breast cancer screening program comprising Korean women aged ≥ 40 years. The purpose of this report is to compare the diagnostic accuracy of radiologists with and without the use of AI-based CADe/x in mammography readings for breast cancer screening of Korean women with average breast cancer risk. Methods Approximately 32,714 participants will be enrolled between February 2021 and December 2022 at 5 study sites in Korea. A radiologist specializing in breast imaging will interpret the mammography readings with or without the use of AI-based CADe/x. If recall is required, further diagnostic workup will be conducted to confirm the cancer detected on screening. The findings will be recorded for all participants regardless of their screening status to identify study participants diagnosed with breast cancer within both 1 year and 2 years of screening. The national cancer registry database will be reviewed in 2026 and 2027, and the results of this study are expected to be published in 2027. In addition, the diagnostic accuracy of general radiologists and of radiologists specializing in breast imaging from another hospital, with and without the use of AI-based CADe/x, will be compared for mammography readings in breast cancer screening. Discussion The Artificial Intelligence for Breast Cancer Screening in Mammography (AI-STREAM) study is a prospective multicenter study that aims to compare the diagnostic accuracy of radiologists with and without the use of AI-based CADe/x in mammography readings for breast cancer screening of women with average breast cancer risk. AI-STREAM is currently in the patient enrollment phase. Trial Registration ClinicalTrials.gov Identifier: NCT05024591
Affiliation(s)
- Yun-Woo Chang
- Department of Radiology, Soonchunhyang University Seoul Hospital, Soonchunhyang University College of Medicine, Seoul, Korea
- Jin Kyung An
- Department of Radiology, Nowon Eulji University Hospital, Eulji University School of Medicine, Seoul, Korea
- Nami Choi
- Department of Radiology, Konkuk University Medical Center, Konkuk University School of Medicine, Seoul, Korea
- Kyung Hee Ko
- Department of Radiology, CHA Bundang Medical Center, Seongnam, Korea
- Kyunghwa Han
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
- Jung Kyu Ryu
- Department of Radiology, Kyung Hee University Hospital at Gangdong, College of Medicine, Kyung Hee University, Seoul, Korea
28
Yousef R, Gupta G, Yousef N, Khari M. A holistic overview of deep learning approach in medical imaging. Multimedia Systems 2022; 28:881-914. [PMID: 35079207 PMCID: PMC8776556 DOI: 10.1007/s00530-021-00884-5] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Accepted: 12/23/2021] [Indexed: 05/07/2023]
Abstract
Medical images are a rich source of invaluable information used by clinicians, and recent technologies have introduced many advances for exploiting this information to generate better analyses. Deep learning (DL) techniques have been applied to medical image analysis in computer-assisted imaging contexts, offering many solutions and improvements to radiologists and other specialists who analyze these images. In this paper, we survey the DL techniques used for a variety of tasks across different medical imaging modalities to provide a critical review of recent developments in this direction. We organize the paper to convey the significant traits of deep learning and explain its concepts, which is in turn helpful for non-experts in the medical community. We then present several applications of deep learning (e.g., segmentation, classification, detection) commonly used for clinical purposes at different anatomical sites, and we present the main key terms for DL attributes such as basic architectures, data augmentation, transfer learning, and feature selection methods. Medical images as inputs to deep learning architectures will be the mainstream in the coming years, and novel DL techniques are predicted to become the core of medical image analysis. We conclude by addressing research challenges and the solutions suggested for them in the literature, as well as promising directions for further development.
Affiliation(s)
- Rammah Yousef
- Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan, 173229 Himachal Pradesh, India
- Gaurav Gupta
- Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan, 173229 Himachal Pradesh, India
- Nabhan Yousef
- Electronics and Communication Engineering, Marwadi University, Rajkot, Gujarat, India
- Manju Khari
- Jawaharlal Nehru University, New Delhi, India
29
A case-based interpretable deep learning model for classification of mass lesions in digital mammography. Nat Mach Intell 2021. [DOI: 10.1038/s42256-021-00423-x] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
30
Retico A, Avanzo M, Boccali T, Bonacorsi D, Botta F, Cuttone G, Martelli B, Salomoni D, Spiga D, Trianni A, Stasi M, Iori M, Talamonti C. Enhancing the impact of Artificial Intelligence in Medicine: A joint AIFM-INFN Italian initiative for a dedicated cloud-based computing infrastructure. Phys Med 2021; 91:140-150. [PMID: 34801873 DOI: 10.1016/j.ejmp.2021.10.005] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Revised: 10/04/2021] [Accepted: 10/05/2021] [Indexed: 12/23/2022] Open
Abstract
Artificial Intelligence (AI) techniques have been implemented in the field of Medical Imaging for more than forty years. Medical Physicists, Clinicians and Computer Scientists have been collaborating since the beginning to realize software solutions to enhance the informative content of medical images, including AI-based support systems for image interpretation. Despite the recent massive progress in this field due to the current emphasis on Radiomics, Machine Learning and Deep Learning, there are still some barriers to overcome before these tools are fully integrated into the clinical workflows to finally enable a precision medicine approach to patients' care. Nowadays, as Medical Imaging has entered the Big Data era, innovative solutions to efficiently deal with huge amounts of data and to exploit large and distributed computing resources are urgently needed. In the framework of a collaboration agreement between the Italian Association of Medical Physicists (AIFM) and the National Institute for Nuclear Physics (INFN), we propose a model of an intensive computing infrastructure, especially suited for training AI models, equipped with secure storage systems, compliant with data protection regulation, which will accelerate the development and extensive validation of AI-based solutions in the Medical Imaging field of research. This solution can be developed and made operational by Physicists and Computer Scientists working on complementary fields of research in Physics, such as High Energy Physics and Medical Physics, who have all the necessary skills to tailor the AI-technology to the needs of the Medical Imaging community and to shorten the pathway towards the clinical applicability of AI-based decision support systems.
Affiliation(s)
- Alessandra Retico
- National Institute for Nuclear Physics (INFN), Pisa Division, 56127 Pisa, Italy
- Michele Avanzo
- Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
- Tommaso Boccali
- National Institute for Nuclear Physics (INFN), Pisa Division, 56127 Pisa, Italy
- Daniele Bonacorsi
- University of Bologna, 40126 Bologna, Italy; INFN, Bologna Division, 40126 Bologna, Italy
- Francesca Botta
- Medical Physics Unit, Istituto Europeo di Oncologia IRCCS, 20141 Milan, Italy
- Giacomo Cuttone
- INFN, Southern National Laboratory (LNS), 95123 Catania, Italy
- Annalisa Trianni
- Medical Physics Unit, Ospedale Santa Chiara APSS, 38122 Trento, Italy
- Michele Stasi
- Medical Physics Unit, A.O. Ordine Mauriziano di Torino, 10128 Torino, Italy
- Mauro Iori
- Medical Physics Unit, Azienda USL-IRCCS di Reggio Emilia, 42122 Reggio Emilia, Italy
- Cinzia Talamonti
- Department of Biomedical Experimental and Clinical Science "Mario Serio", University of Florence, 50134 Florence, Italy; INFN, Florence Division, 50134 Florence, Italy
31
Siviengphanom S, Gandomkar Z, Lewis SJ, Brennan PC. Mammography-based Radiomics in Breast Cancer: A Scoping Review of Current Knowledge and Future Needs. Acad Radiol 2021; 29:1228-1247. [PMID: 34799256 DOI: 10.1016/j.acra.2021.09.025] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Revised: 09/14/2021] [Accepted: 09/26/2021] [Indexed: 12/19/2022]
Abstract
RATIONALE AND OBJECTIVES Breast cancer is a highly complex, heterogeneous disease. Current validated prognostic factors (e.g., histological grade, lymph node involvement, receptor status, and proliferation index), as well as multigene tests (e.g., Oncotype DX and PAM50), are helpful for describing breast cancer characteristics and predicting recurrence risk and survival. Nevertheless, they are invasive and cannot capture the complete heterogeneity of the entire breast tumor, resulting in up to 30% of patients being either over- or under-treated for breast cancer. Furthermore, multigene testing is time-consuming and expensive. Radiomics is emerging as a reliable, accurate, non-invasive, and cost-effective approach that uses quantitative image features to classify breast cancer characteristics and predict patient outcomes. Several recent radiomics reviews have been conducted in breast cancer; however, mammography-based radiomics studies specifically have not been well discussed. This scoping review aims to assess and summarize the current evidence on the potential usefulness of mammography-based (i.e., digital mammography, digital breast tomosynthesis, and contrast-enhanced mammography) radiomics in predicting factors that describe breast cancer characteristics, recurrence, and survival. MATERIALS AND METHODS The PubMed database and eligible text references were searched using relevant keywords to identify studies published between 2015 and December 19, 2020. The collected studies were screened and assessed based on the inclusion and exclusion criteria. RESULTS Eighteen eligible studies were included and organized into three main sections: radiomics predicting breast cancer characteristics, radiomics predicting breast cancer recurrence and survival, and radiomics integrated with clinical data. The majority of publications reported retrospective studies, while three examined prospective cohorts. Encouraging results were reported, suggesting the potential clinical value of mammography-based radiomics. Further efforts are required to standardize radiomics approaches and to catalogue reproducible and relevant mammographic radiomic features. The role of integrating radiomics with other information is discussed. CONCLUSION The potential role of mammography-based radiomics appears promising, but more effort is required to further evaluate its reliability as a routine clinical tool.
Affiliation(s)
- Somphone Siviengphanom
- Discipline of Medical Imaging Sciences, Sydney School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Level 7, Susan Wakil Health Building D18, Sydney, NSW 2006, Australia
- Ziba Gandomkar
- Discipline of Medical Imaging Sciences, Sydney School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Level 7, Susan Wakil Health Building D18, Sydney, NSW 2006, Australia
- Sarah J Lewis
- Discipline of Medical Imaging Sciences, Sydney School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Level 7, Susan Wakil Health Building D18, Sydney, NSW 2006, Australia
- Patrick C Brennan
- Discipline of Medical Imaging Sciences, Sydney School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Level 7, Susan Wakil Health Building D18, Sydney, NSW 2006, Australia
32
Abstract
This article gives a brief overview of the development of artificial intelligence in clinical breast imaging. For multiple decades, artificial intelligence (AI) methods have been developed and translated for breast imaging tasks such as detection, diagnosis, and assessing response to therapy. As imaging modalities arise to support breast cancer screening programs and diagnostic examinations, including full-field digital mammography, breast tomosynthesis, ultrasound, and MRI, AI techniques parallel the efforts with more complex algorithms, faster computers, and larger data sets. AI methods include human-engineered radiomics algorithms and deep learning methods. Examples of these AI-supported clinical tasks are given along with commentary on the future.
Affiliation(s)
- Qiyuan Hu
- Committee on Medical Physics, Department of Radiology, The University of Chicago, 5841 S Maryland Avenue, MC2026, Chicago, IL 60637, USA
- Maryellen L Giger
- Committee on Medical Physics, Department of Radiology, The University of Chicago, 5841 S Maryland Avenue, MC2026, Chicago, IL 60637, USA
33
Bera K, Braman N, Gupta A, Velcheti V, Madabhushi A. Predicting cancer outcomes with radiomics and artificial intelligence in radiology. Nat Rev Clin Oncol 2021; 19:132-146. [PMID: 34663898 DOI: 10.1038/s41571-021-00560-7] [Citation(s) in RCA: 239] [Impact Index Per Article: 79.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/03/2021] [Indexed: 12/14/2022]
Abstract
The successful use of artificial intelligence (AI) for diagnostic purposes has prompted the application of AI-based cancer imaging analysis to address other, more complex, clinical needs. In this Perspective, we discuss the next generation of challenges in clinical decision-making that AI tools can solve using radiology images, such as prognostication of outcome across multiple cancers, prediction of response to various treatment modalities, discrimination of benign treatment confounders from true progression, identification of unusual response patterns and prediction of the mutational and molecular profile of tumours. We describe the evolution of and opportunities for AI in oncology imaging, focusing on hand-crafted radiomic approaches and deep learning-derived representations, with examples of their application for decision support. We also address the challenges faced on the path to clinical adoption, including data curation and annotation, interpretability, and regulatory and reimbursement issues. We hope to demystify AI in radiology for clinicians by helping them to understand its limitations and challenges, as well as the opportunities it provides as a decision-support tool in cancer management.
Affiliation(s)
- Kaustav Bera
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA; Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Nathaniel Braman
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA; Tempus Labs, Chicago, IL, USA
- Amit Gupta
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA; Department of Radiology, University Hospitals Cleveland Medical Center, Cleveland, OH, USA
- Vamsidhar Velcheti
- Department of Hematology and Oncology, NYU Langone Health, New York, NY, USA
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA; Louis Stokes Cleveland Veterans Medical Center, Cleveland, OH, USA
34
Nomura Y, Hanaoka S, Takenaga T, Nakao T, Shibata H, Miki S, Yoshikawa T, Watadani T, Hayashi N, Abe O. Preliminary study of generalized semiautomatic segmentation for 3D voxel labeling of lesions based on deep learning. Int J Comput Assist Radiol Surg 2021; 16:1901-1913. [PMID: 34652606 DOI: 10.1007/s11548-021-02504-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2021] [Accepted: 09/17/2021] [Indexed: 11/28/2022]
Abstract
PURPOSE The three-dimensional (3D) voxel labeling of lesions requires significant effort from radiologists during the development of computer-aided detection software. To reduce the time required for 3D voxel labeling, we aimed to develop a generalized semiautomatic segmentation method based on deep learning via a data augmentation-based domain generalization framework. In this study, we investigated whether a generalized semiautomatic segmentation model trained on two types of lesion can segment previously unseen types of lesion. METHODS We targeted lung nodules in chest CT images, liver lesions in hepatobiliary-phase images of Gd-EOB-DTPA-enhanced MR imaging, and brain metastases in contrast-enhanced MR images. For each lesion, the 32 × 32 × 32 isotropic volume of interest (VOI) around the center of gravity of the lesion was extracted. The VOI was input into a 3D U-Net model to define the label of the lesion. For each type of target lesion, we compared five types of data augmentation and two types of input data. RESULTS For all target lesions, the highest Dice coefficients among the training patterns were obtained when combining the existing data augmentation-based domain generalization framework with random monochrome inversion and when using the resized VOI as the input image. The Dice coefficients were 0.639 ± 0.124 for the lung nodules, 0.660 ± 0.137 for the liver lesions, and 0.727 ± 0.115 for the brain metastases. CONCLUSIONS Our generalized semiautomatic segmentation model could label three previously unseen types of lesion with contrast differing from their surroundings. In addition, using the resized VOI as the input image enables adaptation to various lesion sizes even when the size distribution differs between the training and test sets.
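The Dice coefficients reported above compare a predicted binary mask against the reference label as twice the overlap divided by the total voxel count of both masks. A minimal sketch of that computation (not the authors' code; the toy 32 × 32 × 32 masks are invented for illustration):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy example on a 32 x 32 x 32 VOI, the patch size used in the study
pred = np.zeros((32, 32, 32), dtype=bool)
truth = np.zeros((32, 32, 32), dtype=bool)
pred[10:20, 10:20, 10:20] = True   # predicted cube
truth[12:22, 10:20, 10:20] = True  # reference cube, shifted by 2 voxels
print(dice_coefficient(pred, truth))
```

With the two 10-voxel cubes overlapping in 8 of 10 slices along one axis, the Dice coefficient is 0.8, in the same range as the values reported for the three lesion types.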
Affiliation(s)
- Yukihiro Nomura
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan; Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, 263-8522, Japan
- Shouhei Hanaoka
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Tomomi Takenaga
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Takahiro Nakao
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Hisaichi Shibata
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Soichiro Miki
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Takeyuki Watadani
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Naoto Hayashi
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Osamu Abe
- Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
35
Durkee MS, Abraham R, Clark MR, Giger ML. Artificial Intelligence and Cellular Segmentation in Tissue Microscopy Images. The American Journal of Pathology 2021; 191:1693-1701. [PMID: 34129842 PMCID: PMC8485056 DOI: 10.1016/j.ajpath.2021.05.022] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/19/2021] [Revised: 05/07/2021] [Accepted: 05/17/2021] [Indexed: 02/05/2023]
Abstract
With applications in object detection, image feature extraction, image classification, and image segmentation, artificial intelligence is facilitating high-throughput analysis of image data in a variety of biomedical imaging disciplines, ranging from radiology and pathology to cancer biology and immunology. Specifically, a growth in research on deep learning has led to the widespread application of computer-visualization techniques for analyzing and mining data from biomedical images. The availability of open-source software packages and the development of novel, trainable deep neural network architectures has led to increased accuracy in cell detection and segmentation algorithms. By automating cell segmentation, it is now possible to mine quantifiable cellular and spatio-cellular features from microscopy images, providing insight into the organization of cells in various pathologies. This mini-review provides an overview of the current state of the art in deep learning- and artificial intelligence-based methods of segmentation and data mining of cells in microscopy images of tissue.
Affiliation(s)
- Madeleine S Durkee
- Department of Radiology and the Committee on Medical Physics, University of Chicago, Chicago, Illinois
- Rebecca Abraham
- Department of Medicine, Section of Rheumatology and Gwen Knapp Center for Lupus and Immunology Research, University of Chicago, Chicago, Illinois
- Marcus R Clark
- Department of Medicine, Section of Rheumatology and Gwen Knapp Center for Lupus and Immunology Research, University of Chicago, Chicago, Illinois
- Maryellen L Giger
- Department of Radiology and the Committee on Medical Physics, University of Chicago, Chicago, Illinois
36
A Histogram-Based Low-Complexity Approach for the Effective Detection of COVID-19 Disease from CT and X-ray Images. Applied Sciences (Basel) 2021. [DOI: 10.3390/app11198867] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
The global COVID-19 pandemic has certainly posed one of the more difficult challenges for researchers in the current century. The development of an automatic diagnostic tool able to detect the disease in its early stage could undoubtedly offer a great advantage in the battle against the pandemic. In this regard, most research efforts have focused on applying Deep Learning (DL) techniques to chest images, including traditional chest X-rays (CXRs) and Computed Tomography (CT) scans. Although these approaches have demonstrated their effectiveness in detecting COVID-19, they have high computational complexity and require large datasets for training. In addition, large collections of COVID-19 CXRs and CT scans may not be available to researchers. To this end, we propose an approach based on the histogram evaluated over a common class of images that is taken as the target. A suitable inter-histogram distance measures how far this target histogram is from the histogram evaluated on a test image: if the distance exceeds a threshold, the test image is labeled as an anomaly, i.e., the scan belongs to a patient affected by COVID-19. Extensive experimental results and comparisons with benchmark state-of-the-art methods support the effectiveness of the approach and demonstrate that, at least when the images of the considered datasets are sufficiently homogeneous (i.e., few outliers are present), it is not necessary to resort to complex-to-implement DL techniques to attain effective detection of COVID-19. Despite the simplicity of the proposed approach, all the considered metrics (i.e., accuracy, precision, recall, and F-measure) attain a value of 1.0 on the selected datasets, a result comparable to the corresponding state-of-the-art DNN approaches but with remarkable computational simplicity.
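The decision rule described above reduces to a histogram comparison against a threshold. A minimal sketch under stated assumptions (the paper's exact binning, distance metric, and threshold are not given here, so an L1 distance and synthetic images stand in for illustration):

```python
import numpy as np

def grey_histogram(image: np.ndarray, bins: int = 256) -> np.ndarray:
    """Normalised grey-level histogram of an image (sums to 1)."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / hist.sum()

def histogram_distance(h1: np.ndarray, h2: np.ndarray) -> float:
    """L1 inter-histogram distance (an assumption; the paper's metric may differ)."""
    return float(np.abs(h1 - h2).sum())

def is_anomalous(test_image: np.ndarray, target_hist: np.ndarray,
                 threshold: float) -> bool:
    """Label the scan anomalous (COVID-19 suspected) if its histogram
    is farther than `threshold` from the target histogram."""
    return histogram_distance(grey_histogram(test_image), target_hist) > threshold

# Toy example: build the target histogram from a "normal" image,
# then test a brightness-shifted image against it
rng = np.random.default_rng(0)
normal = rng.normal(100, 10, size=(64, 64)).clip(0, 255)
shifted = rng.normal(180, 10, size=(64, 64)).clip(0, 255)
target = grey_histogram(normal)
print(is_anomalous(normal, target, threshold=0.5),
      is_anomalous(shifted, target, threshold=0.5))
```

The shifted image's grey-level distribution barely overlaps the target's, so its L1 distance approaches the maximum of 2 and it is flagged, while the normal image is not.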
37
Huang G, Wei X, Tang H, Bai F, Lin X, Xue D. A systematic review and meta-analysis of diagnostic performance and physicians' perceptions of artificial intelligence (AI)-assisted CT diagnostic technology for the classification of pulmonary nodules. J Thorac Dis 2021; 13:4797-4811. [PMID: 34527320 PMCID: PMC8411165 DOI: 10.21037/jtd-21-810] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2021] [Accepted: 07/09/2021] [Indexed: 12/26/2022]
Abstract
Background Lung cancer was the second most commonly diagnosed cancer and the leading cause of cancer death in 2020. Although artificial intelligence (AI)-assisted diagnostic technologies have shown promise and have been used in clinical practice in recent years, no products related to AI-assisted CT diagnostic technologies for the classification of pulmonary nodules have been approved by the National Medical Products Administration in China. The objective of this article was to systematically review the diagnostic performance of AI-assisted CT diagnostic technology for the classification of pulmonary nodules as benign or malignant and to analyze physicians’ perceptions of this technology in China. Methods All relevant studies from 6 literature databases were searched and screened according to the inclusion and exclusion criteria. Data were extracted and study quality was assessed by two reviewers. Study heterogeneity and publication bias were estimated. A questionnaire survey on the perceptions of physicians was conducted in 9 public tertiary hospitals in China. A meta-analysis, meta-regression, and a univariate logistic model were used in the systematic review and to explore the association of physicians’ perceptions with their rate of support for the clinical application of the technology. Results Twenty-seven studies with 5,727 pulmonary nodules were included in the meta-analysis. We found that the quality of the included studies was generally acceptable and that the pooled sensitivity and specificity of AI-assisted CT diagnostic technology for the classification of pulmonary nodules as benign or malignant were 0.90 and 0.89, respectively. The pooled diagnostic odds ratio (DOR) was 70.33. The majority of the surveyed physicians in China perceived “reduced workload for radiologists” and “improved diagnostic efficiency” as the important benefits of this technology. In addition, diagnostic accuracy (including misdiagnosis) and practical experience were significantly associated with whether physicians supported its clinical application. Conclusions In the context of lung cancer diagnosis, AI-assisted CT diagnostic technology for the classification of pulmonary nodules as benign or malignant has good diagnostic performance, but its specificity needs to be improved.
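The diagnostic odds ratio summarizes a test as the odds of a positive result in diseased versus non-diseased patients. The pooled values above come from a meta-analytic model across 27 studies; the sketch below shows only the per-study arithmetic from a single 2 × 2 table, with invented counts chosen to roughly match the pooled sensitivity and specificity:

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, and diagnostic odds ratio from a 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # DOR = (TP/FN) / (FP/TN): odds of a positive test in diseased
    # patients divided by the same odds in non-diseased patients
    dor = (tp / fn) / (fp / tn)
    return {"sensitivity": sensitivity, "specificity": specificity, "dor": dor}

# Hypothetical single-study counts (not from the review)
m = diagnostic_metrics(tp=90, fp=11, fn=10, tn=89)
print(m)
```

These toy counts give sensitivity 0.90, specificity 0.89, and a DOR near 73; the review's pooled DOR of 70.33 differs slightly because it is pooled across studies rather than computed from one plug-in table.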
Affiliation(s)
- Guo Huang
- NHC Key Laboratory of Health Technology Assessment (Fudan University), Department of Hospital Management, School of Public Health, Fudan University, Shanghai, China
- Xuefeng Wei
- Health Commission of Gansu Province, Lanzhou, China
- Huiqin Tang
- Health Commission of Hubei Province, Wuhan, China
- Fei Bai
- National Center for Medical Service Administration, Beijing, China
- Xia Lin
- National Center for Medical Service Administration, Beijing, China
- Di Xue
- NHC Key Laboratory of Health Technology Assessment (Fudan University), Department of Hospital Management, School of Public Health, Fudan University, Shanghai, China
38
Dual-energy three-compartment breast imaging for compositional biomarkers to improve detection of malignant lesions. Communications Medicine 2021; 1:29. [PMID: 35602210 PMCID: PMC9053198 DOI: 10.1038/s43856-021-00024-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Accepted: 07/26/2021] [Indexed: 11/08/2022] Open
Abstract
Background While breast imaging techniques such as full-field digital mammography and digital breast tomosynthesis have helped to reduce breast cancer mortality, low specificity remains a problem, resulting in unnecessary biopsies. The information used in diagnostic decisions is primarily based on lesion morphology. We explore a dual-energy compositional breast imaging technique known as three-compartment breast (3CB) imaging to show how the addition of compositional information improves malignancy detection. Methods Women who presented with Breast Imaging-Reporting and Data System (BI-RADS) diagnostic category 4 or 5 lesions and who were scheduled for breast biopsies were consecutively recruited for both standard mammography and 3CB imaging. Computer-aided detection (CAD) software was used to assign a morphology-based prediction of malignancy for all biopsied lesions. Compositional signatures for all lesions were calculated using 3CB imaging, and a neural network combined the CAD predictions with composition to predict a new probability of malignancy. The CAD and neural network predictions were compared with the biopsy pathology. Results The addition of 3CB compositional information to CAD improves malignancy predictions, yielding an area under the receiver operating characteristic curve (AUC) of 0.81 (confidence interval (CI) 0.74–0.88) on a held-out test set, whereas CAD software alone achieves an AUC of 0.69 (CI 0.60–0.78). We also find that invasive breast cancers have a unique compositional signature characterized by reduced lipid content and increased water and protein content compared with surrounding tissues. Conclusion Clinically, 3CB may provide increased accuracy in predicting malignancy and a feasible avenue for exploring compositional breast imaging biomarkers. Leong et al. use a dual-energy mammography technique termed three-compartment breast imaging to study breast composition and detect malignant lesions.
Combining compositional information with morphology-based computer-aided diagnosis improves breast cancer detection. Breast cancers are detected by mammography. This study explored the use of a particular mammography technique to obtain information about the composition of cancerous and non-cancerous breast tissue. The technique provided measures of lipid (fat), water, and protein content in addition to the shape characteristics provided by standard mammography. Adding information about the composition of the tissue to its shape characteristics increased the ability to distinguish invasive cancerous tissue from its unaffected surroundings. Invasive breast cancer tissues were also found to exhibit lower lipid, higher protein, and higher water content compared with other non-invasive, non-cancerous breast tissues in which cancer was suspected. Our findings highlight the added value of including the composition of breast tissue when deciding whether biopsy of suspicious tissue is warranted.
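The AUC comparison reported above can be illustrated with a minimal, self-contained sketch (pure Python; the scores and labels below are hypothetical, since the study's actual CAD and neural-network outputs are not reproduced here). A rank-based AUC estimates the probability that a randomly chosen malignant lesion receives a higher score than a randomly chosen benign one:

```python
def auc(scores, labels):
    # Rank-based AUC: fraction of (positive, negative) pairs in which the
    # positive case outscores the negative one (ties count half).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical held-out set: 1 = malignant on biopsy pathology.
labels       = [1, 1, 1, 0, 0, 0, 0]
cad_only     = [0.7, 0.4, 0.6, 0.5, 0.3, 0.6, 0.2]  # morphology-based CAD score
cad_plus_3cb = [0.9, 0.6, 0.8, 0.5, 0.3, 0.4, 0.2]  # CAD + compositional input

print(auc(cad_only, labels))      # morphology alone
print(auc(cad_plus_3cb, labels))  # morphology + composition separates better
```

In the study itself, resampling the held-out set (e.g., bootstrapping) would additionally yield the reported confidence intervals.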
|
39
|
Anatomic Point-Based Lung Region with Zone Identification for Radiologist Annotation and Machine Learning for Chest Radiographs. J Digit Imaging 2021; 34:922-931. [PMID: 34327625 DOI: 10.1007/s10278-021-00494-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/01/2020] [Revised: 06/02/2021] [Accepted: 07/05/2021] [Indexed: 10/20/2022] Open
Abstract
Our objective was to investigate the reliability and usefulness of anatomic point-based lung zone segmentation on chest radiographs (CXRs) as a reference-standard framework and to evaluate the accuracy of automated point placement. Two hundred frontal CXRs were presented to two radiologists, who identified five anatomic points: two at the lung apices, one at the top of the aortic arch, and two at the costophrenic angles. Of these 1000 anatomic points, 161 (16.1%) were obscured (mostly by pleural effusions). Observer variations were investigated. Eight anatomic zones were then automatically generated from the manually placed points, and a prototype algorithm was developed that used the point-based lung zone segmentation to detect cardiomegaly and the levels of the diaphragm and pleural effusions. A trained U-Net neural network was used to automatically place the five points within 379 CXRs of an independent database. Intra- and inter-observer variation in the mean distance between corresponding anatomic points was larger for obscured points (8.7 mm and 20 mm, respectively) than for visible points (4.3 mm and 7.6 mm, respectively). The algorithm using the point-based lung zone segmentation could measure the cardiothoracic ratio and the positions of the diaphragm and pleural effusions. The mean distance between corresponding points placed by the radiologist and by the neural network was 6.2 mm. The network identified 95% of the radiologist-indicated points, with only 3% of network-identified points being false positives. In conclusion, a reliable anatomic point-based lung segmentation method for CXRs has been developed, with expected utility for establishing reference standards for machine learning applications.
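The observer-variation figures above are mean distances between corresponding landmark points. A small sketch of that comparison (pure Python; the coordinates below are invented for illustration, not taken from the study):

```python
import math

def mean_point_distance(points_a, points_b):
    # Mean Euclidean distance between corresponding anatomic points
    # (both lists in the same units and the same landmark order).
    return sum(math.dist(a, b) for a, b in zip(points_a, points_b)) / len(points_a)

# Hypothetical (x, y) positions in mm of the five CXR landmarks
# (two apices, top of aortic arch, two costophrenic angles):
radiologist = [(30.0, 40.0), (170.0, 42.0), (100.0, 80.0), (20.0, 250.0), (180.0, 252.0)]
network     = [(33.0, 44.0), (168.0, 40.0), (104.0, 83.0), (22.0, 255.0), (176.0, 250.0)]

print(mean_point_distance(radiologist, network))  # mm, averaged over the 5 points
```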
|
40
|
Nomura Y, Hanaoka S, Nakao T, Hayashi N, Yoshikawa T, Miki S, Watadani T, Abe O. Performance changes due to differences in training data for cerebral aneurysm detection in head MR angiography images. Jpn J Radiol 2021; 39:1039-1048. [PMID: 34125368 DOI: 10.1007/s11604-021-01153-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 02/25/2021] [Accepted: 06/08/2021] [Indexed: 01/10/2023]
Abstract
PURPOSE The performance of computer-aided detection (CAD) software depends on the quality and quantity of the dataset used for machine learning. If the data characteristics during development differ from those in practical use, the performance of CAD software degrades. In this study, we investigated changes in detection performance due to differences in training data for cerebral aneurysm detection software in head magnetic resonance angiography (MRA) images. MATERIALS AND METHODS We utilized three types of CAD software for cerebral aneurysm detection in MRA images, based on 3D local intensity structure analysis, graph-based features, and a convolutional neural network, respectively. For each type of CAD software, we compared three training patterns: two using single-site data and one using multisite data. We also carried out internal and external evaluations. RESULTS With single-site training data, the performance of the CAD software fluctuated widely and unpredictably when the training dataset was changed. Training on multisite data never yielded the lowest performance among the three training patterns for any CAD software or dataset. CONCLUSION Training cerebral aneurysm detection software on data collected from multiple sites is desirable to ensure stable software performance.
Affiliation(s)
- Yukihiro Nomura: Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Shouhei Hanaoka: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Takahiro Nakao: Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Naoto Hayashi: Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Takeharu Yoshikawa: Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Soichiro Miki: Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Takeyuki Watadani: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Osamu Abe: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
|
41
|
Agrawal T, Choudhary P. FocusCovid: automated COVID-19 detection using deep learning with chest X-ray images. Evolving Systems 2021; 13:519-533. [PMID: 38624806 PMCID: PMC8106902 DOI: 10.1007/s12530-021-09385-2] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Received: 01/14/2021] [Accepted: 04/29/2021] [Indexed: 12/28/2022]
Abstract
COVID-19 is an acronym for coronavirus disease 2019. The virus was initially called 2019-nCoV, and the International Committee on Taxonomy of Viruses (ICTV) later termed it SARS-CoV-2. On 30 January 2020, the World Health Organization (WHO) declared the outbreak a public health emergency of international concern. With an increasing number of COVID-19 cases, medical infrastructure capable of detecting suspected cases is essential. Medical imaging techniques such as computed tomography (CT) and chest radiography can play an important role in the early screening and detection of COVID-19 cases. It is important to identify and isolate cases to stop the further spread of the virus. Artificial intelligence can play an important role in COVID-19 detection and can decrease the workload on overburdened medical infrastructure. In this paper, a deep convolutional neural network-based architecture is proposed for COVID-19 detection using chest radiographs. The datasets used to train and test the model are available in public repositories. Despite the model's high accuracy, decisions on COVID-19 should be made in consultation with a trained medical clinician.
Affiliation(s)
- Tarun Agrawal: Department of Computer Science and Engineering, National Institute of Technology Hamirpur, Hamirpur, Himachal Pradesh 177005, India
- Prakash Choudhary: Department of Computer Science and Engineering, National Institute of Technology Hamirpur, Hamirpur, Himachal Pradesh 177005, India
|
42
|
Al-Karawi D, Al-Assam H, Du H, Sayasneh A, Landolfo C, Timmerman D, Bourne T, Jassim S. An Evaluation of the Effectiveness of Image-based Texture Features Extracted from Static B-mode Ultrasound Images in Distinguishing between Benign and Malignant Ovarian Masses. Ultrason Imaging 2021; 43:124-138. [PMID: 33629652 DOI: 10.1177/0161734621998091] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Indexed: 06/12/2023]
Abstract
Significant successes of machine learning approaches to image analysis in various applications have energized strong interest in automated diagnostic support systems for medical images. An evolving, in-depth understanding of how carcinogenesis changes the texture of the cellular networks of a mass or tumor has informed such diagnostic systems through more suitable image texture features and extraction methods. Several texture features have recently been applied to discriminate malignant from benign ovarian masses by analysing B-mode ultrasound images of the ovary, with varying levels of performance. However, a comparative performance evaluation of these reported features using a common set of clinically approved images has been lacking. This paper presents an empirical evaluation of seven commonly used texture features (histograms, moments of the histogram, local binary patterns [256-bin and 59-bin], histograms of oriented gradients, fractal dimensions, and Gabor filters), using a collection of 242 ultrasound scan images of ovarian masses with various pathological characteristics. The evaluation examines not only the effectiveness of classification schemes based on the individual texture features but also the effectiveness of various combinations of these schemes using simple majority-rule decision-level fusion. Support vector machine classifiers trained on the individual texture features, without any specific pre-processing, achieve accuracies between 75% and 85%, with the seven moments and the 256-bin LBP at the lower end and the Gabor filter at the upper end. Combining the classification results of the top k (k = 3, 5, 7) best-performing features further improves the overall accuracy to between 86% and 90%. These results demonstrate that each of the investigated image-based texture features provides informative support in distinguishing benign from malignant ovarian masses.
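The majority-rule decision-level fusion described above can be sketched in a few lines (pure Python; the feature names follow the abstract, but the per-classifier votes are invented for illustration). Each texture-feature classifier casts a benign/malignant vote per mass, and the fused decision is the per-case majority:

```python
from collections import Counter

def majority_fusion(per_classifier_labels):
    # Decision-level fusion: per-case majority vote across classifiers.
    # An odd number of classifiers (k = 3, 5, 7) avoids ties.
    return [Counter(votes).most_common(1)[0][0]
            for votes in zip(*per_classifier_labels)]

# Hypothetical outputs (1 = malignant, 0 = benign) of the top-3
# texture-feature classifiers on five ovarian masses:
gabor = [1, 0, 1, 1, 0]
hog   = [1, 0, 0, 1, 0]
lbp   = [0, 0, 1, 1, 1]

print(majority_fusion([gabor, hog, lbp]))  # fused benign/malignant decisions
```

Odd k keeps the vote decisive; with even k some tie-breaking rule (e.g., preferring the malignant call for safety) would have to be chosen.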
Affiliation(s)
- Hisham Al-Assam: School of Computing, University of Buckingham, Buckingham, UK
- Hongbo Du: School of Computing, University of Buckingham, Buckingham, UK
- Ahmad Sayasneh: Faculty of Life Sciences and Medicine, St Thomas Hospital, King's College London, London, UK
- Chiara Landolfo: Department of Development and Regeneration, Obstetrics and Gynaecology, University Hospitals KU Leuven, Leuven, Belgium; Queen Charlotte's and Chelsea Hospital, Imperial College, London, UK; Dipartimento Scienze della Salute della Donna, Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Rome, Italy
- Dirk Timmerman: Department of Development and Regeneration, Obstetrics and Gynaecology, University Hospitals KU Leuven, Leuven, Belgium
- Tom Bourne: Department of Development and Regeneration, Obstetrics and Gynaecology, University Hospitals KU Leuven, Leuven, Belgium; Queen Charlotte's and Chelsea Hospital, Imperial College, London, UK
- Sabah Jassim: School of Computing, University of Buckingham, Buckingham, UK
|
43
|
Shen L, Kann BH, Taylor RA, Shung DL. The Clinician's Guide to the Machine Learning Galaxy. Front Physiol 2021; 12:658583. [PMID: 33889088 PMCID: PMC8056037 DOI: 10.3389/fphys.2021.658583] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 01/26/2021] [Accepted: 03/10/2021] [Indexed: 11/13/2022] Open
Affiliation(s)
- Lin Shen: Department of Medicine, Brigham and Women's Hospital, Boston, MA, United States; Division of Gastroenterology, Hepatology and Endoscopy, Brigham and Women's Hospital, Boston, MA, United States
- Benjamin H Kann: Department of Radiation Oncology, Dana-Farber Cancer Institute/Brigham and Women's Hospital and Harvard Medical School, Boston, MA, United States; Artificial Intelligence in Medicine Program, Brigham and Women's Hospital, Boston, MA, United States
- R Andrew Taylor: Department of Emergency Medicine, Yale School of Medicine, New Haven, CT, United States
- Dennis L Shung: Section of Digestive Diseases, Department of Medicine, Yale School of Medicine, New Haven, CT, United States
|
44
|
Chan HP, Hadjiiski LM, Samala RK. Computer-aided diagnosis in the era of deep learning. Med Phys 2021; 47:e218-e227. [PMID: 32418340 DOI: 10.1002/mp.13764] [Citation(s) in RCA: 90] [Impact Index Per Article: 30.0] [Received: 01/29/2019] [Revised: 05/13/2019] [Accepted: 05/13/2019] [Indexed: 12/15/2022] Open
Abstract
Computer-aided diagnosis (CAD) has been a major field of research for the past few decades. CAD uses machine learning methods to analyze imaging and/or nonimaging patient data and make an assessment of the patient's condition, which can then be used to assist clinicians in their decision-making process. The recent success of deep learning spurs new research and development efforts to improve CAD performance and to develop CAD for many other complex clinical tasks. In this paper, we discuss the potential of and challenges in developing CAD tools using deep learning or artificial intelligence (AI) in general, the pitfalls and lessons learned from CAD in screening mammography, and the considerations needed for future implementation of CAD or AI in clinical use. It is hoped that past experience and deep learning technology will lead to successful advancement and lasting growth in this new era of CAD, enabling CAD to deliver intelligent aids that improve health care.
Affiliation(s)
- Heang-Ping Chan: Department of Radiology, University of Michigan, Ann Arbor, MI, 48109-5842, USA
- Lubomir M Hadjiiski: Department of Radiology, University of Michigan, Ann Arbor, MI, 48109-5842, USA
- Ravi K Samala: Department of Radiology, University of Michigan, Ann Arbor, MI, 48109-5842, USA
|
45
|
Balagurunathan Y, Mitchell R, El Naqa I. Requirements and reliability of AI in the medical context. Phys Med 2021; 83:72-78. [PMID: 33721700 PMCID: PMC8915137 DOI: 10.1016/j.ejmp.2021.02.024] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Received: 12/07/2020] [Revised: 02/04/2021] [Accepted: 02/23/2021] [Indexed: 12/12/2022] Open
Abstract
The digital information age has catalyzed renewed interest in artificial intelligence (AI) approaches, especially the subclass of computer algorithms popularly grouped under machine learning (ML). These methods have made it possible to go beyond the limits of human cognition in understanding complex, high-dimensional data. The medical sciences have seen steady use of these methods but have been slow to adopt them to improve patient care. Several significant impediments have diluted this effort, including the availability of curated, diverse datasets for model building, reliable human-level interpretation of the resulting models, and reliable reproducibility of the methods for routine clinical use. Each of these aspects involves limiting conditions that must be balanced against data and model-building effort, clinical implementation, and the cost of integration into translational workflows with minimal patient-level harm, all of which may directly affect future clinical adoption. In this review paper, we assess each aspect of the problem in the context of the reliable use of ML methods in oncology, as a representative case study, with the goal of safeguarding utility and improving patient care in medicine in general.
Affiliation(s)
- Ross Mitchell: Department of Machine Learning, H. Lee Moffitt Cancer Center, Tampa, FL, USA; Health Data Services, H. Lee Moffitt Cancer Center, Tampa, FL, USA
- Issam El Naqa: Department of Machine Learning, H. Lee Moffitt Cancer Center, Tampa, FL, USA
|
46
|
Mun SK, Wong KH, Lo SCB, Li Y, Bayarsaikhan S. Artificial Intelligence for the Future Radiology Diagnostic Service. Front Mol Biosci 2021; 7:614258. [PMID: 33585563 PMCID: PMC7875875 DOI: 10.3389/fmolb.2020.614258] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Received: 10/05/2020] [Accepted: 12/29/2020] [Indexed: 12/18/2022] Open
Abstract
Radiology has historically been a leader of digital transformation in healthcare. The introduction of digital imaging systems, picture archiving and communication systems (PACS), and teleradiology transformed radiology services over the past 30 years. Radiology is again at a crossroads for the next generation of transformation, possibly evolving into a one-stop integrated diagnostic service. Artificial intelligence (AI) and machine learning promise to offer radiology powerful new digital tools to facilitate this transformation. The radiology community has been developing computer-aided diagnosis (CAD) tools based on machine learning (ML) for the past 20 years. Among various AI techniques, deep-learning convolutional neural networks (CNNs) and their variants have been widely used in medical image pattern recognition. Since the 1990s, many CAD tools and products have been developed; however, clinical adoption has been slow due to a lack of substantial clinical advantages, difficulties integrating into existing workflows, and uncertain business models. This paper proposes three pathways for AI's role in radiology beyond current CNN-based capabilities: (1) improving the performance of CAD, (2) improving the productivity of radiology services through AI-assisted workflows, and (3) developing radiomics that integrates data from radiology, pathology, and genomics to facilitate the emergence of a new integrated diagnostic service.
Affiliation(s)
- Seong K. Mun: Arlington Innovation Center: Health Research, Virginia Tech-Washington DC Area, Arlington, VA, United States
|
47
|
|
48
|
Kim KH, Lee SH. Applications of Artificial Intelligence in Mammography from a Development and Validation Perspective. J Korean Soc Radiol 2021; 82:12-28. [PMID: 36237466 PMCID: PMC9432399 DOI: 10.3348/jksr.2020.0205] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/16/2020] [Revised: 01/21/2021] [Accepted: 01/26/2021] [Indexed: 11/15/2022]
|
49
|
Drukker K, Yan P, Sibley A, Wang G. Biomedical imaging and analysis through deep learning. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00004-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 12/01/2022]
|
50
|
Giger ML. AI/Machine Learning in Medical Imaging. Mol Imaging 2021. [DOI: 10.1016/b978-0-12-816386-3.00052-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/20/2022] Open
|