1. Ouis MY, Akhloufi MA. Deep learning for report generation on chest X-ray images. Comput Med Imaging Graph 2024;111:102320. [PMID: 38134726] [DOI: 10.1016/j.compmedimag.2023.102320]
Abstract
Medical imaging, specifically chest X-ray image analysis, is a crucial component of early disease detection and screening in healthcare. Deep learning techniques, such as convolutional neural networks (CNNs), have emerged as powerful tools for computer-aided diagnosis (CAD) in chest X-ray image analysis. These techniques have shown promising results in automating tasks such as classification, detection, and segmentation of abnormalities in chest X-ray images, with the potential to surpass human radiologists. In this review, we provide an overview of the importance of chest X-ray image analysis, historical developments, impact of deep learning techniques, and availability of labeled databases. We specifically focus on advancements and challenges in radiology report generation using deep learning, highlighting potential future advancements in this area. The use of deep learning for report generation has the potential to reduce the burden on radiologists, improve patient care, and enhance the accuracy and efficiency of chest X-ray image analysis in medical imaging.
Affiliation(s)
- Mohammed Yasser Ouis: Perception, Robotics and Intelligent Machines Lab (PRIME), Department of Computer Science, Université de Moncton, Moncton, NB E1C 3E9, Canada.
- Moulay A Akhloufi: Perception, Robotics and Intelligent Machines Lab (PRIME), Department of Computer Science, Université de Moncton, Moncton, NB E1C 3E9, Canada.
2. Gürsoy E, Kaya Y. An overview of deep learning techniques for COVID-19 detection: methods, challenges, and future works. Multimedia Systems 2023;29:1603-1627. [PMID: 37261262] [PMCID: PMC10039775] [DOI: 10.1007/s00530-023-01083-0]
Abstract
The World Health Organization (WHO) declared a pandemic in response to the coronavirus COVID-19 in 2020, which resulted in numerous deaths worldwide. Although the disease appears to have lost its impact, millions of people have been affected by this virus, and new infections still occur. Identifying COVID-19 requires a reverse transcription-polymerase chain reaction test (RT-PCR) or analysis of medical data. Due to the high cost and time required to scan and analyze medical data, researchers are focusing on using automated computer-aided methods. This review examines the applications of deep learning (DL) and machine learning (ML) in detecting COVID-19 using medical data such as CT scans, X-rays, cough sounds, MRIs, ultrasound, and clinical markers. First, the data preprocessing, the features used, and the current COVID-19 detection methods are divided into two subsections, and the studies are discussed. Second, the reported publicly available datasets, their characteristics, and the potential comparison materials mentioned in the literature are presented. Third, a comprehensive comparison is made by contrasting the similar and different aspects of the studies. Finally, the results, gaps, and limitations are summarized to stimulate the improvement of COVID-19 detection methods, and the study concludes by listing some future research directions for COVID-19 classification.
Affiliation(s)
- Ercan Gürsoy: Department of Computer Engineering, Adana Alparslan Turkes Science and Technology University, 01250 Adana, Turkey
- Yasin Kaya: Department of Computer Engineering, Adana Alparslan Turkes Science and Technology University, 01250 Adana, Turkey
3. Field EL, Tam W, Moore N, McEntee M. Efficacy of Artificial Intelligence in the Categorisation of Paediatric Pneumonia on Chest Radiographs: A Systematic Review. Children (Basel) 2023;10:576. [PMID: 36980134] [PMCID: PMC10047666] [DOI: 10.3390/children10030576]
Abstract
This study aimed to systematically review the literature to synthesise and summarise the evidence surrounding the efficacy of artificial intelligence (AI) in classifying paediatric pneumonia on chest radiographs (CXRs). Following the initial search of studies that matched the pre-set criteria, their data were extracted using a data extraction tool, and the included studies were assessed via critical appraisal tools and risk of bias. Results were accumulated, and outcome measures analysed included sensitivity, specificity, accuracy, and area under the curve (AUC). Five studies met the inclusion criteria. The highest sensitivity was by an ensemble AI algorithm (96.3%). DenseNet201 obtained the highest level of specificity and accuracy (94%, 95%). The most outstanding AUC value was achieved by the VGG16 algorithm (96.2%). Some of the AI models achieved close to 100% diagnostic accuracy. To assess the efficacy of AI in a clinical setting, these AI models should be compared to that of radiologists. The included and evaluated AI algorithms showed promising results. These algorithms can potentially ease and speed up diagnosis once the studies are replicated and their performances are assessed in clinical settings, potentially saving millions of lives.
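The outcome measures pooled in this review (sensitivity, specificity, accuracy, AUC) are simple functions of binary predictions and classifier scores. A minimal illustrative sketch with toy labels (not data from the included studies):

```python
def confusion_counts(y_true, y_pred):
    """Tally the binary confusion matrix (1 = pneumonia present)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def sensitivity(tp, tn, fp, fn):
    # Fraction of true pneumonia cases flagged positive.
    return tp / (tp + fn)

def specificity(tp, tn, fp, fn):
    # Fraction of healthy cases correctly called negative.
    return tn / (tn + fp)

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def auc(y_true, scores):
    """AUC via the Mann-Whitney formulation: the probability that a random
    positive case is scored above a random negative case."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The threshold-free AUC is why it is reported alongside the thresholded metrics: two models with the same accuracy can rank cases very differently.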
Affiliation(s)
- Erica Louise Field: Discipline of Medical Imaging and Radiation Therapy, University College Cork, College Road, T12 K8AF Cork, Ireland
- Winnie Tam (corresponding author): Department of Midwifery and Radiography, University of London, Northampton Square, London EC1V 0HB, UK
- Niamh Moore: Discipline of Medical Imaging and Radiation Therapy, University College Cork, College Road, T12 K8AF Cork, Ireland
- Mark McEntee: Discipline of Medical Imaging and Radiation Therapy, University College Cork, College Road, T12 K8AF Cork, Ireland
4. Potnis KC, Ross JS, Aneja S, Gross CP, Richman IB. Artificial Intelligence in Breast Cancer Screening: Evaluation of FDA Device Regulation and Future Recommendations. JAMA Intern Med 2022;182:1306-1312. [PMID: 36342705] [PMCID: PMC10623674] [DOI: 10.1001/jamainternmed.2022.4969]
Abstract
Importance Contemporary approaches to artificial intelligence (AI) based on deep learning have generated interest in the application of AI to breast cancer screening (BCS). The US Food and Drug Administration (FDA) has approved several next-generation AI products indicated for BCS in recent years; however, questions regarding their accuracy, appropriate use, and clinical utility remain. Objectives To describe the current FDA regulatory process for AI products, summarize the evidence used to support FDA clearance and approval of AI products indicated for BCS, consider the advantages and limitations of current regulatory approaches, and suggest ways to improve the current system. Evidence Review Premarket notifications and other publicly available documents used for FDA clearance and approval of AI products indicated for BCS from January 1, 2017, to December 31, 2021. Findings Nine AI products indicated for BCS for identification of suggestive lesions and mammogram triage were included. Most of the products had been cleared through the 510(k) pathway, and all clearances were based on previously collected retrospective data; 6 products used multicenter designs; 7 products used enriched data; and 4 lacked details on whether products were externally validated. Test performance measures, including sensitivity, specificity, and area under the curve, were the main outcomes reported. Most of the devices used tissue biopsy as the criterion standard for BCS accuracy evaluation. Other clinical outcome measures, including cancer stage at diagnosis and interval cancer detection, were not reported for any of the devices. Conclusions and Relevance The findings of this review suggest important gaps in reporting of data sources, data set type, validation approach, and clinical utility assessment. 
As AI-assisted reading becomes more widespread in BCS and other radiologic examinations, strengthened FDA evidentiary regulatory standards, development of postmarketing surveillance, a focus on clinically meaningful outcomes, and stakeholder engagement will be critical for ensuring the safety and efficacy of these products.
Affiliation(s)
- Joseph S Ross: Section of General Medicine, Department of Medicine, Yale School of Medicine, New Haven, Connecticut; Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, Connecticut; Department of Health Policy and Management, Yale School of Public Health, New Haven, Connecticut
- Sanjay Aneja: Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, Connecticut; Department of Therapeutic Radiology, Yale School of Medicine, New Haven, Connecticut
- Cary P Gross: Section of General Medicine, Department of Medicine, Yale School of Medicine, New Haven, Connecticut; Cancer Outcomes, Public Policy, and Effectiveness Research Center, Yale School of Medicine, New Haven, Connecticut; Department of Chronic Disease Epidemiology, Yale School of Public Health, New Haven, Connecticut
- Ilana B Richman: Section of General Medicine, Department of Medicine, Yale School of Medicine, New Haven, Connecticut; Cancer Outcomes, Public Policy, and Effectiveness Research Center, Yale School of Medicine, New Haven, Connecticut
5. Kadhim YA, Khan MU, Mishra A. Deep Learning-Based Computer-Aided Diagnosis (CAD): Applications for Medical Image Datasets. Sensors (Basel) 2022;22:8999. [PMID: 36433595] [PMCID: PMC9692938] [DOI: 10.3390/s22228999]
Abstract
Computer-aided diagnosis (CAD) has proved to be an effective and accurate method for diagnostic prediction over the years. This article focuses on the development of an automated CAD system with the intent to perform diagnosis as accurately as possible. Deep learning methods have been able to produce impressive results on medical image datasets. This study employs deep learning methods in conjunction with meta-heuristic algorithms and supervised machine-learning algorithms to perform an accurate diagnosis. Pre-trained convolutional neural networks (CNNs) or auto-encoder are used for feature extraction, whereas feature selection is performed using an ant colony optimization (ACO) algorithm. Ant colony optimization helps to search for the best optimal features while reducing the amount of data. Lastly, diagnosis prediction (classification) is achieved using learnable classifiers. The novel framework for the extraction and selection of features is based on deep learning, auto-encoder, and ACO. The performance of the proposed approach is evaluated using two medical image datasets: chest X-ray (CXR) and magnetic resonance imaging (MRI) for the prediction of the existence of COVID-19 and brain tumors. Accuracy is used as the main measure to compare the performance of the proposed approach with existing state-of-the-art methods. The proposed system achieves an average accuracy of 99.61% and 99.18%, outperforming all other methods in diagnosing the presence of COVID-19 and brain tumors, respectively. Based on the achieved results, it can be claimed that physicians or radiologists can confidently utilize the proposed approach for diagnosing COVID-19 patients and patients with specific brain tumors.
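The selection stage described above can be illustrated at toy scale. The sketch below is not the authors' implementation: the `score` callback stands in for classifier accuracy on a candidate feature subset, and every parameter value is an arbitrary illustrative choice.

```python
import random

def aco_select(n_features, score, n_ants=12, n_iters=30, subset_size=3, rho=0.1, seed=0):
    """Toy ant colony optimization for feature subset selection.

    Each "ant" samples a feature subset with probability proportional to
    per-feature pheromone; the best subset found reinforces its features,
    while evaporation (rho) decays the rest.
    """
    rng = random.Random(seed)
    pheromone = [1.0] * n_features
    best_subset, best_score = None, float("-inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            # Sample features without replacement, weighted by pheromone.
            remaining = list(range(n_features))
            subset = []
            for _ in range(subset_size):
                weights = [pheromone[f] for f in remaining]
                f = rng.choices(remaining, weights=weights)[0]
                remaining.remove(f)
                subset.append(f)
            s = score(subset)
            if s > best_score:
                best_subset, best_score = sorted(subset), s
        # Evaporate, then deposit pheromone along the best subset so far.
        pheromone = [(1 - rho) * p for p in pheromone]
        for f in best_subset:
            pheromone[f] += best_score
    return best_subset, best_score
```

In the paper's pipeline, `score` would wrap a classifier trained on CNN or auto-encoder features restricted to the candidate subset; here any higher-is-better function works.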
Affiliation(s)
- Yezi Ali Kadhim: Department of Modeling and Design of Engineering Systems (MODES), Atilim University, Ankara 06830, Turkey; Department of Electrical and Electronics Engineering, Atilim University, Incek, Ankara 06830, Turkey
- Muhammad Umer Khan: Department of Mechatronics Engineering, Atilim University, Incek, Ankara 06830, Turkey
- Alok Mishra: Department of Software Engineering, Atilim University, Incek, Ankara 06830, Turkey; Informatics and Digitalization Group, Molde University College—Specialized University in Logistics, 6410 Molde, Norway
6. Chen X, Wang X, Zhang K, Fung KM, Thai TC, Moore K, Mannel RS, Liu H, Zheng B, Qiu Y. Recent advances and clinical applications of deep learning in medical image analysis. Med Image Anal 2022;79:102444. [DOI: 10.1016/j.media.2022.102444]
7. Wagner MW, Namdar K, Biswas A, Monah S, Khalvati F, Ertl-Wagner BB. Radiomics, machine learning, and artificial intelligence: what the neuroradiologist needs to know. Neuroradiology 2021;63:1957-1967. [PMID: 34537858] [PMCID: PMC8449698] [DOI: 10.1007/s00234-021-02813-9]
Abstract
PURPOSE Artificial intelligence (AI) is playing an ever-increasing role in Neuroradiology. METHODS When designing AI-based research in neuroradiology and appreciating the literature, it is important to understand the fundamental principles of AI. Training, validation, and test datasets must be defined and set apart as priorities. External validation and testing datasets are preferable, when feasible. The specific type of learning process (supervised vs. unsupervised) and the machine learning model also require definition. Deep learning (DL) is an AI-based approach that is modelled on the structure of neurons of the brain; convolutional neural networks (CNN) are a commonly used example in neuroradiology. RESULTS Radiomics is a frequently used approach in which a multitude of imaging features are extracted from a region of interest and subsequently reduced and selected to convey diagnostic or prognostic information. Deep radiomics uses CNNs to directly extract features and obviate the need for predefined features. CONCLUSION Common limitations and pitfalls in AI-based research in neuroradiology are limited sample sizes ("small-n-large-p problem"), selection bias, as well as overfitting and underfitting.
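As a concrete illustration of the radiomics idea (predefined features extracted from a region of interest, then reduced and selected), here is a minimal first-order sketch. Real pipelines extract dozens to hundreds of shape, intensity, and texture features; the three below and the 8-bin histogram are illustrative choices only.

```python
import math

def radiomic_features(roi):
    """First-order radiomic features from a region of interest.

    `roi` is a flat list of voxel/pixel intensities inside the ROI mask.
    """
    n = len(roi)
    mean = sum(roi) / n
    variance = sum((v - mean) ** 2 for v in roi) / n
    # Shannon entropy over a coarse 8-bin intensity histogram.
    lo, hi = min(roi), max(roi)
    width = (hi - lo) / 8 or 1.0  # guard against a constant ROI
    bins = [0] * 8
    for v in roi:
        bins[min(int((v - lo) / width), 7)] += 1
    probs = [c / n for c in bins if c]
    entropy = -sum(p * math.log2(p) for p in probs)
    return {"mean": mean, "variance": variance, "entropy": entropy}
```

Deep radiomics, as the abstract notes, replaces this hand-crafted step with features learned directly by a CNN.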
Affiliation(s)
- Matthias W Wagner: Division of Neuroradiology, The Hospital for Sick Children, Toronto, Canada; Department of Medical Imaging, University of Toronto, 555 University Ave, Toronto, ON, M5G 1X8, Canada
- Khashayar Namdar: Neurosciences and Mental Health Program, SickKids Research Institute, Toronto, Canada
- Asthik Biswas: Division of Neuroradiology, The Hospital for Sick Children, Toronto, Canada; Department of Medical Imaging, University of Toronto, 555 University Ave, Toronto, ON, M5G 1X8, Canada
- Suranna Monah: Division of Neuroradiology, The Hospital for Sick Children, Toronto, Canada
- Farzad Khalvati: Neurosciences and Mental Health Program, SickKids Research Institute, Toronto, Canada; Department of Medical Imaging, University of Toronto, 555 University Ave, Toronto, ON, M5G 1X8, Canada
- Birgit B Ertl-Wagner: Division of Neuroradiology, The Hospital for Sick Children, Toronto, Canada; Department of Medical Imaging, University of Toronto, 555 University Ave, Toronto, ON, M5G 1X8, Canada
8. Çallı E, Sogancioglu E, van Ginneken B, van Leeuwen KG, Murphy K. Deep learning for chest X-ray analysis: A survey. Med Image Anal 2021;72:102125. [PMID: 34171622] [DOI: 10.1016/j.media.2021.102125]
Abstract
Recent advances in deep learning have led to a promising performance in many medical image analysis tasks. As the most commonly performed radiological exam, chest radiographs are a particularly important modality for which a variety of applications have been researched. The release of multiple, large, publicly available chest X-ray datasets in recent years has encouraged research interest and boosted the number of publications. In this paper, we review all studies using deep learning on chest radiographs published before March 2021, categorizing works by task: image-level prediction (classification and regression), segmentation, localization, image generation and domain adaptation. Detailed descriptions of all publicly available datasets are included and commercial systems in the field are described. A comprehensive discussion of the current state of the art is provided, including caveats on the use of public datasets, the requirements of clinically useful systems and gaps in the current literature.
Affiliation(s)
- Erdi Çallı: Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Ecem Sogancioglu: Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Bram van Ginneken: Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Kicky G van Leeuwen: Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
- Keelin Murphy: Radboud University Medical Center, Institute for Health Sciences, Department of Medical Imaging, Nijmegen, the Netherlands
9. Patcas R, Timofte R, Volokitin A, Agustsson E, Eliades T, Eichenberger M, Bornstein MM. Facial attractiveness of cleft patients: a direct comparison between artificial-intelligence-based scoring and conventional rater groups. Eur J Orthod 2020;41:428-433. [PMID: 30788496] [DOI: 10.1093/ejo/cjz007]
Abstract
OBJECTIVES To evaluate facial attractiveness of treated cleft patients and controls by artificial intelligence (AI) and to compare these results with panel ratings performed by laypeople, orthodontists, and oral surgeons. MATERIALS AND METHODS Frontal and profile images of 20 treated left-sided cleft patients (10 males, mean age: 20.5 years) and 10 controls (5 males, mean age: 22.1 years) were evaluated for facial attractiveness with dedicated convolutional neural networks trained on >17 million ratings for attractiveness and compared to the assessments of 15 laypeople, 14 orthodontists, and 10 oral surgeons performed on a visual analogue scale (n = 2323 scorings). RESULTS AI evaluation of cleft patients (mean score: 4.75 ± 1.27) was comparable to human ratings (laypeople: 4.24 ± 0.81, orthodontists: 4.82 ± 0.94, oral surgeons: 4.74 ± 0.83) and was not statistically different (all Ps ≥ 0.19). Facial attractiveness of controls was rated significantly higher by humans than AI (all Ps ≤ 0.02), which yielded lower scores than in cleft subjects. Variance was considerably large in all human rating groups when considering cases separately, and especially accentuated in the assessment of cleft patients (coefficient of variance-laypeople: 38.73 ± 9.64, orthodontists: 32.56 ± 8.21, oral surgeons: 42.19 ± 9.80). CONCLUSIONS AI-based results were comparable with the average scores of cleft patients seen in all three rating groups (with especially strong agreement to both professional panels) but overall lower for control cases. The variance observed in panel ratings revealed a large imprecision based on a problematic absence of unity. IMPLICATION Current panel-based evaluations of facial attractiveness suffer from dispersion-related issues and remain practically unavailable for patients. 
AI could become a helpful tool to describe facial attractiveness, but the present results indicate that important adjustments are needed on AI models, to improve the interpretation of the impact of cleft features on facial attractiveness.
Affiliation(s)
- Raphael Patcas: Clinic of Orthodontics and Pediatric Dentistry, Center of Dental Medicine, University of Zurich, Switzerland
- Radu Timofte: Computer Vision Laboratory, D-ITET, ETH Zurich, Switzerland
- Anna Volokitin: Computer Vision Laboratory, D-ITET, ETH Zurich, Switzerland
- Theodore Eliades: Clinic of Orthodontics and Pediatric Dentistry, Center of Dental Medicine, University of Zurich, Switzerland
- Martina Eichenberger: Clinic of Orthodontics and Pediatric Dentistry, Center of Dental Medicine, University of Zurich, Switzerland
- Michael Marc Bornstein: Oral and Maxillofacial Radiology, Applied Oral Sciences, Faculty of Dentistry, The University of Hong Kong, Prince Philip Dental Hospital, Hong Kong SAR, China
10. El Naqa I, Haider MA, Giger ML, Ten Haken RK. Artificial Intelligence: reshaping the practice of radiological sciences in the 21st century. Br J Radiol 2020;93:20190855. [PMID: 31965813] [DOI: 10.1259/bjr.20190855]
Abstract
Advances in computing hardware and software platforms have led to the recent resurgence in artificial intelligence (AI) touching almost every aspect of our daily lives by its capability for automating complex tasks or providing superior predictive analytics. AI applications are currently spanning many diverse fields from economics to entertainment, to manufacturing, as well as medicine. Since modern AI's inception decades ago, practitioners in radiological sciences have been pioneering its development and implementation in medicine, particularly in areas related to diagnostic imaging and therapy. In this anniversary article, we embark on a journey to reflect on the learned lessons from past AI's chequered history. We further summarize the current status of AI in radiological sciences, highlighting, with examples, its impressive achievements and effect on re-shaping the practice of medical imaging and radiotherapy in the areas of computer-aided detection, diagnosis, prognosis, and decision support. Moving beyond the commercial hype of AI into reality, we discuss the current challenges to overcome, for AI to achieve its promised hope of providing better precision healthcare for each patient while reducing cost burden on their families and the society at large.
Affiliation(s)
- Issam El Naqa: Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA
- Masoom A Haider: Department of Medical Imaging and Lunenfeld-Tanenbaum Research Institute, University of Toronto, Toronto, ON, Canada
- Randall K Ten Haken: Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, USA
11. Kawashita I. [Introduction of Tools Useful for Studies to Promote the Utilization of AI Technology]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2020;76:1289-1295. [PMID: 33342948] [DOI: 10.6009/jjrt.2020_jsrt_76.12.1289]
Affiliation(s)
- Ikuo Kawashita: Department of Clinical Radiology, Faculty of Health Sciences, Hiroshima International University
12. Lung Nodule: Imaging Features and Evaluation in the Age of Machine Learning. Curr Pulmonol Rep 2019. [DOI: 10.1007/s13665-019-00229-8]
13. A Novel Computer-Aided Diagnosis Scheme on Small Annotated Set: G2C-CAD. Biomed Res Int 2019;2019:6425963. [PMID: 31119180] [PMCID: PMC6500711] [DOI: 10.1155/2019/6425963]
Abstract
Purpose Computer-aided diagnosis (CAD) can aid in improving diagnostic level; however, the main problem currently faced by CAD is that it cannot obtain sufficient labeled samples. To solve this problem, in this study, we adopt a generative adversarial network (GAN) approach and design a semisupervised learning algorithm, named G2C-CAD. Methods From the National Cancer Institute (NCI) Lung Image Database Consortium (LIDC) dataset, we extracted four types of pulmonary nodule sign images closely related to lung cancer: noncentral calcification, lobulation, spiculation, and nonsolid/ground-glass opacity (GGO) texture, obtaining a total of 3,196 samples. In addition, we randomly selected 2,000 non-lesion image blocks as negative samples. We split the data 90% for training and 10% for testing. We designed a DCGAN generative adversarial framework and trained it on the small sample set. We also trained our designed CNN-based fuzzy Co-forest on the labeled small sample set and obtained a preliminary classifier. Then, coupled with the simulated unlabeled samples generated by the trained DCGAN, we conducted iterative semisupervised learning, which continually improved the classification performance of the fuzzy Co-forest until the termination condition was reached. Finally, we tested the fuzzy Co-forest and compared its performance with that of a C4.5 random decision forest and the G2C-CAD system without the fuzzy scheme, using ROC and confusion matrix for evaluation. Results Four different types of lung cancer-related signs were used in the classification experiment: noncentral calcification, lobulation, spiculation, and nonsolid/ground-glass opacity (GGO) texture, along with negative image samples. For these five classes, the G2C-CAD system obtained AUCs of 0.946, 0.912, 0.908, 0.887, and 0.939, respectively. The average accuracy of G2C-CAD exceeded that of the C4.5 random decision tree by 14%. 
G2C-CAD also obtained promising test results on the LISS signs dataset; its AUCs for GGO, lobulation, spiculation, pleural indentation, and negative image samples were 0.972, 0.964, 0.941, 0.967, and 0.953, respectively. Conclusion The experimental results show that G2C-CAD is an appropriate method for addressing the problem of insufficient labeled samples in the medical image analysis field. Moreover, our system can be used to establish a training sample library for CAD classification diagnosis, which is important for future medical image analysis.
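The iterative semisupervised loop described above (fit on labeled data, pseudo-label the extra samples, refit on the union) can be sketched generically. This stand-in uses a one-dimensional nearest-centroid learner in place of the paper's fuzzy Co-forest, and plain numeric lists in place of DCGAN-generated image blocks; every name here is illustrative.

```python
def fit_centroids(X, y):
    """Stand-in base learner: the per-class mean of 1-D feature values."""
    return {label: sum(x for x, t in zip(X, y) if t == label)
                   / sum(1 for t in y if t == label)
            for label in set(y)}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

def self_train(X_lab, y_lab, X_unlab, rounds=3):
    """Fit, pseudo-label the unlabeled pool, refit on the union; repeat."""
    X, y = list(X_lab), list(y_lab)
    for _ in range(rounds):
        centroids = fit_centroids(X, y)
        pseudo = [predict(centroids, x) for x in X_unlab]
        X, y = list(X_lab) + list(X_unlab), list(y_lab) + pseudo
    return fit_centroids(X, y)
```

In G2C-CAD the unlabeled pool is refreshed with generator output and the loop runs until a termination condition; here a fixed round count keeps the sketch short.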
14. What the radiologist should know about artificial intelligence - an ESR white paper. Insights Imaging 2019;10:44. [PMID: 30949865] [PMCID: PMC6449411] [DOI: 10.1186/s13244-019-0738-2]
Abstract
This paper aims to provide a review of the basis for application of AI in radiology, to discuss the immediate ethical and professional impact in radiology, and to consider possible future evolution. Even if AI does add significant value to image interpretation, there are implications outside the traditional radiology activities of lesion detection and characterisation. In radiomics, AI can foster the analysis of the features and help in the correlation with other omics data. Imaging biobanks would become a necessary infrastructure to organise and share the image data from which AI models can be trained. AI can be used as an optimising tool to assist the technologist and radiologist in choosing a personalised patient's protocol, tracking the patient's dose parameters, and providing an estimate of the radiation risks. AI can also aid the reporting workflow and help the linking between words, images, and quantitative data. Finally, AI coupled with CDS can improve the decision process and thereby optimise clinical and radiological workflow.
15. Sahiner B, Pezeshk A, Hadjiiski LM, Wang X, Drukker K, Cha KH, Summers RM, Giger ML. Deep learning in medical imaging and radiation therapy. Med Phys 2018;46:e1-e36. [PMID: 30367497] [DOI: 10.1002/mp.13264]
Abstract
The goals of this review paper on deep learning (DL) in medical imaging and radiation therapy are to (a) summarize what has been achieved to date; (b) identify common and unique challenges, and strategies that researchers have taken to address these challenges; and (c) identify some of the promising avenues for the future both in terms of applications as well as technical innovations. We introduce the general principles of DL and convolutional neural networks, survey five major areas of application of DL in medical imaging and radiation therapy, identify common themes, discuss methods for dataset expansion, and conclude by summarizing lessons learned, remaining challenges, and future directions.
Affiliation(s)
- Berkman Sahiner: DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
- Aria Pezeshk: DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
- Xiaosong Wang: Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD, 20892-1182, USA
- Karen Drukker: Department of Radiology, University of Chicago, Chicago, IL, 60637, USA
- Kenny H Cha: DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD, 20993, USA
- Ronald M Summers: Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD, 20892-1182, USA
- Maryellen L Giger: Department of Radiology, University of Chicago, Chicago, IL, 60637, USA
16
|
Fazal MI, Patel ME, Tye J, Gupta Y. The past, present and future role of artificial intelligence in imaging. Eur J Radiol 2018; 105:246-250. [DOI: 10.1016/j.ejrad.2018.06.020] [Citation(s) in RCA: 53] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2018] [Revised: 05/08/2018] [Accepted: 06/21/2018] [Indexed: 02/06/2023]
17
Rajkomar A, Lingam S, Taylor AG, Blum M, Mongan J. High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks. J Digit Imaging 2018; 30:95-101. [PMID: 27730417 PMCID: PMC5267603 DOI: 10.1007/s10278-016-9914-9] [Citation(s) in RCA: 69] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022] Open
Abstract
The study aimed to determine if computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs on 909 patients obtained between January 2013 and July 2015 at our institution were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with the highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed by using the Youden Index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB-approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100% (95% CI 99.73–100%) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images and an augmented set of radiographs is effective in highly accurate classification of chest radiograph view type and is a feasible, rapid method for high-throughput annotation.
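The cutoff-selection step this abstract describes (a binary threshold chosen via the Youden Index, J = sensitivity + specificity − 1) can be sketched as follows. This is a minimal illustration, not the study's code; the scores and labels below are hypothetical.

```python
def youden_cutoff(scores, labels):
    """Pick the score threshold maximizing Youden's J = sensitivity + specificity - 1.

    scores: classifier outputs (higher = more likely 'frontal');
    labels: 1 for frontal, 0 for lateral.
    """
    best_j, best_t = -1.0, None
    for t in sorted(set(scores)):
        # Confusion-matrix counts at threshold t (predict frontal if score >= t).
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Hypothetical validation scores: frontals score high, laterals low.
scores = [0.95, 0.9, 0.8, 0.3, 0.2, 0.1]
labels = [1, 1, 1, 0, 0, 0]
cutoff, j = youden_cutoff(scores, labels)
```

With perfectly separated scores as above, the chosen cutoff sits at the lowest positive score and J reaches its maximum of 1.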
Affiliation(s)
- Alvin Rajkomar
- Department of Medicine, Division of Hospital Medicine, University of California, San Francisco, 533 Parnassus Ave., Suite 127a, San Francisco, CA, 94143-0131, USA
- Center for Digital Health Innovation, University of California, San Francisco, San Francisco, CA, USA
- Sneha Lingam
- Center for Digital Health Innovation, University of California, San Francisco, San Francisco, CA, USA
- Andrew G Taylor
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, USA
- Michael Blum
- Center for Digital Health Innovation, University of California, San Francisco, San Francisco, CA, USA
- John Mongan
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, USA
18
Doi K, Giger ML, Nishikawa RM, Hoffmann KR, MacMahon H, Schmidt RA, Chua KG. Digital Radiography. Acta Radiol 2016. [DOI: 10.1177/028418519303400502] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
19
Pötter-Lang S, Schalekamp S, Schaefer-Prokop C, Uffmann M. [Detection of lung nodules. New opportunities in chest radiography]. Radiologe 2015; 54:455-61. [PMID: 24789046 DOI: 10.1007/s00117-013-2599-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
BACKGROUND: Chest radiography still represents the most commonly performed X-ray examination because it is readily available, requires low radiation doses, and is relatively inexpensive. However, as previously published, many initially undetected lung nodules are retrospectively visible in chest radiographs. STANDARD RADIOLOGICAL METHODS: The great improvements in detector technology, with increasing dose efficiency and improved contrast resolution, provide better image quality at reduced dose. METHODICAL INNOVATIONS: The dual-energy acquisition technique and advanced image processing methods (e.g. digital bone subtraction and temporal subtraction) reduce the anatomical background noise in chest radiography by reducing overlapping structures. Computer-aided detection (CAD) schemes increase the awareness of radiologists for suspicious areas. RESULTS: The advanced image processing methods show clear improvements for the detection of pulmonary nodules in chest radiography and strengthen the role of this method in comparison to 3D acquisition techniques, such as computed tomography (CT). ASSESSMENT: Many of these methods will probably be integrated into standard clinical practice in the near future. Digital software solutions offer advantages as they can be easily incorporated into radiology departments and are often more affordable than hardware solutions.
Affiliation(s)
- S Pötter-Lang
- Universitätsklinik für Radiologie und Nuklearmedizin, Department of Biomedical Imaging and Image-Guided Therapy, Medizinische Universität Wien, Waehringer Guertel 18-20, 1090, Wien, Österreich
20
CADe system integrated within the electronic health record. BIOMED RESEARCH INTERNATIONAL 2013; 2013:219407. [PMID: 24151586 PMCID: PMC3789292 DOI: 10.1155/2013/219407] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/26/2013] [Accepted: 08/10/2013] [Indexed: 11/18/2022]
Abstract
The latest technological advances and information support systems for clinics and hospitals produce a wide range of possibilities in the storage and retrieval of an ever-growing amount of clinical information as well as in detection and diagnosis. In this work, an Electronic Health Record (EHR) combined with a Computer Aided Detection (CADe) system for breast cancer diagnosis has been implemented. Our objective is to provide to radiologists a comprehensive working environment that facilitates the integration, the image visualization, and the use of aided tools within the EHR. For this reason, a development methodology based on hardware and software system features in addition to system requirements must be present during the whole development process. This will lead to a complete environment for displaying, editing, and reporting results not only for the patient information but also for their medical images in standardised formats such as DICOM and DICOM-SR. As a result, we obtain a CADe system which helps in detecting breast cancer using mammograms and is completely integrated into an EHR.
21
Kim N, Choi J, Yi J, Choi S, Park S, Chang Y, Seo JB. An engineering view on megatrends in radiology: digitization to quantitative tools of medicine. Korean J Radiol 2013; 14:139-53. [PMID: 23482650 PMCID: PMC3590324 DOI: 10.3348/kjr.2013.14.2.139] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2012] [Accepted: 11/08/2012] [Indexed: 01/23/2023] Open
Abstract
Within six months of the discovery of the X-ray in 1895, the technology was used to scan the interior of the human body, paving the way for many innovations in the field of medicine, including an ultrasound device in 1950, a CT scanner in 1972, and MRI in 1980. More recent decades have witnessed developments such as digital imaging using a picture archiving and communication system, computer-aided detection/diagnosis, organ-specific workstations, and molecular, functional, and quantitative imaging. Among the latest technical breakthroughs in the field of radiology are imaging genomics and robotic interventions for biopsy and theragnosis. This review provides an engineering perspective on these developments and several other megatrends in radiology.
Affiliation(s)
- Namkug Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul 138-736, Korea.
22
Suzuki K. A review of computer-aided diagnosis in thoracic and colonic imaging. Quant Imaging Med Surg 2012; 2:163-76. [PMID: 23256078 DOI: 10.3978/j.issn.2223-4292.2012.09.02] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2012] [Accepted: 09/19/2012] [Indexed: 12/24/2022]
Abstract
Medical imaging has been indispensable in medicine since the discovery of X-rays. Medical imaging offers useful information on patients' medical conditions and on the causes of their symptoms and diseases. As imaging technologies advance, a large number of medical images are produced which physicians/radiologists must interpret. Thus, computer aids are in demand and have become indispensable to physicians' decision making based on medical images. Consequently, computer-aided detection and diagnosis (CAD) has been investigated and has been an active research area in medical imaging. CAD is defined as detection and/or diagnosis made by a radiologist/physician who takes into account the computer output as a "second opinion". In CAD research, detection and diagnosis of lung and colorectal cancer in thoracic and colonic imaging constitute major areas, because lung and colorectal cancers are the leading and second leading causes, respectively, of cancer deaths in the U.S. and in other countries. In this review, CAD of the thorax and colon is covered, including CAD for the detection and diagnosis of lung nodules in thoracic CT and for the detection of polyps in CT colonography.
Affiliation(s)
- Kenji Suzuki
- Department of Radiology, The University of Chicago, 5841 South Maryland Avenue, Chicago, IL 60637, USA
23
Shiraishi J, Li Q, Appelbaum D, Doi K. Computer-Aided Diagnosis and Artificial Intelligence in Clinical Imaging. Semin Nucl Med 2011; 41:449-62. [DOI: 10.1053/j.semnuclmed.2011.06.004] [Citation(s) in RCA: 120] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023]
24
Depeursinge A, Fischer B, Müller H, Deserno TM. Prototypes for content-based image retrieval in clinical practice. Open Med Inform J 2011; 5:58-72. [PMID: 21892374 PMCID: PMC3149811 DOI: 10.2174/1874431101105010058] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2011] [Revised: 05/20/2011] [Accepted: 05/20/2011] [Indexed: 02/07/2023] Open
Abstract
Content-based image retrieval (CBIR) has been proposed as a key technology for computer-aided diagnostics (CAD). This paper reviews the state of the art and future challenges in CBIR for CAD applied to clinical practice. We define applicability to clinical practice as having recently demonstrated the CBIR system at one of the CAD demonstration workshops held at international conferences, such as SPIE Medical Imaging, CARS, SIIM, RSNA, and IEEE ISBI. From 2009 to 2011, the programs of CADdemo@CARS and the CAD Demonstration Workshop at SPIE Medical Imaging were searched for the keyword "retrieval" in the title. The systems identified were analyzed and compared according to the hierarchy of gaps for CBIR systems. In total, 70 software demonstrations were analyzed, and 5 systems meeting the criteria were identified. The fields of application are (i) bone age assessment, (ii) bone fractures, (iii) interstitial lung diseases, and (iv) mammography. The gaps of semantics, feature extraction, feature structure, and evaluation have been addressed most frequently. In specific application domains, CBIR technology is available for clinical practice. While system development has mainly focused on bridging content and feature gaps, performance and usability have become increasingly important. The evaluation must be based on a larger set of reference data, and workflow integration must be achieved before CBIR-CAD is truly established in clinical practice.
Affiliation(s)
- Adrien Depeursinge
- Business Information Systems, University of Applied Sciences Western Switzerland (HES-SO), TechnoArk 3, 3960 Sierre, Switzerland
- Service of Medical Informatics, University and University Hospitals of Geneva (HUG), Rue Gabrielle-Perret-Gentil 4, 1211 Geneva 14, Switzerland
- Benedikt Fischer
- Department of Medical Informatics, RWTH Aachen University, Pauwelsstr. 30, D-52057 Aachen, Germany
- Henning Müller
- Business Information Systems, University of Applied Sciences Western Switzerland (HES-SO), TechnoArk 3, 3960 Sierre, Switzerland
- Service of Medical Informatics, University and University Hospitals of Geneva (HUG), Rue Gabrielle-Perret-Gentil 4, 1211 Geneva 14, Switzerland
- Thomas M Deserno
- Department of Medical Informatics, RWTH Aachen University, Pauwelsstr. 30, D-52057 Aachen, Germany
25
Case-based lung image categorization and retrieval for interstitial lung diseases: clinical workflows. Int J Comput Assist Radiol Surg 2011; 7:97-110. [PMID: 21629982 DOI: 10.1007/s11548-011-0618-9] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2011] [Accepted: 05/09/2011] [Indexed: 10/18/2022]
Abstract
PURPOSE: Clinical workflows and user interfaces of image-based computer-aided diagnosis (CAD) for interstitial lung diseases in high-resolution computed tomography are introduced and discussed. METHODS: Three use cases are implemented to assist students, radiologists, and physicians in the diagnostic workup of interstitial lung diseases. RESULTS: In a first step, the proposed system shows a three-dimensional map of categorized lung tissue patterns with quantification of the diseases based on texture analysis of the lung parenchyma. Then, based on the proportions of abnormal and normal lung tissue as well as clinical data of the patients, retrieval of similar cases is enabled using a multimodal distance aggregating content-based image retrieval (CBIR) and text-based information search. The global system leads to a hybrid detection-CBIR-based CAD, where detection-based and CBIR-based CAD prove to be complementary both on the user's side and on the algorithmic side. CONCLUSIONS: The proposed approach is in accordance with the classical workflow of clinicians searching for similar cases in textbooks and personal collections. The developed system enables objective and customizable inter-case similarity assessment, and the performance measures obtained with a leave-one-patient-out cross-validation (LOPO CV) are representative of clinical usage of the system.
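The leave-one-patient-out evaluation this abstract mentions can be sketched as follows: every case from the held-out patient is excluded from the reference set before retrieving the most similar case, so retrieval never benefits from same-patient images. This is a generic illustration, not the paper's system; the toy patients, features, and labels are hypothetical.

```python
def leave_one_patient_out(cases):
    """Leave-one-patient-out CV accuracy for nearest-neighbour retrieval.

    cases: list of (patient_id, feature_vector, tissue_label) tuples.
    """
    def dist(a, b):
        # Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    correct = 0
    for pid, feat, label in cases:
        # Exclude *all* cases of the query's patient, not just the query case.
        refs = [c for c in cases if c[0] != pid]
        nearest = min(refs, key=lambda c: dist(feat, c[1]))
        correct += nearest[2] == label
    return correct / len(cases)

# Hypothetical 4-patient toy set: two tissue classes with clustered features.
cases = [
    ("p1", (0.1, 0.2), "fibrosis"),
    ("p2", (0.2, 0.1), "fibrosis"),
    ("p3", (0.9, 0.8), "emphysema"),
    ("p4", (0.8, 0.9), "emphysema"),
]
acc = leave_one_patient_out(cases)
```

On this cleanly separated toy set each held-out case retrieves a same-class neighbour, giving an accuracy of 1.0; real inter-case similarity data would of course be far noisier.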
26
Sugimoto K, Shiraishi J, Moriyasu F, Doi K. Computer-aided diagnosis for contrast-enhanced ultrasound in the liver. World J Radiol 2010; 2:215-23. [PMID: 21160633 PMCID: PMC2998841 DOI: 10.4329/wjr.v2.i6.215] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/21/2010] [Revised: 05/06/2010] [Accepted: 05/13/2010] [Indexed: 02/06/2023] Open
Abstract
Computer-aided diagnosis (CAD) has become one of the major research subjects in medical imaging and diagnostic radiology. The basic concept of CAD is to provide computer output as a second opinion to assist radiologists’ image interpretations by improving the accuracy and consistency of radiologic diagnosis and also by reducing the image-reading time. To date, research on CAD in ultrasound (US)-based diagnosis has been carried out mostly for breast lesions and has been limited in the fields of gastroenterology and hepatology, with most studies being conducted using B-mode US images. Two CAD schemes with contrast-enhanced US (CEUS) that are used in classifying focal liver lesions (FLLs) as liver metastasis, hemangioma, or three histologically differentiated types of hepatocellular carcinoma (HCC) are introduced in this article: one is based on physicians’ subjective pattern classifications (subjective analysis) and the other is a computerized scheme for classification of FLLs (quantitative analysis). Classification accuracies for FLLs for each CAD scheme were 84.8% and 88.5% for metastasis, 93.3% and 93.8% for hemangioma, and 98.6% and 86.9% for all HCCs, respectively. In addition, the classification accuracies for histologic differentiation of HCCs were 65.2% and 79.2% for well-differentiated HCCs, 41.7% and 50.0% for moderately differentiated HCCs, and 80.0% and 77.8% for poorly differentiated HCCs, respectively. There are a number of issues concerning the clinical application of CAD for CEUS; however, it is likely that CAD for CEUS of the liver will make great progress in the future.
27
Development of pulmonary blood flow evaluation method with a dynamic flat-panel detector: quantitative correlation analysis with findings on perfusion scan. Radiol Phys Technol 2009; 3:40-5. [PMID: 20821100 DOI: 10.1007/s12194-009-0074-1] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2009] [Revised: 10/20/2009] [Accepted: 10/21/2009] [Indexed: 10/20/2022]
Abstract
Pulmonary blood flow is reflected in dynamic chest radiographs as changes in X-ray translucency, i.e., pixel values. Thus, decreased blood flow should be observed as a reduction of the variation in X-ray translucency. We performed the present study to investigate the feasibility of pulmonary blood flow evaluation with a dynamic flat-panel detector (FPD). Sequential chest radiographs of 14 subjects were obtained with a dynamic FPD system. The changes in pixel value in each local area were measured and mapped on the original image by use of a gray scale in which small and large changes were shown in white and black, respectively. The resulting images were compared to the findings in perfusion scans. The cross-correlation coefficients of the changes in pixel value and the radioactivity counts in each local area were also computed. In all patients, pulmonary blood flow disorder was indicated as a reduction of changes in pixel values on the mapping image, and a correlation was observed between the distribution of changes in pixel value and that of radioactivity counts (r ≥ 0.7: 3 cases; 0.4 ≤ r < 0.7: 7 cases; 0.2 ≤ r < 0.4: 4 cases). The results indicated that the distribution of changes in pixel value could provide a relative measure related to pulmonary blood flow. The present method is potentially useful for evaluating pulmonary blood flow as an additional examination in conventional chest radiography.
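The correlation step this abstract reports, comparing per-region pixel-value variation against perfusion-scan radioactivity counts, amounts to a Pearson correlation over matched local areas. A minimal sketch (the per-region values below are hypothetical, not the study's measurements):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical matched per-region values: pixel-value variation from the
# dynamic FPD series vs. radioactivity counts from the perfusion scan.
pixel_change = [12.0, 9.5, 7.1, 3.2, 1.0]
perfusion_counts = [240.0, 200.0, 150.0, 80.0, 30.0]
r = pearson_r(pixel_change, perfusion_counts)
```

Regions with depressed pixel-value variation tracking low perfusion counts, as here, yield r close to 1; the abstract's cases spanned r ≥ 0.7 down to 0.2 ≤ r < 0.4.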
28
Pulmonary blood flow evaluation using a dynamic flat-panel detector: feasibility study with pulmonary diseases. Int J Comput Assist Radiol Surg 2009; 4:449-55. [PMID: 20033527 DOI: 10.1007/s11548-009-0364-4] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2009] [Accepted: 05/11/2009] [Indexed: 10/20/2022]
Abstract
PURPOSE: Pulmonary ventilation and circulation dynamics are reflected on fluoroscopic images as changes in X-ray translucency. The purpose of this study was to investigate the feasibility of non-contrast functional imaging using a dynamic flat-panel detector (FPD). METHODS: Dynamic chest radiographs of 20 subjects (abnormal, n = 12; normal, n = 8) were obtained using the FPD system. Image analysis was performed to obtain a qualitative perfusion mapping image: first, a focal pixel value was defined. Second, the lung area was determined and the pulmonary hilar areas were eliminated. Third, one cardiac cycle was determined in each case. Finally, total changes in pixel values during one cardiac cycle were calculated and their distributions were visualized by mapping on the original image. They were compared with the findings of lung perfusion scintigraphy. RESULTS: In all normal controls, the total change in pixel value over one cardiac cycle decreased from the hilar region to the peripheral region of the lung with left-right symmetric distribution. In contrast, in many abnormal cases, pulmonary blood flow disorder was indicated as a reduction of changes in pixel values on the mapping image. The findings of the mapping image coincided with those of lung perfusion scintigraphy. CONCLUSIONS: Dynamic chest radiography using an FPD system with computer analysis is expected to become a new type of functional imaging, which additionally provides the pulmonary blood flow distribution.
29
Computers, Conversation, Utilization, and Commoditization: The 2008 Herbert Abrams Lecture. AJR Am J Roentgenol 2009; 192:1375-81. [DOI: 10.2214/ajr.08.2063] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
30
Giger ML, Chan HP, Boone J. Anniversary paper: History and status of CAD and quantitative image analysis: the role of Medical Physics and AAPM. Med Phys 2009; 35:5799-820. [PMID: 19175137 PMCID: PMC2673617 DOI: 10.1118/1.3013555] [Citation(s) in RCA: 165] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/17/2023] Open
Abstract
The roles of physicists in medical imaging have expanded over the years, from the study of imaging systems (sources and detectors) and dose to the assessment of image quality and perception, the development of image processing techniques, and the development of image analysis methods to assist in detection and diagnosis. The latter is a natural extension of medical physicists' goals in developing imaging techniques to help physicians acquire diagnostic information and improve clinical decisions. Studies indicate that radiologists do not detect all abnormalities on images that are visible on retrospective review, and they do not always correctly characterize abnormalities that are found. Since the 1950s, the potential use of computers had been considered for analysis of radiographic abnormalities. In the mid-1980s, however, medical physicists and radiologists began major research efforts for computer-aided detection or computer-aided diagnosis (CAD), that is, using the computer output as an aid to radiologists-as opposed to a completely automatic computer interpretation-focusing initially on methods for the detection of lesions on chest radiographs and mammograms. Since then, extensive investigations of computerized image analysis for detection or diagnosis of abnormalities in a variety of 2D and 3D medical images have been conducted. The growth of CAD over the past 20 years has been tremendous-from the early days of time-consuming film digitization and CPU-intensive computations on a limited number of cases to its current status in which developed CAD approaches are evaluated rigorously on large clinically relevant databases. 
CAD research by medical physicists includes many aspects: collecting relevant normal and pathological cases; developing computer algorithms appropriate for the medical interpretation task, including those for segmentation, feature extraction, and classifier design; developing methodology for assessing CAD performance; validating the algorithms using appropriate cases to measure performance and robustness; conducting observer studies with which to evaluate radiologists in the diagnostic task without and with the use of the computer aid; and ultimately assessing performance with a clinical trial. Medical physicists also have an important role in quantitative imaging, by validating the quantitative integrity of scanners and by developing imaging techniques and image analysis tools that extract quantitative data in a more accurate and automated fashion. As imaging systems become more complex and the need for better quantitative information from images grows, the future includes combined research efforts from physicists working in CAD and those working on quantitative imaging systems to readily yield information on morphology, function, molecular structure, and more, from animal imaging research to clinical patient care. A historical review of CAD and a discussion of challenges for the future are presented here, along with the extension to quantitative image analysis.
Affiliation(s)
- Maryellen L Giger
- Department of Radiology, University of Chicago, Chicago, Illinois 60637, USA.
31
Fujita H, Uchiyama Y, Nakagawa T, Fukuoka D, Hatanaka Y, Hara T, Lee GN, Hayashi Y, Ikedo Y, Gao X, Zhou X. Computer-aided diagnosis: the emerging of three CAD systems induced by Japanese health care needs. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2008; 92:238-48. [PMID: 18514362 DOI: 10.1016/j.cmpb.2008.04.003] [Citation(s) in RCA: 28] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/28/2007] [Revised: 03/24/2008] [Accepted: 04/15/2008] [Indexed: 05/16/2023]
Abstract
The aim of this paper is to describe three emerging computer-aided diagnosis (CAD) systems induced by Japanese health care needs. CAD has developed rapidly over the last two decades. The idea of using a computer to help in medical image diagnosis is not new; some pioneering studies date back to the 1960s. In 1998, the first U.S. FDA (Food and Drug Administration)-approved commercial CAD system, a film-digitized mammography system, was launched by R2 Technologies, Inc. The success was quickly repeated by a number of companies, and the approval of Medicare CAD reimbursement in the U.S. in 2001 further boosted the industry. Today, CAD has significance in the economy of the medical industry, with FDA-approved CAD products in the fields of breast imaging (mammography, ultrasonography and breast MRI) and chest imaging (radiography and CT). In Japan, as part of the government's "Knowledge Cluster Initiative", three CAD projects have been hosted at Gifu University since 2004. These projects concern the development of CAD systems for the early detection of (1) cerebrovascular diseases, using brain MRI and MRA images to detect lacunar infarcts, unruptured aneurysms, and arterial occlusions; (2) ocular diseases such as glaucoma, diabetic retinopathy, and hypertensive retinopathy, using retinal fundus images; and (3) breast cancers, using ultrasound 3-D volumetric whole-breast data to detect breast masses. The projects are entering their final development stage; preliminary results are presented in this paper. Clinical examinations will start soon, and commercialized CAD systems for the above subjects will appear by the completion of this project.
Affiliation(s)
- Hiroshi Fujita
- Department of Intelligent Image Information, Division of Regeneration and Advanced Medical Sciences, Graduate School of Medicine, Gifu University, Gifu 501-1194, Japan
32
Doi K. Computer-aided diagnosis in medical imaging: historical review, current status and future potential. Comput Med Imaging Graph 2007; 31:198-211. [PMID: 17349778 PMCID: PMC1955762 DOI: 10.1016/j.compmedimag.2007.02.002] [Citation(s) in RCA: 699] [Impact Index Per Article: 41.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Computer-aided diagnosis (CAD) has become one of the major research subjects in medical imaging and diagnostic radiology. In this article, the motivation and philosophy for early development of CAD schemes are presented together with the current status and future potential of CAD in a PACS environment. With CAD, radiologists use the computer output as a "second opinion" and make the final decisions. CAD is a concept established by taking into account equally the roles of physicians and computers, whereas automated computer diagnosis is a concept based on computer algorithms only. With CAD, the performance by computers does not have to be comparable to or better than that by physicians, but needs to be complementary to that by physicians. In fact, a large number of CAD systems have been employed for assisting physicians in the early detection of breast cancers on mammograms. A CAD scheme that makes use of lateral chest images has the potential to improve the overall performance in the detection of lung nodules when combined with another CAD scheme for PA chest images. Because vertebral fractures can be detected reliably by computer on lateral chest radiographs, radiologists' accuracy in the detection of vertebral fractures would be improved by the use of CAD, and thus early diagnosis of osteoporosis would become possible. In MRA, a CAD system has been developed for assisting radiologists in the detection of intracranial aneurysms. On successive bone scan images, a CAD scheme for detection of interval changes has been developed by use of temporal subtraction images. In the future, many CAD schemes could be assembled as packages and implemented as a part of PACS. 
For example, the package for chest CAD may include the computerized detection of lung nodules, interstitial opacities, cardiomegaly, vertebral fractures, and interval changes in chest radiographs as well as the computerized classification of benign and malignant nodules and the differential diagnosis of interstitial lung diseases. In order to assist in the differential diagnosis, it would be possible to search for and retrieve images (or lesions) with known pathology, which would be very similar to a new unknown case, from PACS when a reliable and useful method has been developed for quantifying the similarity of a pair of images for visual comparison by radiologists.
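The temporal-subtraction idea this abstract uses for interval-change detection on successive bone scan images can be sketched as a pixel-wise difference of two acquisitions. This is a generic illustration, not the paper's scheme; the 3x3 images and the "new opacity" are hypothetical, and real systems first register the two images so that unchanged anatomy cancels.

```python
def temporal_subtraction(previous, current):
    """Pixel-wise temporal subtraction: current image minus previous image.

    Interval changes (e.g. a new lesion) appear as nonzero values; anatomy
    present in both acquisitions cancels to zero. Spatial registration of the
    two images is assumed to have been done beforehand.
    """
    return [
        [c - p for p, c in zip(prev_row, cur_row)]
        for prev_row, cur_row in zip(previous, current)
    ]

# Hypothetical 3x3 images: a new focal opacity appears at the centre pixel.
prev_img = [[10, 10, 10],
            [10, 10, 10],
            [10, 10, 10]]
cur_img  = [[10, 10, 10],
            [10, 60, 10],
            [10, 10, 10]]
diff = temporal_subtraction(prev_img, cur_img)
```

In the subtraction image only the centre pixel is nonzero, which is exactly the interval change a reader's attention should be drawn to.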
Affiliation(s)
- Kunio Doi
- Kurt Rossmann Laboratories for Radiologic Image Research, Department of Radiology, The University of Chicago, 5841 South Maryland Avenue, Chicago, IL 60637, USA.
33
Doi K. Current status and future potential of computer-aided diagnosis in medical imaging. Br J Radiol 2005; 78 Spec No 1:S3-S19. [PMID: 15917443 DOI: 10.1259/bjr/82933343] [Citation(s) in RCA: 154] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022] Open
Abstract
Computer-aided diagnosis (CAD) has become one of the major research subjects in medical imaging and diagnostic radiology. The basic concept of CAD is to provide a computer output as a second opinion to assist radiologists' image interpretation by improving the accuracy and consistency of radiological diagnosis and also by reducing the image reading time. In this article, a number of CAD schemes are presented, with emphasis on potential clinical applications. These schemes include: (1) detection and classification of lung nodules on digital chest radiographs; (2) detection of nodules in low dose CT; (3) distinction between benign and malignant nodules on high resolution CT; (4) usefulness of similar images for distinction between benign and malignant lesions; (5) quantitative analysis of diffuse lung diseases on high resolution CT; and (6) detection of intracranial aneurysms in magnetic resonance angiography. Because CAD can be applied to all imaging modalities, all body parts and all kinds of examinations, it is likely that CAD will have a major impact on medical imaging and diagnostic radiology in the 21st century.
Affiliation(s)
- K Doi
- Kurt Rossmann Laboratories for Radiologic Image Research, Department of Radiology, The University of Chicago, 5841 South Maryland, MC 2026, Chicago, IL 60637, USA
|
34
|
Abstract
Computer-aided diagnosis (CAD) has become a practical clinical approach in diagnostic radiology, although at present only in the area of detection of breast cancer in mammograms. Current research efforts have been focused on detection and classification of images of many different types of lesions in a number of organs, obtained with various imaging modalities. It is likely that the present results of CAD are only at the tip of the iceberg. Although automated computer diagnosis is a concept based on computer algorithms only, CAD is a concept established by taking into account equally the roles of physicians and computers. The effect of CAD on differential diagnosis has already indicated that the performance level is high, and that CAD would be ready for clinical trials and commercialization efforts. The presentation of images similar to those of an unknown case may be useful as a supplemental tool for CAD in the differential diagnosis.
Affiliation(s)
- Kunio Doi
- Kurt Rossmann Laboratories for Radiologic Image Research, Department of Radiology, The University of Chicago, Chicago, Illinois 60637, USA.
|
35
|
Fujita H. [Studies on computer-aided diagnosis (CAD) from the past to the future]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2003; 59:1327-37. [PMID: 14983111 DOI: 10.6009/jjrt.kj00003174055] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.0] [Indexed: 04/29/2023]
Affiliation(s)
- Hiroshi Fujita
- Department of Intelligent Image Information, Division of Regeneration and Advanced Medical Science, Graduate School of Medicine, Gifu University
|
36
|
Stein MA, Winter J. Theory development in medical decision-making. Int J Biomed Comput 1974; 5:147-59. [PMID: 4602304 DOI: 10.1016/0020-7101(74)90016-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.1] [Indexed: 01/11/2023]
|
37
|
|
38
|
Eaves GN. Image processing in the biomedical sciences. Comput Biomed Res 1967; 1:112-23. [PMID: 5602831 DOI: 10.1016/0010-4809(67)90010-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.1] [Indexed: 01/15/2023]
|
39
|
Lodwick GS, Turner AH, Lusted LB, Templeton AW. Computer-aided analysis of radiographic images. J Chronic Dis 1966; 19:485-96. [PMID: 4895057 DOI: 10.1016/0021-9681(66)90122-6] [Citation(s) in RCA: 26] [Impact Index Per Article: 0.4] [Indexed: 01/12/2023]
|
40
|
|