1
Shao M, Byrd DW, Mitra J, Behnia F, Lee JH, Iravani A, Sadic M, Chen DL, Wollenweber SD, Abbey CK, Kinahan PE, Ahn S. A deep learning anthropomorphic model observer for a detection task in PET. Med Phys 2024; 51:7093-7107. [PMID: 39008812] [DOI: 10.1002/mp.17303]
Abstract
BACKGROUND Lesion detection is one of the most important clinical tasks in positron emission tomography (PET) for oncology. An anthropomorphic model observer (MO) designed to replicate human observers (HOs) in a detection task is an important tool for assessing task-based image quality. The channelized Hotelling observer (CHO) has been the most popular anthropomorphic MO. Recently, deep learning MOs (DLMOs), mostly based on convolutional neural networks (CNNs), have been investigated for various imaging modalities. However, there have been few studies on DLMOs for PET. PURPOSE The goal of the study is to investigate whether DLMOs can predict HOs better than conventional MOs such as CHO in a two-alternative forced-choice (2AFC) detection task using PET images with real anatomical variability. METHODS Two types of DLMOs were implemented: (1) CNN DLMO, and (2) CNN-SwinT DLMO, which combines CNN and Swin Transformer (SwinT) encoders. Lesion-absent PET images were reconstructed from clinical data, and lesion-present images were reconstructed by adding simulated lesion sinogram data. Lesion-present and lesion-absent PET image pairs were labeled by eight HOs, consisting of four radiologists and four image scientists, in a 2AFC detection task. In total, 2268 pairs of lesion-present and lesion-absent images were used for training, 324 pairs for validation, and 324 pairs for testing. CNN DLMO, CNN-SwinT DLMO, CHO with internal noise, and the non-prewhitening matched filter (NPWMF) were compared in the same train-test paradigm. For comparison, six quantitative metrics, including prediction accuracy, mean squared errors (MSEs), and correlation coefficients, which measure how well an MO predicts HOs, were calculated in a 9-fold cross-validation experiment. RESULTS In terms of the accuracy and MSE metrics, CNN DLMO and CNN-SwinT DLMO showed better performance than CHO and NPWMF, and CNN-SwinT DLMO showed the best performance among the MOs evaluated.
CONCLUSIONS DLMO can predict HOs more accurately than conventional MOs such as CHO in PET lesion detection. Combining SwinT and CNN encoders can improve the DLMO prediction performance compared to using CNN only.
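As background for the baseline observer above: a channelized Hotelling observer projects each image onto a small set of channels and applies a Hotelling (prewhitened) template in channel space; 2AFC accuracy is then the fraction of image pairs where the lesion-present image scores higher. The sketch below is a generic illustration with toy data and random channels, not the paper's implementation (which uses anthropomorphic channels, internal noise, and real PET images):

```python
import numpy as np

rng = np.random.default_rng(0)

def cho_template(present, absent, channels):
    """Channelized Hotelling observer: project training images through the
    channels, then form the Hotelling template in channel space."""
    vp = present @ channels                      # channel outputs, lesion-present
    va = absent @ channels                       # channel outputs, lesion-absent
    s = vp.mean(axis=0) - va.mean(axis=0)        # mean signal in channel space
    k = 0.5 * (np.cov(vp.T) + np.cov(va.T))      # pooled channel covariance
    return np.linalg.solve(k, s)                 # Hotelling template w = K^-1 s

def two_afc_accuracy(present, absent, channels, w):
    """Fraction of 2AFC trials where the lesion-present image scores higher."""
    t_present = (present @ channels) @ w
    t_absent = (absent @ channels) @ w
    return float(np.mean(t_present > t_absent))

# toy data: 64-pixel "images"; the lesion adds a small bump to 8 pixels
n_img, n_pix, n_ch = 500, 64, 8
absent = rng.normal(size=(n_img, n_pix))
signal = np.zeros(n_pix)
signal[28:36] = 1.0
present = rng.normal(size=(n_img, n_pix)) + signal
channels = rng.normal(size=(n_pix, n_ch))        # stand-in for anthropomorphic channels

w = cho_template(present[:400], absent[:400], channels)
acc = two_afc_accuracy(present[400:], absent[400:], channels, w)
```

With a real signal present, the template should perform well above the 0.5 chance level on held-out pairs.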
Affiliation(s)
- Muhan Shao
- GE HealthCare Technology and Innovation Center, Niskayuna, New York, USA
- Darrin W Byrd
- Department of Radiology, University of Washington, Seattle, Washington, USA
- Jhimli Mitra
- GE HealthCare Technology and Innovation Center, Niskayuna, New York, USA
- Fatemeh Behnia
- Department of Radiology, University of Washington, Seattle, Washington, USA
- Jean H Lee
- Department of Radiology, University of Washington, Seattle, Washington, USA
- Amir Iravani
- Department of Radiology, University of Washington, Seattle, Washington, USA
- Murat Sadic
- Department of Radiology, University of Washington, Seattle, Washington, USA
- Delphine L Chen
- Department of Radiology, University of Washington, Seattle, Washington, USA
- Craig K Abbey
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, California, USA
- Paul E Kinahan
- Department of Radiology, University of Washington, Seattle, Washington, USA
- Sangtae Ahn
- GE HealthCare Technology and Innovation Center, Niskayuna, New York, USA
2
Rahman MF, Tseng TL(B), Pokojovy M, McCaffrey P, Walser E, Moen S, Vo A, Ho JC. Machine-Learning-Enabled Diagnostics with Improved Visualization of Disease Lesions in Chest X-ray Images. Diagnostics (Basel) 2024; 14:1699. [PMID: 39202188] [PMCID: PMC11353848] [DOI: 10.3390/diagnostics14161699]
Abstract
The class activation map (CAM) represents the neural-network-derived region of interest, which can help clarify how a convolutional neural network arrives at its determination for any class of interest. In medical imaging, it can help medical practitioners diagnose diseases like COVID-19 or pneumonia by highlighting the suspicious regions in computed tomography (CT) or chest X-ray (CXR) images. Many contemporary deep learning techniques only focus on COVID-19 classification tasks using CXRs, while few attempt to make the results explainable with a saliency map. To fill this research gap, we first propose a VGG-16-architecture-based deep learning approach in combination with image enhancement, segmentation-based region of interest (ROI) cropping, and data augmentation steps to enhance classification accuracy. Later, a multi-layer Gradient CAM (ML-Grad-CAM) algorithm is integrated to generate a class-specific saliency map for improved visualization in CXR images. We also define and calculate a Severity Assessment Index (SAI) from the saliency map to quantitatively measure infection severity. The trained model achieved an accuracy score of 96.44% for the three-class CXR classification task, i.e., COVID-19, pneumonia, and normal (healthy patients), outperforming many existing techniques in the literature. The saliency maps generated from the proposed ML-Grad-CAM algorithm are compared with those of the original Grad-CAM algorithm.
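For orientation, the Grad-CAM family of saliency maps that this work builds on weights each feature map of the last convolutional layer by the spatially averaged gradient of the class score, sums over channels, and applies a ReLU. A framework-agnostic sketch on toy arrays (not the paper's multi-layer ML-Grad-CAM or its VGG-16 pipeline):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from the last conv layer's activations and the
    gradients of the target class score w.r.t. those activations.
    Both arguments are (channels, H, W) arrays."""
    weights = gradients.mean(axis=(1, 2))               # alpha_k: global-average-pooled gradients
    cam = np.tensordot(weights, feature_maps, axes=1)   # weighted sum over channels -> (H, W)
    cam = np.maximum(cam, 0.0)                          # ReLU: keep positively contributing regions
    if cam.max() > 0:
        cam /= cam.max()                                # normalize to [0, 1] for display
    return cam

rng = np.random.default_rng(1)
activations = rng.random((16, 7, 7))      # toy last-layer feature maps
grads = rng.normal(size=(16, 7, 7))       # toy gradients of the class score
heatmap = grad_cam(activations, grads)
```

In practice the heatmap is upsampled to the input resolution and overlaid on the CXR; a severity index like the paper's SAI could then be derived from the highlighted area within a lung ROI.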
Affiliation(s)
- Md Fashiar Rahman
- Department of Industrial, Manufacturing and Systems Engineering, The University of Texas, El Paso, TX 79968, USA
- Tzu-Liang (Bill) Tseng
- Department of Industrial, Manufacturing and Systems Engineering, The University of Texas, El Paso, TX 79968, USA
- Michael Pokojovy
- Department of Mathematics and Statistics, Old Dominion University, Norfolk, VA 23529, USA
- Peter McCaffrey
- Department of Radiology, The University of Texas Medical Branch, Galveston, TX 77550, USA
- Eric Walser
- Department of Radiology, The University of Texas Medical Branch, Galveston, TX 77550, USA
- Scott Moen
- Department of Radiology, The University of Texas Medical Branch, Galveston, TX 77550, USA
- Alex Vo
- Department of Radiology, The University of Texas Medical Branch, Galveston, TX 77550, USA
- Johnny C. Ho
- Department of Management and Marketing, Turner College of Business, Columbus State University, Columbus, GA 31907, USA
3
Artesani A, Bruno A, Gelardi F, Chiti A. Empowering PET: harnessing deep learning for improved clinical insight. Eur Radiol Exp 2024; 8:17. [PMID: 38321340] [PMCID: PMC10847083] [DOI: 10.1186/s41747-023-00413-1]
Abstract
This review aims to take a journey into the transformative impact of artificial intelligence (AI) on positron emission tomography (PET) imaging. To this end, a broad overview of AI applications in the field of nuclear medicine and a thorough exploration of deep learning (DL) implementations in cancer diagnosis and therapy through PET imaging will be presented. We first describe the behind-the-scenes use of AI for image generation, including acquisition (event positioning, noise reduction through time-of-flight estimation, and scatter correction), reconstruction (data-driven and model-driven approaches), restoration (supervised and unsupervised methods), and motion correction. Thereafter, we outline the integration of AI into clinical practice through applications to segmentation, detection and classification, quantification, treatment planning, dosimetry, and radiomics/radiogenomics combined with tumour biological characteristics. Thus, this review seeks to showcase the overarching transformation of the field, ultimately leading to tangible improvements in patient treatment and response assessment. Finally, limitations and ethical considerations of applying AI to PET imaging and future directions of multimodal data mining in this discipline will be briefly discussed, including pressing challenges to the adoption of AI in molecular imaging, such as access to, and interoperability of, huge amounts of data, as well as the "black-box" problem, contributing to the ongoing dialogue on the transformative potential of AI in nuclear medicine.
Relevance statement: AI is rapidly revolutionising the world of medicine, including the fields of radiology and nuclear medicine. In the near future, AI will be used to support healthcare professionals.
These advances will lead to improvements in diagnosis, in the assessment of response to treatment, in clinical decision making and in patient management.
Key points:
- Applying AI has the potential to enhance the entire PET imaging pipeline.
- AI may support several clinical tasks in both PET diagnosis and prognosis.
- Interpreting the relationships between imaging and multiomics data will heavily rely on AI.
Affiliation(s)
- Alessia Artesani
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Milan, Pieve Emanuele, 20090, Italy
- Alessandro Bruno
- Department of Business, Law, Economics and Consumer Behaviour "Carlo A. Ricciardi", IULM Libera Università Di Lingue E Comunicazione, Via P. Filargo 38, Milan, 20143, Italy
- Fabrizia Gelardi
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Milan, Pieve Emanuele, 20090, Italy
- Vita-Salute San Raffaele University, Via Olgettina 58, Milan, 20132, Italy
- Arturo Chiti
- Vita-Salute San Raffaele University, Via Olgettina 58, Milan, 20132, Italy
- Department of Nuclear Medicine, IRCCS Ospedale San Raffaele, Via Olgettina 60, Milan, 20132, Italy
4
Shi J, Bendig D, Vollmar HC, Rasche P. Mapping the Bibliometrics Landscape of AI in Medicine: Methodological Study. J Med Internet Res 2023; 25:e45815. [PMID: 38064255] [PMCID: PMC10746970] [DOI: 10.2196/45815]
Abstract
BACKGROUND Artificial intelligence (AI), conceived in the 1950s, has permeated numerous industries, intensifying in tandem with advancements in computing power. Despite the widespread adoption of AI, its integration into medicine trails other sectors. However, medical AI research has experienced substantial growth, attracting considerable attention from researchers and practitioners. OBJECTIVE In the absence of an existing framework, this study aims to outline the current landscape of medical AI research and provide insights into its future developments by examining all AI-related studies within PubMed over the past 2 decades. We also propose potential data acquisition and analysis methods, developed using Python (version 3.11) and to be executed in Spyder IDE (version 5.4.3), for future analogous research. METHODS Our dual-pronged approach involved (1) retrieving publication metadata related to AI from PubMed (spanning 2000-2022) via Python, including titles, abstracts, authors, journals, countries, and publishing years, followed by keyword frequency analysis, and (2) classifying relevant topics using latent Dirichlet allocation, an unsupervised machine learning approach, and defining the research scope of AI in medicine. In the absence of a universal medical AI taxonomy, we used an AI dictionary based on the European Commission Joint Research Centre AI Watch report, which emphasizes 8 domains: reasoning, planning, learning, perception, communication, integration and interaction, service, and AI ethics and philosophy. RESULTS From 2000 to 2022, a comprehensive analysis of 307,701 AI-related publications from PubMed highlighted a 36-fold increase. The United States emerged as a clear frontrunner, producing 68,502 of these articles. Despite its substantial contribution in terms of volume, China lagged in terms of citation impact. Among the specific AI domains categorized in the Joint Research Centre AI Watch report, the learning domain emerged as dominant.
Our classification analysis meticulously traced the nuanced research trajectories across each domain, revealing the multifaceted and evolving nature of AI's application in the realm of medicine. CONCLUSIONS The research topics have evolved as the volume of AI studies increases annually. Machine learning remains central to medical AI research, with deep learning expected to maintain its fundamental role. Empowered by predictive algorithms, pattern recognition, and imaging analysis capabilities, the future of AI research in medicine is anticipated to concentrate on medical diagnosis, robotic intervention, and disease management. Our topic modeling outcomes provide a clear insight into the focus of AI research in medicine over the past decades and lay the groundwork for predicting future directions. The domains that have attracted considerable research attention, primarily the learning domain, will continue to shape the trajectory of AI in medicine. Given the observed growing interest, the domain of AI ethics and philosophy also stands out as a prospective area of increased focus.
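The first prong of the method, keyword frequency analysis over PubMed metadata, amounts to tokenizing the retrieved fields and counting terms. A minimal Python stand-in (the records and stopword list here are invented for illustration; the study's actual pipeline also includes LDA topic modeling):

```python
import re
from collections import Counter

STOPWORDS = {"the", "of", "in", "and", "for", "a", "an", "with", "to"}

def keyword_frequencies(records, fields=("title", "abstract")):
    """Count word frequencies across selected metadata fields of all records."""
    counts = Counter()
    for rec in records:
        for field in fields:
            words = re.findall(r"[a-z]+", rec.get(field, "").lower())
            counts.update(w for w in words if w not in STOPWORDS)
    return counts

# hypothetical PubMed-style records
records = [
    {"title": "Deep learning for medical imaging",
     "abstract": "Deep learning improves diagnosis."},
    {"title": "Machine learning in radiology",
     "abstract": "Learning from imaging data."},
]
top = keyword_frequencies(records).most_common(3)
```

On these toy records the top keyword is "learning"; on a real corpus the same counter, fed from an E-utilities download, yields the frequency tables the study describes.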
Affiliation(s)
- Jin Shi
- Institute for Entrepreneurship, University of Münster, Münster, Germany
- David Bendig
- Institute for Entrepreneurship, University of Münster, Münster, Germany
- Peter Rasche
- Department of Healthcare, University of Applied Science - Hochschule Niederrhein, Krefeld, Germany
5
Solomon J, Bender S, Durgempudi P, Robar C, Cocchiaro M, Turner S, Watson C, Healy J, Spake A, Szlosek D. Diagnostic validation of vertebral heart score machine learning algorithm for canine lateral chest radiographs. J Small Anim Pract 2023; 64:769-775. [PMID: 37622992] [DOI: 10.1111/jsap.13666]
Abstract
OBJECTIVES The vertebral heart score is a measurement used to index heart size relative to the thoracic vertebrae. The vertebral heart score can be a useful tool for identifying and staging heart disease and providing prognostic information. The purpose of this study is to validate the use of a vertebral heart score algorithm compared to manual vertebral heart scoring by three board-certified veterinary cardiologists. MATERIALS AND METHODS A convolutional neural network centred around semantic segmentation of relevant anatomical features was developed to predict heart size and vertebral bodies. These predictions were used to calculate the vertebral heart score. An external validation set of 1200 canine lateral radiographs was randomly selected to match the underlying distribution of vertebral heart scores. Three American College of Veterinary Internal Medicine board-certified cardiologists were enrolled to manually score 400 images each using the traditional Buchanan method. After scoring, the cardiologists evaluated the algorithm for misaligned anatomic landmarks and overall image quality. RESULTS The 95th percentile absolute difference between the cardiologist vertebral heart score and the algorithm vertebral heart score was 1.05 vertebrae (95% confidence interval: 0.97 to 1.20 vertebrae) with a mean bias of -0.09 vertebrae (95% confidence interval: -0.12 to -0.05 vertebrae). In addition, the model was observed to be well calibrated across the predictive range. CLINICAL SIGNIFICANCE We found the performance of the vertebral heart score algorithm comparable to that of three board-certified cardiologists. While validation of this vertebral heart score algorithm has shown strong performance compared to veterinarians, further external validation in other clinical settings is warranted before use in those settings.
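For context, the Buchanan vertebral heart score that both the algorithm and the cardiologists produce is a simple measurement: the cardiac long and short axes are each re-expressed in vertebral-body lengths, counted caudally from T4, and the two counts are summed. A sketch of that arithmetic with made-up measurements (the study's algorithm derives the inputs from a segmentation CNN):

```python
def vertebral_heart_score(long_axis_mm, short_axis_mm, vertebral_lengths_mm):
    """Buchanan vertebral heart score: express each cardiac axis in
    vertebral-body units starting at T4, then sum the two counts."""
    def axis_in_vertebrae(axis_mm):
        count, remaining = 0.0, axis_mm
        for v in vertebral_lengths_mm:       # T4, T5, ... lengths in order
            if remaining >= v:
                count += 1.0
                remaining -= v
            else:
                count += remaining / v       # fractional last vertebra
                break
        return count
    return axis_in_vertebrae(long_axis_mm) + axis_in_vertebrae(short_axis_mm)

# hypothetical measurements: uniform 10 mm vertebral bodies for simplicity
vhs = vertebral_heart_score(57.0, 45.0, [10.0] * 8)
```

Here 57 mm spans 5.7 vertebrae and 45 mm spans 4.5, giving a score of 10.2; the study's bias and 95th-percentile differences are expressed in these same vertebral units.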
Affiliation(s)
- J Solomon
- IDEXX Laboratories, Inc., Westbrook, ME, USA
- S Bender
- IDEXX Laboratories, Inc., Westbrook, ME, USA
- C Robar
- IDEXX Laboratories, Inc., Westbrook, ME, USA
- M Cocchiaro
- IDEXX Laboratories, Inc., Westbrook, ME, USA
- S Turner
- IDEXX Laboratories, Inc., Westbrook, ME, USA
- C Watson
- IDEXX Laboratories, Inc., Westbrook, ME, USA
- J Healy
- IDEXX Laboratories, Inc., Westbrook, ME, USA
- A Spake
- IDEXX Laboratories, Inc., Westbrook, ME, USA
- D Szlosek
- IDEXX Laboratories, Inc., Westbrook, ME, USA
6
Ackerson BG, Sperduto W, D'Anna R, Niedzwiecki D, Christensen J, Patel P, Mullikin TC, Kelsey CR. Divergent Interpretations of Imaging After Stereotactic Body Radiation Therapy for Lung Cancer. Pract Radiat Oncol 2023; 13:e126-e133. [PMID: 36375770] [DOI: 10.1016/j.prro.2022.09.006]
Abstract
PURPOSE Conflicting information from health care providers contributes to anxiety among cancer patients. The purpose of this study was to investigate discordant interpretations of follow-up imaging studies after lung stereotactic body radiation therapy (SBRT) between radiologists and radiation oncologists. METHODS AND MATERIALS Patients treated with SBRT for stage I non-small cell lung cancer from 2007 to 2018 at Duke University Medical Center were included. Radiology interpretations of follow-up computed tomography (CT) chest or positron emission tomography (PET)/CT scans and the corresponding radiation oncology interpretations in follow-up notes from the medical record were assessed. Based on language used, interpretations were scored as concerning for progression (Progression), neutral differential listed (Neutral Differential), or favor stability/postradiation changes (Stable). Neutral Differential required that malignancy was specifically listed as a possibility in the differential. Encounters were categorized as discordant when either radiology or radiation oncology interpreted the surveillance imaging as Progression when the other interpreted the imaging study as Stable or Neutral Differential. The incidence of discordant interpretations was the primary endpoint of the study. RESULTS From 2007 to 2018, 139 patients were treated with SBRT and had available follow-up CT or PET-CT imaging for the analysis. Median follow-up was 61 months and the median number of follow-up encounters per patient was 3. Of 534 encounters evaluated, 25 (4.7%) had overtly discordant interpretations of imaging studies. This most commonly arose when radiology felt the imaging study showed Progression but radiation oncology favored Stable or Neutral Differential (24/25, 96%). No patient or treatment variables were found to be significantly associated with discordant interpretations on univariate analysis including type of scan (CT 22/489, 4.5%; PET-CT 3/45, 7%; P = .46). 
CONCLUSIONS Surveillance imaging after lung SBRT is often interpreted differently by radiologists and radiation oncologists, but overt discordance was relatively low at our institution. Providers should be aware of differences in interpretation patterns that may contribute to increased patient distress.
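The study's discordance rule can be stated programmatically: an encounter is discordant exactly when one specialty reads Progression and the other reads Stable or Neutral Differential. A small sketch using the abstract's categories (the example encounters are invented):

```python
CATEGORIES = {"Progression", "Neutral Differential", "Stable"}

def is_discordant(radiology, rad_onc):
    """Discordant when exactly one of the two interpretations is 'Progression'."""
    assert radiology in CATEGORIES and rad_onc in CATEGORIES
    return (radiology == "Progression") != (rad_onc == "Progression")

encounters = [
    ("Progression", "Stable"),            # discordant
    ("Progression", "Progression"),       # concordant
    ("Neutral Differential", "Stable"),   # concordant: neither reads Progression
    ("Stable", "Progression"),            # discordant
]
rate = sum(is_discordant(r, o) for r, o in encounters) / len(encounters)
```

Applied to the study's 534 encounters, this rule yields the reported 4.7% discordance rate.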
- Bradley G Ackerson
- Departments of Radiation Oncology, Duke University Medical Center, Durham, North Carolina.
- William Sperduto
- Departments of Radiation Oncology, Duke University Medical Center, Durham, North Carolina
- Rachel D'Anna
- Biostatistics and Bioinformatics, Duke University Medical Center, Durham, North Carolina
- Donna Niedzwiecki
- Biostatistics and Bioinformatics, Duke University Medical Center, Durham, North Carolina
- Jared Christensen
- Division of Cardiothoracic Imaging, Department of Radiology, Duke University Medical Center, Durham, North Carolina
- Pranalee Patel
- Departments of Radiation Oncology, Duke University Medical Center, Durham, North Carolina
- Trey C Mullikin
- Departments of Radiation Oncology, Duke University Medical Center, Durham, North Carolina
- Chris R Kelsey
- Departments of Radiation Oncology, Duke University Medical Center, Durham, North Carolina
7
Goyal V, Read AT, Brown DM, Brawer L, Bateh K, Hannon BG, Feola AJ, Ethier CR. Morphometric Analysis of Retinal Ganglion Cell Axons in Normal and Glaucomatous Brown Norway Rat Optic Nerves. Transl Vis Sci Technol 2023; 12:8. [PMID: 36917118] [PMCID: PMC10020949] [DOI: 10.1167/tvst.12.3.8]
Abstract
Purpose A reference atlas of optic nerve (ON) retinal ganglion cell (RGC) axons could facilitate studies of neuro-ophthalmic diseases by detecting subtle RGC axonal changes. Here we construct an RGC axonal atlas for normotensive eyes in Brown Norway rats, widely used in glaucoma research, and also develop/evaluate several novel metrics of axonal damage in hypertensive eyes. Methods Light micrographs of entire ON cross-sections from hypertensive and normotensive eyes were processed through a deep learning-based algorithm, AxoNet2.0, to determine axonal morphological properties and were semiquantitatively scored using the Morrison grading scale (MGS) to provide a damage score independent of AxoNet2.0 outcomes. To construct atlases, ONs were conformally mapped onto an ON "template," and axonal morphometric data was computed for each region. We also developed damage metrics based on myelin morphometry. Results In normotensive eyes, average axon density was ∼0.3 axons/µm2 (i.e., ∼80,000 axons in an ON). We measured axoplasm diameter, eccentricity, cross-sectional area, and myelin g-ratio and thickness. Most morphological parameters exhibited a wide range of coefficients of variation (CoV); however, myelin thickness CoV was only ∼2% in normotensive eyes. In hypertensive eyes, increased myelin thickness correlated strongly with MGS (P < 0.0001). Conclusions We present the first comprehensive normative RGC axon morphometric atlas for Brown Norway rat eyes. We suggest objective, repeatable damage metrics based on RGC axon myelin thickness for hypertensive eyes. Translational Relevance These tools can evaluate regional changes in RGCs and overall levels of damage in glaucoma studies using Brown Norway rats.
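Two of the morphometric quantities reported above reduce to simple formulas once inner (axoplasm) and outer (fiber) diameters are available: the myelin g-ratio and thickness, plus the coefficient of variation (CoV) used to compare parameters. A generic sketch with toy values (not AxoNet2.0 output):

```python
import statistics

def g_ratio(axon_diameter, fiber_diameter):
    """Myelin g-ratio: inner (axoplasm) diameter over outer (fiber) diameter."""
    return axon_diameter / fiber_diameter

def myelin_thickness(axon_diameter, fiber_diameter):
    """Radial myelin thickness from inner and outer diameters."""
    return (fiber_diameter - axon_diameter) / 2.0

def coefficient_of_variation(values):
    """CoV = sample standard deviation / mean, as a percentage."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# hypothetical per-axon measurements in micrometres
axon_d = [0.8, 1.0, 1.2, 0.9]
fiber_d = [1.2, 1.4, 1.6, 1.5]
g = [g_ratio(a, f) for a, f in zip(axon_d, fiber_d)]
thickness = [myelin_thickness(a, f) for a, f in zip(axon_d, fiber_d)]
cov_g = coefficient_of_variation(g)
```

The study's observation that myelin thickness has a CoV of only ~2% in normotensive eyes is what makes it attractive as a sensitive, repeatable damage metric.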
Affiliation(s)
- Vidisha Goyal
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- A. Thomas Read
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Dillon M. Brown
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Luke Brawer
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Kaitlyn Bateh
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Bailey G. Hannon
- George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- Andrew J. Feola
- Center for Visual and Neurocognitive Rehabilitation, Atlanta VA Healthcare System, Decatur, GA, USA
- Department of Ophthalmology, Emory University, Atlanta, GA, USA
- C. Ross Ethier
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA, USA
- Department of Ophthalmology, Emory University, Atlanta, GA, USA
8
Artificial Intelligence (AI) in Breast Imaging: A Scientometric Umbrella Review. Diagnostics (Basel) 2022; 12:3111. [PMID: 36553119] [PMCID: PMC9777253] [DOI: 10.3390/diagnostics12123111]
Abstract
Artificial intelligence (AI), a rousing advancement disrupting a wide spectrum of applications with remarkable betterment, has continued to gain momentum over the past decades. Within breast imaging, AI, especially machine learning and deep learning, honed with unlimited cross-data/case referencing, has found great utility encompassing four facets: screening and detection, diagnosis, disease monitoring, and data management as a whole. Over the years, breast cancer has been the apex of the cancer cumulative risk ranking for women across the six continents, existing in variegated forms and offering a complicated context in medical decisions. Realizing the ever-increasing demand for quality healthcare, contemporary AI has been envisioned to make great strides in clinical data management and perception, with the capability to detect indeterminate significance, predict prognostication, and correlate available data into a meaningful clinical endpoint. Here, the authors captured the review works over the past decades, focusing on AI in breast imaging, and systematized the included works into one usable document, which is termed an umbrella review. The present study aims to provide a panoramic view of how AI is poised to enhance breast imaging procedures. Evidence-based scientometric analysis was performed in accordance with the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guideline, resulting in 71 included review works. This study aims to synthesize, collate, and correlate the included review works, thereby identifying the patterns, trends, quality, and types of the included works, captured by the structured search strategy. The present study is intended to serve as a "one-stop center" synthesis and provide a holistic bird's eye view to readers, ranging from newcomers to existing researchers and relevant stakeholders, on the topic of interest.
9
Hornung AL, Hornung CM, Mallow GM, Barajas JN, Rush A, Sayari AJ, Galbusera F, Wilke HJ, Colman M, Phillips FM, An HS, Samartzis D. Artificial intelligence in spine care: current applications and future utility. Eur Spine J 2022; 31:2057-2081. [PMID: 35347425] [DOI: 10.1007/s00586-022-07176-0]
Abstract
PURPOSE The field of artificial intelligence is ever growing, and the applications of machine learning in spine care are continuously advancing. Given the advent of the intelligence-based spine care model, understanding the evolution of computation as it applies to diagnosis, treatment, and adverse event prediction is of great importance. Therefore, the current review sought to synthesize findings from the literature at the interface of artificial intelligence and spine research. METHODS A narrative review was performed based on the literature of three databases (MEDLINE, CINAHL, and Scopus) from January 2015 to March 2021 that examined historical and recent advancements in the understanding of artificial intelligence and machine learning in spine research. Studies were appraised for their role in, or description of, advancements within image recognition and predictive modeling for spinal research. Only English articles that fulfilled the inclusion criteria were ultimately incorporated in this review. RESULTS This review briefly summarizes the history and applications of artificial intelligence and machine learning in spine. Three basic machine learning training paradigms are also discussed: supervised learning, unsupervised learning, and reinforcement learning. Artificial intelligence and machine learning have been utilized in almost every facet of spine care, ranging from localization and segmentation techniques in spinal imaging to pathology-specific algorithms, including but not limited to preoperative risk assessment of postoperative complications, screening algorithms for patients at risk of osteoporosis, and clustering analysis to identify subgroups within adolescent idiopathic scoliosis. The future of artificial intelligence and machine learning in spine surgery is also discussed, focusing on novel algorithms, data collection techniques, and increased utilization of automated systems.
CONCLUSION Improvements to modern-day computing and accessibility to various imaging modalities allow for innovative discoveries that may arise, for example, from management. Given the imminent future of AI in spine surgery, it is of great importance that practitioners continue to inform themselves regarding AI, its goals, use, and progression. In the future, it will be critical for the spine specialist to be able to discern the utility of novel AI research, particularly as it continues to pervade facets of everyday spine surgery.
Affiliation(s)
- Alexander L Hornung
- Department of Orthopaedic Surgery, Rush University Medical Center, Chicago, IL, USA
- G Michael Mallow
- Department of Orthopaedic Surgery, Rush University Medical Center, Chicago, IL, USA
- J Nicolás Barajas
- Department of Orthopaedic Surgery, Rush University Medical Center, Chicago, IL, USA
- Augustus Rush
- Department of Orthopaedic Surgery, Rush University Medical Center, Chicago, IL, USA
- Arash J Sayari
- Department of Orthopaedic Surgery, Rush University Medical Center, Chicago, IL, USA
- Hans-Joachim Wilke
- Institute of Orthopaedic Research and Biomechanics, Trauma Research Center Ulm, Ulm University, Ulm, Germany
- Matthew Colman
- Department of Orthopaedic Surgery, Rush University Medical Center, Chicago, IL, USA
- Frank M Phillips
- Department of Orthopaedic Surgery, Rush University Medical Center, Chicago, IL, USA
- Howard S An
- Department of Orthopaedic Surgery, Rush University Medical Center, Chicago, IL, USA
- Dino Samartzis
- Department of Orthopaedic Surgery, Rush University Medical Center, Chicago, IL, USA
10
Yun HR, Huh CW, Jung DH, Lee G, Son NH, Kim JH, Youn YH, Park JC, Shin SK, Lee SK, Lee YC. Machine Learning Improves the Prediction Rate of Non-Curative Resection of Endoscopic Submucosal Dissection in Patients with Early Gastric Cancer. Cancers (Basel) 2022; 14:3742. [PMID: 35954406] [PMCID: PMC9367410] [DOI: 10.3390/cancers14153742]
Abstract
Non-curative resection (NCR) of early gastric cancer (EGC) after endoscopic submucosal dissection (ESD) can increase the burden of additional treatment and medical expenses. We aimed to develop a machine learning (ML)-based NCR prediction model for EGC prior to ESD. We obtained data from 4927 patients with EGC who underwent ESD between January 2006 and February 2020. Ten clinicopathological characteristics were selected using extreme gradient boosting (XGBoost) and were used to develop the ML-based model. The dataset was divided into training and internal validation sets, and the models were verified using an external validation set. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) were evaluated. The performance of each model was compared using the DeLong test. A total of 1100 (22.1%) patients were identified as having been treated non-curatively with ESD. Seven ML-based NCR prediction models were developed. Prediction performance was highest for the XGBoost model (AUROC, 0.851; 95% confidence interval, 0.837-0.864). When prediction performance was compared using the DeLong test, the XGBoost (p = 0.02) and support vector machine (p = 0.02) models showed significantly higher performance than the other NCR prediction models. We developed an ML model capable of accurately predicting the NCR of EGC before ESD. This ML model can provide useful information for decision-making regarding the appropriate treatment of EGC before ESD.
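The AUROC metric reported above can be computed directly from a model's scores without any ML library; a minimal sketch using the Mann-Whitney interpretation of AUROC (the labels and scores below are invented for illustration, not from the study):

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case receives a higher score than a
    randomly chosen negative case (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = non-curative resection (NCR), 0 = curative resection.
labels = [1, 1, 1, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.2, 0.1]
print(round(auroc(labels, scores), 3))  # 14 of 15 pairwise comparisons won
```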
Affiliation(s)
- Hae-Ryong Yun
- Department of Internal Medicine, Yongin Severance Hospital, Yonsei University College of Medicine, Seoul 16995, Korea
- Cheal Wung Huh
- Department of Internal Medicine, Yongin Severance Hospital, Yonsei University College of Medicine, Seoul 16995, Korea
- Correspondence: (C.W.H.); (D.H.J.)
- Da Hyun Jung
- Department of Internal Medicine, Severance Hospital, Yonsei University College of Medicine, Seoul 16995, Korea
- Correspondence: (C.W.H.); (D.H.J.)
- Gyubok Lee
- Graduate School of AI, College of Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Korea
- Nak-Hoon Son
- Department of Statistics, Keimyung University, Daegu 42601, Korea
- Jie-Hyun Kim
- Department of Internal Medicine, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul 16995, Korea
- Young Hoon Youn
- Department of Internal Medicine, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul 16995, Korea
- Jun Chul Park
- Department of Internal Medicine, Severance Hospital, Yonsei University College of Medicine, Seoul 16995, Korea
- Sung Kwan Shin
- Department of Internal Medicine, Severance Hospital, Yonsei University College of Medicine, Seoul 16995, Korea
- Sang Kil Lee
- Department of Internal Medicine, Severance Hospital, Yonsei University College of Medicine, Seoul 16995, Korea
- Yong Chan Lee
- Department of Internal Medicine, Severance Hospital, Yonsei University College of Medicine, Seoul 16995, Korea
11
Rane RP, de Man EF, Kim J, Görgen K, Tschorn M, Rapp MA, Banaschewski T, Bokde ALW, Desrivieres S, Flor H, Grigis A, Garavan H, Gowland PA, Brühl R, Martinot JL, Martinot MLP, Artiges E, Nees F, Papadopoulos Orfanos D, Lemaitre H, Paus T, Poustka L, Fröhner J, Robinson L, Smolka MN, Winterer J, Whelan R, Schumann G, Walter H, Heinz A, Ritter K. Structural differences in adolescent brains can predict alcohol misuse. eLife 2022; 11:e77545. [PMID: 35616520 PMCID: PMC9255959 DOI: 10.7554/elife.77545] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2022] [Accepted: 05/25/2022] [Indexed: 12/02/2022] Open
Abstract
Alcohol misuse during adolescence (AAM) has been associated with disruptive development of adolescent brains. In this longitudinal machine learning (ML) study, we predicted AAM significantly from brain structure (T1-weighted imaging and DTI) with accuracies of 73-78% in the IMAGEN dataset (n ≈ 1182). Our results not only show that structural differences in the brain can predict AAM, but also suggest that such differences might precede AAM behavior in the data. We predicted 10 phenotypes of AAM at age 22 using brain MRI features at ages 14, 19, and 22. Binge drinking was found to be the most predictable phenotype. The most informative brain features were located in the ventricular CSF and in white matter tracts of the corpus callosum, internal capsule, and brain stem. In the cortex, they were spread across the occipital, frontal, and temporal lobes and the cingulate cortex. We also experimented with four different ML models and several confound control techniques. Support vector machines (SVMs) with an RBF kernel and gradient boosting consistently performed better than the linear models, linear SVM and logistic regression. Our study also demonstrates that the choice of predicted phenotype, ML model, and confound correction technique are all crucial decisions in an exploratory ML study analyzing psychiatric disorders with small effect sizes such as AAM.
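The model comparison described above (kernel SVM and boosting versus linear models) can be sketched with scikit-learn; the data here are synthetic stand-ins with an invented nonlinear signal, not IMAGEN features, and the study's confound-control steps are omitted:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic stand-in for structural MRI features: 200 subjects x 50 features,
# with a weak nonlinear rule separating the two classes.
X = rng.normal(size=(200, 50))
y = ((X[:, 0] ** 2 + X[:, 1] ** 2) > 2.0).astype(int)

models = {
    "svm_rbf": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "svm_linear": make_pipeline(StandardScaler(), SVC(kernel="linear")),
    "logreg": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
}
for name, model in models.items():
    # Balanced accuracy guards against the class imbalance in y.
    acc = cross_val_score(model, X, y, cv=5, scoring="balanced_accuracy").mean()
    print(f"{name}: {acc:.2f}")
```

With a genuinely nonlinear class boundary, the RBF-kernel SVM is the kind of model one would expect to pull ahead of the linear baselines, which is the pattern the study reports.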
Affiliation(s)
- Roshan Prakash Rane
- Charité – Universitätsmedizin Berlin (corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Department of Psychiatry and Psychotherapy, Bernstein Center for Computational Neuroscience, Berlin, Germany
- Evert Ferdinand de Man
- Faculty IV – Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- JiHoon Kim
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Kai Görgen
- Charité – Universitätsmedizin Berlin (corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Department of Psychiatry and Psychotherapy, Bernstein Center for Computational Neuroscience, Berlin, Germany
- Science of Intelligence, Research Cluster of Excellence, Berlin, Germany
- Mira Tschorn
- Social and Preventive Medicine, Department of Sports and Health Sciences, Intra-faculty unit "Cognitive Sciences", Faculty of Human Science, and Faculty of Health Sciences Brandenburg, Research Area Services Research and e-Health, University of Potsdam, Potsdam, Germany
- Michael A Rapp
- Social and Preventive Medicine, Department of Sports and Health Sciences, Intra-faculty unit "Cognitive Sciences", Faculty of Human Science, and Faculty of Health Sciences Brandenburg, Research Area Services Research and e-Health, University of Potsdam, Potsdam, Germany
- Tobias Banaschewski
- Department of Child and Adolescent Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Arun LW Bokde
- Discipline of Psychiatry, School of Medicine and Trinity College Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Sylvane Desrivieres
- Centre for Population Neuroscience and Precision Medicine (PONS), Institute of Psychiatry, Psychology & Neuroscience, SGDP Centre, King's College London, London, United Kingdom
- Herta Flor
- Institute of Cognitive and Clinical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Heidelberg, Germany
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Hugh Garavan
- Departments of Psychiatry and Psychology, University of Vermont, Burlington, United States
- Penny A Gowland
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, Nottingham, United Kingdom
- Jean-Luc Martinot
- Institut National de la Santé et de la Recherche Médicale, INSERM U A10 "Trajectoires développementales en psychiatrie", Université Paris-Saclay, Ecole Normale Supérieure Paris-Saclay, CNRS, Centre Borelli, Gif-sur-Yvette, France
- Marie-Laure Paillere Martinot
- Institut National de la Santé et de la Recherche Médicale, INSERM U A10 "Trajectoires développementales en psychiatrie", Université Paris-Saclay, Ecole Normale Supérieure Paris-Saclay, CNRS, Centre Borelli, Gif-sur-Yvette, France
- AP-HP Sorbonne Université, Department of Child and Adolescent Psychiatry, Pitié-Salpêtrière Hospital, Paris, France
- Eric Artiges
- Institut National de la Santé et de la Recherche Médicale, INSERM U A10 "Trajectoires développementales en psychiatrie", Université Paris-Saclay, Ecole Normale Supérieure Paris-Saclay, CNRS, Centre Borelli, Gif-sur-Yvette, France
- Psychiatry Department, EPS Barthélémy Durand, Etampes, France
- Frauke Nees
- Department of Child and Adolescent Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Institute of Cognitive and Clinical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Heidelberg, Germany
- PONS Research Group, Dept of Psychiatry and Psychotherapy, Campus Charite Mitte, Humboldt University, Berlin, Germany
- Herve Lemaitre
- NeuroSpin, CEA, Université Paris-Saclay, Paris, France
- Institut des Maladies Neurodégénératives, UMR 5293, CNRS, CEA, University of Bordeaux, Bordeaux, France
- Tomas Paus
- Department of Psychiatry, Faculty of Medicine and Centre Hospitalier Universitaire Sainte-Justine, University of Montreal, Montreal, Canada
- Departments of Psychiatry and Psychology, University of Toronto, Toronto, Canada
- Luise Poustka
- Department of Child and Adolescent Psychiatry and Psychotherapy, University Medical Centre Göttingen, Göttingen, Germany
- Juliane Fröhner
- Department of Psychiatry and Neuroimaging Center, Technische Universität Dresden, Dresden, Germany
- Lauren Robinson
- Department of Psychological Medicine, Section for Eating Disorders, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, United Kingdom
- Michael N Smolka
- Department of Psychiatry and Neuroimaging Center, Technische Universität Dresden, Dresden, Germany
- Jeanne Winterer
- Charité – Universitätsmedizin Berlin (corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Department of Psychiatry and Psychotherapy, Bernstein Center for Computational Neuroscience, Berlin, Germany
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Robert Whelan
- School of Psychology and Global Brain Health Institute, Trinity College Dublin, Dublin, Ireland
- Gunter Schumann
- PONS Research Group, Dept of Psychiatry and Psychotherapy, Campus Charite Mitte, Humboldt University, Berlin, Germany
- Henrik Walter
- Charité – Universitätsmedizin Berlin (corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Department of Psychiatry and Psychotherapy, Bernstein Center for Computational Neuroscience, Berlin, Germany
- Andreas Heinz
- Charité – Universitätsmedizin Berlin (corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Department of Psychiatry and Psychotherapy, Bernstein Center for Computational Neuroscience, Berlin, Germany
- Kerstin Ritter
- Charité – Universitätsmedizin Berlin (corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Department of Psychiatry and Psychotherapy, Bernstein Center for Computational Neuroscience, Berlin, Germany
12
Peng W, Chen S, Kong D, Zhou X, Lu X, Chang C. Grade classification of human glioma using a convolutional neural network based on mid-infrared spectroscopy mapping. JOURNAL OF BIOPHOTONICS 2022; 15:e202100313. [PMID: 34931464 DOI: 10.1002/jbio.202100313] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/09/2021] [Revised: 11/15/2021] [Accepted: 12/17/2021] [Indexed: 06/14/2023]
Abstract
This study proposes a convolutional neural network (CNN)-based computer-aided diagnosis (CAD) system for the grade classification of human glioma using mid-infrared (MIR) spectroscopic mappings. Through data augmentation by pixel recombination, the number of mappings in the training set increased almost 161-fold relative to the original mappings. The pixels of the recombined mappings in the training set came from the one-dimensional (1D) vibrational spectroscopy of 62 patients (almost 80% of all 77 patients) at specific bands. Compared with the performance of the CNN-CAD system based on 1D vibrational spectroscopy, the recombined MIR spectroscopic mappings at the 2917 cm-1, 1539 cm-1, and 1234 cm-1 peaks achieved higher mean diagnostic accuracy on the test set, and the model showed more stable patterns. This research demonstrates that two-dimensional MIR mapping at a single frequency can be used by the CNN-CAD system for diagnosis, and it suggests that the mapping collection process could be replaced by a single-frequency IR imaging system, which is cheaper and more portable than a Fourier-transform infrared microscope and thus may be widely adopted in hospitals to provide meaningful assistance to pathologists in clinics.
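The pixel-recombination augmentation described above can be sketched as drawing each pixel of a synthetic mapping at random from a pool of same-class spectra; this is an illustrative simplification under assumed details (the pool values, mapping size, and sampling scheme below are invented, not the paper's exact procedure):

```python
import random

def recombine_mapping(spectra_pool, height, width, seed=None):
    """Build a synthetic 2-D 'mapping' whose pixels are intensity values
    drawn at random (with replacement) from a pool of same-class,
    single-band spectral intensities."""
    rng = random.Random(seed)
    return [[rng.choice(spectra_pool) for _ in range(width)]
            for _ in range(height)]

# Toy pool: per-pixel intensities at a single band (e.g. near 1539 cm-1),
# all taken from patients of the same glioma grade.
pool = [0.21, 0.35, 0.18, 0.42, 0.30]
mapping = recombine_mapping(pool, height=4, width=4, seed=1)
print(len(mapping), len(mapping[0]))
```

Varying the seed yields many distinct synthetic mappings from the same pool, which is how a small spectral dataset can be expanded by two orders of magnitude for CNN training.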
Affiliation(s)
- Wenyu Peng
- Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science, Xi'an Jiaotong University, Xi'an, China
- Innovation Laboratory of Terahertz Biophysics, National Innovation Institute of Defense Technology, Beijing, China
- Shuo Chen
- Innovation Laboratory of Terahertz Biophysics, National Innovation Institute of Defense Technology, Beijing, China
- Dongsheng Kong
- Department of Neurosurgery, Chinese People's Liberation Army (PLA) General Hospital, Beijing, China
- Xiaojie Zhou
- National Facility for Protein Science in Shanghai, Shanghai Advanced Research Institute, Chinese Academy of Science, Shanghai, China
- Xiaoyun Lu
- Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science, Xi'an Jiaotong University, Xi'an, China
- Chao Chang
- Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science, Xi'an Jiaotong University, Xi'an, China
- Innovation Laboratory of Terahertz Biophysics, National Innovation Institute of Defense Technology, Beijing, China
13
López-García D, Peñalver JMG, Górriz JM, Ruz M. MVPAlab: A machine learning decoding toolbox for multidimensional electroencephalography data. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 214:106549. [PMID: 34910975 DOI: 10.1016/j.cmpb.2021.106549] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/28/2021] [Revised: 10/30/2021] [Accepted: 11/17/2021] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE The study of brain function has recently expanded from classical univariate to multivariate analyses. These multivariate, machine learning-based algorithms allow neuroscientists to extract more detailed and richer information from the data. However, implementing these procedures is usually challenging, especially for researchers with no coding experience. To address this problem, we have developed MVPAlab, a MATLAB-based, flexible decoding toolbox for multidimensional electroencephalography and magnetoencephalography data. METHODS The MVPAlab Toolbox implements several machine learning algorithms to compute multivariate pattern analyses, cross-classification, temporal generalization matrices, and feature and frequency contribution analyses. It also provides an extensive set of preprocessing routines for, among others, data normalization, data smoothing, dimensionality reduction, and supertrial generation. To draw statistical inferences at the group level, MVPAlab includes a non-parametric cluster-based permutation approach. RESULTS A sample electroencephalography dataset was compiled to test all of MVPAlab's main functionalities. Significant clusters (p < 0.01) were found for the proposed decoding analyses and different configurations, demonstrating the software's capability to discriminate between different experimental conditions. CONCLUSIONS This toolbox includes an easy-to-use and intuitive graphical user interface and data representation software, which makes MVPAlab a very convenient tool for users with little or no previous coding experience. In addition, MVPAlab is not only for beginners, as it implements several high- and low-level routines allowing more experienced users to design their own projects in a highly flexible manner.
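MVPAlab itself is MATLAB-based, but the core idea behind its non-parametric group-level inference can be sketched in Python as a sign-flipping permutation test on decoding accuracies, a simplified single-time-point analogue of the cluster-based approach (the accuracy values below are invented for illustration):

```python
import random

def sign_flip_permutation_test(values, chance=0.5, n_perm=10000, seed=0):
    """One-sample permutation test: are decoding accuracies above chance?
    Under H0 the deviations from chance are symmetric around 0, so we
    randomly flip their signs and compare the permuted mean deviations
    against the observed one."""
    rng = random.Random(seed)
    devs = [v - chance for v in values]
    observed = sum(devs) / len(devs)
    hits = 0
    for _ in range(n_perm):
        perm = sum(d * rng.choice((-1, 1)) for d in devs) / len(devs)
        if perm >= observed:
            hits += 1
    # Add-one correction keeps the p-value strictly positive.
    return (hits + 1) / (n_perm + 1)

# Per-subject decoding accuracies clearly above the 0.5 chance level.
accuracies = [0.61, 0.58, 0.66, 0.59, 0.63, 0.57, 0.62, 0.60]
p = sign_flip_permutation_test(accuracies)
print(p)
```

The full cluster-based variant additionally sums statistics over contiguous time points before permuting, but the null-distribution logic is the same.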
Affiliation(s)
- José M G Peñalver
- Mind, Brain and Behavior Research Center, University of Granada, Spain
- Juan M Górriz
- Data Science & Computational Intelligence Institute, University of Granada, Spain
- María Ruz
- Mind, Brain and Behavior Research Center, Department of Experimental Psychology, University of Granada, Spain
14
Artificial Intelligence in Diagnostic Radiology: Where Do We Stand, Challenges, and Opportunities. J Comput Assist Tomogr 2022; 46:78-90. [PMID: 35027520 DOI: 10.1097/rct.0000000000001247] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
ABSTRACT Artificial intelligence (AI) is the most revolutionary development in the health care industry in the current decade, with diagnostic imaging having the greatest share in such development. Machine learning and deep learning (DL) are subclasses of AI that show breakthrough performance in image analysis. They have become the state of the art in the field of image classification and recognition. Machine learning deals with the extraction of important characteristic features from images, whereas DL uses neural networks to solve such problems with better performance. In this review, we discuss the current applications of machine learning and DL in the field of diagnostic radiology. Deep learning applications can be divided into medical imaging analysis and applications beyond analysis. In the field of medical imaging analysis, deep convolutional neural networks are used for image classification, lesion detection, and segmentation. Recurrent neural networks are used to extract information from electronic medical records and to augment convolutional neural networks in image classification. Generative adversarial networks have been used to generate high-resolution computed tomography and magnetic resonance images and to map computed tomography images to the corresponding magnetic resonance images. Beyond image analysis, DL can be used for quality control, workflow organization, and reporting. In this article, we review the most current AI models used in medical imaging research, providing a brief explanation of the various models described in the literature within the past 5 years. Emphasis is placed on the various DL models, as they are the most state of the art in imaging analysis.
15
Evaluation of a Novel Content-Based Image Retrieval System for the Differentiation of Interstitial Lung Diseases in CT Examinations. Diagnostics (Basel) 2021; 11:diagnostics11112114. [PMID: 34829461 PMCID: PMC8624384 DOI: 10.3390/diagnostics11112114] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Revised: 11/05/2021] [Accepted: 11/11/2021] [Indexed: 11/17/2022] Open
Abstract
We evaluated readers' diagnostic performance against the ground truth with and without the help of a novel content-based image retrieval (CBIR) system that retrieves images with similar CT patterns from a database of 79 different interstitial lung diseases. Three novice readers and three resident physicians (with at least three years of experience) read 50 different CTs featuring 10 different patterns (e.g., honeycombing, tree-in-bud, ground glass, bronchiectasis) and 24 different diseases (sarcoidosis, UIP, NSIP, aspergillosis, COVID-19 pneumonia, etc.). The participants first read the cases without assistance (and without feedback regarding correctness) and, after a 2-month interval, reread them in random order with the assistance of the novel CBIR. To invoke the CBIR, the reader places an ROI on the pathologic pattern and the system retrieves diseases with similar patterns. To further narrow the differential diagnosis, readers can consult an integrated textbook and select high-level semantic features representing clinical information (chronic, infectious, smoking status, etc.). We analyzed readers' accuracy without and with CBIR assistance and further tested the hypothesis that the CBIR would improve diagnostic performance using the Wilcoxon signed-rank test. The novice readers demonstrated unassisted accuracies of 18/28/44% and assisted accuracies of 84/82/90%, respectively. The resident physicians demonstrated unassisted accuracies of 56/56/70% and assisted accuracies of 94/90/96%, respectively. For each reader, as well as overall, the sign test demonstrated a statistically significant (p < 0.01) difference between the unassisted and assisted reads. For students and physicians, the chi-squared and Mann-Whitney U tests demonstrated a statistically significant (p < 0.01) difference for unassisted reads and no statistically significant (p > 0.01) difference for assisted reads.
The evaluated CBIR, which relies on pattern analysis and allows filtering of its results by predominant disease characteristics via high-level semantic features, helped to drastically improve the accuracy of both novices and resident physicians in diagnosing interstitial lung diseases on CT.
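The sign test used for the per-reader comparisons is simple enough to compute directly with an exact two-sided binomial tail; a minimal sketch (the counts below are invented for illustration, not the study's data):

```python
from math import comb

def sign_test_p(n_pos, n_neg):
    """Two-sided exact sign test. Under H0 each non-tied pair is equally
    likely to favor either condition, so the smaller count follows a
    Binomial(n, 0.5) tail that we double for a two-sided p-value."""
    n = n_pos + n_neg
    k = min(n_pos, n_neg)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Illustrative: a reader improves on 30 of 33 non-tied cases with CBIR.
p = sign_test_p(n_pos=30, n_neg=3)
print(f"p = {p:.2e}")
```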
16
Ueda D, Yamamoto A, Shimazaki A, Walston SL, Matsumoto T, Izumi N, Tsukioka T, Komatsu H, Inoue H, Kabata D, Nishiyama N, Miki Y. Artificial intelligence-supported lung cancer detection by multi-institutional readers with multi-vendor chest radiographs: a retrospective clinical validation study. BMC Cancer 2021; 21:1120. [PMID: 34663260 PMCID: PMC8524996 DOI: 10.1186/s12885-021-08847-9] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Accepted: 10/07/2021] [Indexed: 12/24/2022] Open
Abstract
BACKGROUND We investigated the performance improvement of physicians with varying levels of chest radiology experience when using a commercially available artificial intelligence (AI)-based computer-assisted detection (CAD) software to detect lung cancer nodules on chest radiographs from multiple vendors. METHODS Chest radiographs and their corresponding chest CT were retrospectively collected from one institution between July 2017 and June 2018. Two author radiologists annotated pathologically proven lung cancer nodules on the chest radiographs while referencing CT. Eighteen readers (nine general physicians and nine radiologists) from nine institutions interpreted the chest radiographs. The readers interpreted the radiographs alone and then reinterpreted them referencing the CAD output. Suspected nodules were enclosed with a bounding box. These bounding boxes were judged correct if there was significant overlap with the ground truth, specifically, if the intersection over union was 0.3 or higher. The sensitivity, specificity, accuracy, PPV, and NPV of the readers' assessments were calculated. RESULTS In total, 312 chest radiographs were collected as a test dataset, including 59 malignant images (59 nodules of lung cancer) and 253 normal images. The model provided a modest boost to the reader's sensitivity, particularly helping general physicians. The performance of general physicians was improved from 0.47 to 0.60 for sensitivity, from 0.96 to 0.97 for specificity, from 0.87 to 0.90 for accuracy, from 0.75 to 0.82 for PPV, and from 0.89 to 0.91 for NPV while the performance of radiologists was improved from 0.51 to 0.60 for sensitivity, from 0.96 to 0.96 for specificity, from 0.87 to 0.90 for accuracy, from 0.76 to 0.80 for PPV, and from 0.89 to 0.91 for NPV. 
The overall ratios of improvement in sensitivity, specificity, accuracy, PPV, and NPV with the CAD were 1.22 (1.14-1.30), 1.00 (1.00-1.01), 1.03 (1.02-1.04), 1.07 (1.03-1.11), and 1.02 (1.01-1.03), respectively. CONCLUSION The AI-based CAD improved physicians' ability to detect lung cancer nodules on chest radiographs. A CAD model can flag regions physicians may have overlooked during their initial assessment.
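The 0.3 intersection-over-union criterion used above to score reader bounding boxes is straightforward to implement; a minimal sketch (the example boxes are illustrative, not from the study):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

THRESHOLD = 0.3  # the study's criterion for a correct detection

reader_box = (10, 10, 50, 50)  # reader's bounding box
truth_box = (20, 20, 60, 60)   # ground-truth lesion box
print(iou(reader_box, truth_box) >= THRESHOLD)  # True: IoU ~ 0.39
```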
Affiliation(s)
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Akira Yamamoto
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Akitoshi Shimazaki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Shannon Leigh Walston
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Toshimasa Matsumoto
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Nobuhiro Izumi
- Department of Surgery, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Takuma Tsukioka
- Department of Surgery, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Hiroaki Komatsu
- Department of Surgery, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Hidetoshi Inoue
- Department of Surgery, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Daijiro Kabata
- Department of Medical Statistics, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Noritoshi Nishiyama
- Department of Surgery, Graduate School of Medicine, Osaka City University, Osaka, Japan
- Yukio Miki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, Osaka, Japan
17
Patra A, Premkumar M, Keshava SN, Chandramohan A, Joseph E, Gibikote S. Radiology Reporting Errors: Learning from Report Addenda. Indian J Radiol Imaging 2021; 31:333-344. [PMID: 34556916 PMCID: PMC8448237 DOI: 10.1055/s-0041-1734351] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022] Open
Abstract
Background The addition of new information to a completed radiology report in the form of an "addendum" conveys a variety of information, ranging from less significant typographical errors to serious omissions and misinterpretations. Understanding the reasons for errors and their clinical implications will lead to better clinical governance and radiology practice. Aims This article assesses the common reasons that lead to addenda to completed reports and their clinical implications. Subjects and Methods A retrospective study was conducted by reviewing addenda to computed tomography (CT), ultrasound, and magnetic resonance imaging reports between January 2018 and June 2018, to note the frequency and classification of report addenda. Results The rate of addenda generation was 1.1% (n = 1,076) among the 97,003 approved cross-sectional radiology reports. Errors contributed to 71.2% (n = 767) of addenda, most commonly communication (29.3%, n = 316) and observational errors (20.8%, n = 224); the remaining 28.7% were non-errors aimed at providing additional clinically relevant information. The majority of addenda (82.3%, n = 886) did not have a significant clinical impact. CT and ultrasound reports accounted for 36.9% (n = 398) and 35.2% (n = 379), respectively. A time gap of 1 to 7 days was noted for 46.8% (n = 504) of addenda, and 37.6% (n = 405) were issued in less than a day. Radiologists with more than 6 years of experience created the majority (1.5%, n = 456) of addenda. Addenda to reports generated during emergency hours contributed 23.2% (n = 250). Conclusion The study identified the prevalence of report addenda in a radiology practice using a picture archiving and communication system in a tertiary care center in India. The etiology included both errors and non-errors. Results of this audit were used to generate a checklist and put protocols in place to help decrease serious radiology misses and common errors.
Affiliation(s)
- Anurima Patra
- Department of Radiology, Christian Medical College, Vellore, Tamil Nadu, India
- Elizabeth Joseph
- Department of Radiology, Christian Medical College, Vellore, Tamil Nadu, India
- Sridhar Gibikote
- Department of Radiology, Christian Medical College, Vellore, Tamil Nadu, India
18
Intelligent Disease Prediagnosis Only Based on Symptoms. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:9963576. [PMID: 34381587 PMCID: PMC8352683 DOI: 10.1155/2021/9963576] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/14/2021] [Accepted: 07/09/2021] [Indexed: 11/17/2022]
Abstract
People are often concerned about the relationships between symptoms and diseases when seeking medical advice. In this paper, medical data are first divided into three subsets: records of main disease categories, records of subclass disease types, and records of specific diseases. Two symptom-only disease recognition methods are then given for main disease category identification, subclass disease type identification, and specific disease identification, adopting a neural network and a support vector machine (SVM), respectively. In the validation part, the accuracy of the two diagnosis methods is tested and compared. Results show that automatic disease prediction based only on symptoms is feasible for intelligent medical triage and common disease diagnosis.
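The SVM variant of such a symptom-only classifier can be sketched with scikit-learn; the symptom vocabulary, records, and labels below are invented for illustration and are not the paper's data or exact architecture:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

# Toy records: presence of symptoms -> main disease category.
records = [
    ({"fever": 1, "cough": 1, "sore_throat": 1}, "respiratory"),
    ({"fever": 1, "cough": 1, "chest_pain": 1}, "respiratory"),
    ({"nausea": 1, "abdominal_pain": 1, "diarrhea": 1}, "digestive"),
    ({"abdominal_pain": 1, "vomiting": 1}, "digestive"),
    ({"headache": 1, "dizziness": 1, "blurred_vision": 1}, "neurological"),
    ({"headache": 1, "numbness": 1}, "neurological"),
]
X = [symptoms for symptoms, _ in records]
y = [label for _, label in records]

# DictVectorizer turns sparse symptom dicts into a binary feature matrix.
model = make_pipeline(DictVectorizer(sparse=False), SVC(kernel="linear"))
model.fit(X, y)
print(model.predict([{"fever": 1, "cough": 1}])[0])
```

In the paper's hierarchical setup, one such classifier would be trained per level (main category, subclass, specific disease), each consuming the same symptom vector.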
19
Abstract
Lung ultrasound is increasingly used in emergency departments, medical wards, and critical care units (adult, pediatric, and neonatal). In vitro and in vivo studies show that the number and type of artifacts visualized change with lung density. This has led to the idea of a quantitative lung ultrasound approach, opening up new prospects for its use not only as a diagnostic but also as a monitoring tool. Consequently, the multiple scoring systems proposed in the last few years take different technical approaches and have specific clinical indications, adaptable for more or less time-dependent patients. However, multiple scoring systems may generate confusion among physicians aiming to introduce lung ultrasound into their clinical practice. This review describes the various lung ultrasound scoring systems and aims to clarify their use in different settings, focusing on technical aspects, validation against reference techniques, and clinical applications.
20
Spiegel JM, Ehrlich R, Yassi A, Riera F, Wilkinson J, Lockhart K, Barker S, Kistnasamy B. Using Artificial Intelligence for High-Volume Identification of Silicosis and Tuberculosis: A Bio-Ethics Approach. Ann Glob Health 2021; 87:58. [PMID: 34249620 PMCID: PMC8252970 DOI: 10.5334/aogh.3206] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
Although Artificial Intelligence (AI) is being increasingly applied, considerable distrust about introducing "disruptive" technologies persists. Intrinsic and contextual factors influencing where and how such innovations are introduced therefore require careful scrutiny to ensure that health equity is promoted. To illustrate one such critical approach, we describe and appraise an AI application - the development of computer assisted diagnosis (CAD) to support more efficient adjudication of compensation claims from former gold miners with occupational lung disease in Southern Africa. In doing so, we apply a bio-ethical lens that considers the principles of beneficence, non-maleficence, autonomy and justice, and add explicability as a core principle. We draw on the AI literature, our research on CAD validation and process efficiency, as well as apprehensions of users and stakeholders. Issues of concern included AI accuracy, biased training of AI systems, data privacy, impact on human skill development, transparency and accountability in AI use, as well as intellectual property ownership. We discuss ways in which each of these potential obstacles to successful use of CAD could be mitigated. We conclude that efforts to overcome technical challenges in applying AI must be accompanied from the outset by attention to ensuring its ethical use.
Affiliation(s)
- Jerry M. Spiegel
- School of Population and Public Health, The University of British Columbia, Vancouver, BC, Canada
- Rodney Ehrlich
- School of Public Health and Family Medicine, University of Cape Town, Cape Town, South Africa
- Annalee Yassi
- School of Population and Public Health, The University of British Columbia, Vancouver, BC, Canada
- Karen Lockhart
- School of Population and Public Health, The University of British Columbia, Vancouver, BC, Canada
- Stephen Barker
- School of Population and Public Health, The University of British Columbia, Vancouver, BC, Canada
21
Chillakuru YR, Kranen K, Doppalapudi V, Xiong Z, Fu L, Heydari A, Sheth A, Seo Y, Vu T, Sohn JH. High precision localization of pulmonary nodules on chest CT utilizing axial slice number labels. BMC Med Imaging 2021; 21:66. [PMID: 33836677 PMCID: PMC8034095 DOI: 10.1186/s12880-021-00594-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Accepted: 03/08/2021] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Reidentification of prior nodules for temporal comparison is an important but time-consuming step in lung cancer screening. We develop and evaluate an automated nodule detector that utilizes the axial-slice number of nodules found in radiology reports to generate high precision nodule predictions. METHODS 888 CTs from Lung Nodule Analysis were used to train a 2-dimensional (2D) object detection neural network. A pipeline of 2D object detection, 3D unsupervised clustering, false positive reduction, and axial-slice numbers was used to generate nodule candidates. 47 CTs from the National Lung Cancer Screening Trial (NLST) were used for model evaluation. RESULTS Our nodule detector achieved a precision of 0.962 at a recall of 0.573 on the NLST test set for any nodule. When adjusting for unintended nodule predictions, we achieved a precision of 0.931 at a recall of 0.561, which corresponds to 0.06 false positives per CT. Error analysis revealed better detection of nodules with soft tissue attenuation compared to ground glass and undeterminable attenuation. Nodule margins, size, location, and patient demographics did not differ between correct and incorrect predictions. CONCLUSIONS Utilization of axial-slice numbers from radiology reports allowed for development of a lung nodule detector with a low false positive rate compared to prior feature-engineering and machine learning approaches. This high precision nodule detector can reduce time spent on reidentification of prior nodules during lung cancer screening and can rapidly develop new institutional datasets to explore novel applications of computer vision in lung cancer imaging.
Affiliation(s)
- Yeshwant Reddy Chillakuru
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 505 Parnassus Ave, San Francisco, CA, 94143, USA
- George Washington University School of Medicine, 2300 I St NW, Washington, DC, 20052, USA
- Kyle Kranen
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 505 Parnassus Ave, San Francisco, CA, 94143, USA
- Vishnu Doppalapudi
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 505 Parnassus Ave, San Francisco, CA, 94143, USA
- Zhangyuan Xiong
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 505 Parnassus Ave, San Francisco, CA, 94143, USA
- Letian Fu
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 505 Parnassus Ave, San Francisco, CA, 94143, USA
- Aarash Heydari
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 505 Parnassus Ave, San Francisco, CA, 94143, USA
- Aditya Sheth
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 505 Parnassus Ave, San Francisco, CA, 94143, USA
- Youngho Seo
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 505 Parnassus Ave, San Francisco, CA, 94143, USA
- Thienkhai Vu
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 505 Parnassus Ave, San Francisco, CA, 94143, USA
- Jae Ho Sohn
- Department of Radiology and Biomedical Imaging, University of California San Francisco, 505 Parnassus Ave, San Francisco, CA, 94143, USA
22
Dwivedi K, Sharkey M, Condliffe R, Uthoff JM, Alabed S, Metherall P, Lu H, Wild JM, Hoffman EA, Swift AJ, Kiely DG. Pulmonary Hypertension in Association with Lung Disease: Quantitative CT and Artificial Intelligence to the Rescue? State-of-the-Art Review. Diagnostics (Basel) 2021; 11:diagnostics11040679. [PMID: 33918838 PMCID: PMC8070579 DOI: 10.3390/diagnostics11040679] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Revised: 04/05/2021] [Accepted: 04/05/2021] [Indexed: 12/24/2022] Open
Abstract
Accurate phenotyping of patients with pulmonary hypertension (PH) is an integral part of informing disease classification, treatment, and prognosis. The impact of lung disease on PH outcomes and response to treatment remains a challenging area with limited progress. Imaging with computed tomography (CT) plays an important role in patients with suspected PH when assessing for parenchymal lung disease; however, current assessments are limited by their semi-qualitative nature. Quantitative chest-CT (QCT) allows numerical quantification of lung parenchymal disease beyond subjective visual assessment. This has facilitated advances in radiological assessment and clinical correlation of a range of lung diseases including emphysema, interstitial lung disease, and coronavirus disease 2019 (COVID-19). Artificial intelligence approaches have the potential to facilitate rapid quantitative assessments. Benefits of cross-sectional imaging include ease and speed of scan acquisition, repeatability, and the potential for novel insights beyond visual assessment alone. Potential clinical benefits include improved phenotyping and prediction of treatment response and survival. Artificial intelligence approaches also have the potential to aid more focused study of pulmonary arterial hypertension (PAH) therapies by identifying more homogeneous subgroups of patients with lung disease. This state-of-the-art review summarizes recent QCT developments and potential applications in patients with PH with a focus on lung disease.
Affiliation(s)
- Krit Dwivedi
- Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield S10 2RX, UK
- Correspondence:
- Michael Sharkey
- Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield S10 2RX, UK
- Radiology Department, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield S10 2JF, UK
- Robin Condliffe
- Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield S10 2RX, UK
- Sheffield Pulmonary Vascular Disease Unit, Royal Hallamshire Hospital, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield S10 2JF, UK
- Johanna M. Uthoff
- Department of Computer Science, University of Sheffield, Sheffield S1 4DP, UK
- Samer Alabed
- Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield S10 2RX, UK
- Peter Metherall
- Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield S10 2RX, UK
- Radiology Department, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield S10 2JF, UK
- Haiping Lu
- Department of Computer Science, University of Sheffield, Sheffield S1 4DP, UK
- INSIGNEO, Institute for In Silico Medicine, University of Sheffield, Sheffield S1 3JD, UK
- Jim M. Wild
- Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield S10 2RX, UK
- INSIGNEO, Institute for In Silico Medicine, University of Sheffield, Sheffield S1 3JD, UK
- Eric A. Hoffman
- Advanced Pulmonary Physiomic Imaging Laboratory, University of Iowa, C748 GH, Iowa City, IA 52242, USA
- Andrew J. Swift
- Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield S10 2RX, UK
- Radiology Department, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield S10 2JF, UK
- INSIGNEO, Institute for In Silico Medicine, University of Sheffield, Sheffield S1 3JD, UK
- David G. Kiely
- Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield S10 2RX, UK
- Sheffield Pulmonary Vascular Disease Unit, Royal Hallamshire Hospital, Sheffield Teaching Hospitals NHS Foundation Trust, Sheffield S10 2JF, UK
- INSIGNEO, Institute for In Silico Medicine, University of Sheffield, Sheffield S1 3JD, UK
23
Perepelkina T, Fulton AB. Artificial Intelligence (AI) Applications for Age-Related Macular Degeneration (AMD) and Other Retinal Dystrophies. Semin Ophthalmol 2021; 36:304-309. [PMID: 33764255 DOI: 10.1080/08820538.2021.1896756] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
Artificial intelligence (AI), with its subdivisions (machine and deep learning), is a new branch of computer science that has shown impressive results across a variety of domains. The applications of AI to medicine and biology are being widely investigated. Medical specialties that rely heavily on images, including radiology, dermatology, oncology and ophthalmology, were the first to explore AI approaches in analysis and diagnosis. Applications of AI in ophthalmology have concentrated on diseases with high prevalence, such as diabetic retinopathy, retinopathy of prematurity, age-related macular degeneration (AMD), and glaucoma. Here we provide an overview of AI applications for diagnosis, classification, and clinical management of AMD and other macular dystrophies.
Affiliation(s)
- Tatiana Perepelkina
- Department of Ophthalmology, Boston Children's Hospital, Harvard Medical School, Boston, United States
- Anne B Fulton
- Department of Ophthalmology, Boston Children's Hospital, Harvard Medical School, Boston, United States
24
Ursin F, Timmermann C, Steger F. Ethical Implications of Alzheimer's Disease Prediction in Asymptomatic Individuals through Artificial Intelligence. Diagnostics (Basel) 2021; 11:diagnostics11030440. [PMID: 33806501 PMCID: PMC7998766 DOI: 10.3390/diagnostics11030440] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2021] [Revised: 02/09/2021] [Accepted: 02/25/2021] [Indexed: 11/25/2022] Open
Abstract
Biomarker-based predictive tests for subjectively asymptomatic Alzheimer’s disease (AD) are utilized in research today. Novel applications of artificial intelligence (AI) promise to predict the onset of AD several years in advance without determining biomarker thresholds. Until now, little attention has been paid to the new ethical challenges that AI brings to early diagnosis in asymptomatic individuals, beyond contributing to research purposes, when we still lack adequate treatment. The aim of this paper is to explore the ethical arguments put forward for AI-aided AD prediction in subjectively asymptomatic individuals and their ethical implications. The ethical assessment is based on a systematic literature search. Thematic analysis of the 18 included publications was conducted inductively. The ethical framework includes the principles of autonomy, beneficence, non-maleficence, and justice. Reasons for offering predictive tests to asymptomatic individuals are the right to know, a positive balance of the risk-benefit assessment, and the opportunity for future planning. Reasons against are the lack of disease-modifying treatment, the accuracy and explicability of AI-aided prediction, the right not to know, and threats to social rights. We conclude that there are serious ethical concerns in offering early diagnosis to asymptomatic individuals, and the issues raised by the application of AI add to the already known issues. Nevertheless, pre-symptomatic testing should only be offered on request to avoid inflicting harm. We recommend developing training for physicians in communicating AI-aided prediction.
25
Chan HP, Hadjiiski LM, Samala RK. Computer-aided diagnosis in the era of deep learning. Med Phys 2021; 47:e218-e227. [PMID: 32418340 DOI: 10.1002/mp.13764] [Citation(s) in RCA: 99] [Impact Index Per Article: 33.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2019] [Revised: 05/13/2019] [Accepted: 05/13/2019] [Indexed: 12/15/2022] Open
Abstract
Computer-aided diagnosis (CAD) has been a major field of research for the past few decades. CAD uses machine learning methods to analyze imaging and/or nonimaging patient data and makes assessment of the patient's condition, which can then be used to assist clinicians in their decision-making process. The recent success of the deep learning technology in machine learning spurs new research and development efforts to improve CAD performance and to develop CAD for many other complex clinical tasks. In this paper, we discuss the potential and challenges in developing CAD tools using deep learning technology or artificial intelligence (AI) in general, the pitfalls and lessons learned from CAD in screening mammography and considerations needed for future implementation of CAD or AI in clinical use. It is hoped that the past experiences and the deep learning technology will lead to successful advancement and lasting growth in this new era of CAD, thereby enabling CAD to deliver intelligent aids to improve health care.
Affiliation(s)
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, MI, 48109-5842, USA
- Lubomir M Hadjiiski
- Department of Radiology, University of Michigan, Ann Arbor, MI, 48109-5842, USA
- Ravi K Samala
- Department of Radiology, University of Michigan, Ann Arbor, MI, 48109-5842, USA
26
Bouchelouche K, Sathekge MM. Letter from the Editors. Semin Nucl Med 2021; 51:99-101. [PMID: 33509375 DOI: 10.1053/j.semnuclmed.2020.11.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
27
Xing F, Zhang X, Cornish TC. Artificial intelligence for pathology. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00011-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
28
Lee D, Yoon SN. Application of Artificial Intelligence-Based Technologies in the Healthcare Industry: Opportunities and Challenges. International Journal of Environmental Research and Public Health 2021; 18:E271. [PMID: 33401373 PMCID: PMC7795119 DOI: 10.3390/ijerph18010271] [Citation(s) in RCA: 123] [Impact Index Per Article: 41.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/07/2020] [Revised: 12/23/2020] [Accepted: 12/24/2020] [Indexed: 12/17/2022]
Abstract
This study examines the current state of artificial intelligence (AI)-based technology applications and their impact on the healthcare industry. In addition to a thorough review of the literature, this study analyzed several real-world examples of AI applications in healthcare. The results indicate that major hospitals are, at present, using AI-enabled systems to augment medical staff in patient diagnosis and treatment activities for a wide range of diseases. In addition, AI systems are making an impact on improving the efficiency of nursing and managerial activities of hospitals. While AI is being embraced positively by healthcare providers, its applications provide both the utopian perspective (new opportunities) and the dystopian view (challenges to overcome). We discuss the details of those opportunities and challenges to provide a balanced view of the value of AI applications in healthcare. It is clear that rapid advances of AI and related technologies will help care providers create new value for their patients and improve the efficiency of their operational processes. Nevertheless, effective applications of AI will require effective planning and strategies to transform the entire care service and operations to reap the benefits of what technologies offer.
Affiliation(s)
- DonHee Lee
- College of Business Administration, Inha University, Incheon 22212, Korea
- Seong No Yoon
- Department of Business, Edward Waters College, Jacksonville, FL 32209, USA
29
Yang Q, Xu H, Tang X, Hu C, Wang P, Wáng YXJ, Wang Y, Ma G, Zhang B. Medical Imaging Engineering and Technology Branch of the Chinese Society of Biomedical Engineering expert consensus on the application of Emergency Mobile Cabin CT. Quant Imaging Med Surg 2020; 10:2191-2207. [PMID: 33139998 DOI: 10.21037/qims-20-980] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
Starting in December 2019, following the emergence of several COVID-19 cases in Wuhan City, Hubei Province, there was a rapid surge and spread of new COVID-19 cases throughout China. The disease has since been included in the Class B infectious diseases category, as stipulated in the Law of the People's Republic of China on the Prevention and Treatment of Infectious Diseases, and shall be managed according to Class A infectious diseases. During the early phases of COVID-19 infection, no specific pulmonary imaging features may be evident, or features overlapping with other pneumonia may be observed. Although CT is not the gold standard for the diagnosis of COVID-19, it is nonetheless a convenient and fast method, and its application can be deployed in community hospitals. Furthermore, CT can be used to render a suggestive diagnosis and evaluate the severity as well as the effects of therapeutic interventions for typical cases of COVID-19. The mobile emergency special CT device described in this document (also known as Emergency Mobile Cabin CT) has several unique characteristics, including its mobility, flexibility, and networking capabilities. Furthermore, it adopts a fully independent isolation design to avoid cross-infection between patients and medical staff. It can play an important role in screening suspected cases presenting with imaging features of COVID-19 in hospitals of various levels that provide care to suspected or confirmed COVID-19 patients as part of the first line procedures of epidemic prevention and control.
Affiliation(s)
- Qi Yang
- Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Haibo Xu
- Zhongnan Hospital of Wuhan University, Wuhan, China
- Chunhong Hu
- The First Affiliated Hospital of Soochow University, Soochow, China
- Peijun Wang
- Tongji Hospital of Tongji University, Shanghai, China
- Yì Xiáng J Wáng
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong, China
- Yaofa Wang
- Minfound Medical Systems Co. Ltd, Shaoxing, China
- Guolin Ma
- China-Japan Friendship Hospital, Beijing, China
- Bing Zhang
- The Affiliated Drum Tower Hospital of Nanjing University Medical School, Nanjing, China
30
Tajaldeen A, Alghamdi S. Evaluation of radiologist's knowledge about the Artificial Intelligence in diagnostic radiology: a survey-based study. Acta Radiol Open 2020; 9:2058460120945320. [PMID: 32821436 PMCID: PMC7412626 DOI: 10.1177/2058460120945320] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2020] [Accepted: 07/02/2020] [Indexed: 12/31/2022] Open
Abstract
Background Advanced developments in diagnostic radiology have provided a rapid increase in the number of radiological investigations worldwide. Recently, Artificial Intelligence (AI) has been applied in diagnostic radiology. The purpose of developing such applications is to clinically validate and make them feasible for the current practice of diagnostic radiology, in which there is less time for diagnosis. Purpose To assess radiologists’ knowledge about AI’s role and establish a baseline to help in providing educational activities on AI in diagnostic radiology in Saudi Arabia. Material and Methods An online questionnaire was designed using QuestionPro software. The study was conducted in large hospitals located in different regions in Saudi Arabia. A total of 93 participants completed the questionnaire, of which 32 (34%) were trainee radiologists from year 1 to year 4 (R1–R4) of the residency programme, 33 (36%) were radiologists and fellows, and 28 (30%) were consultants. Results The responses to the question related to the use of AI on a daily basis illustrated that 76 (82%) of the participants were not using any AI software at all during daily interpretation of diagnostic images. Only 17 (18%) reported that they used AI software for diagnostic radiology. Conclusion There is a significant lack of knowledge about AI in our residency programme and radiology departments at hospitals. Due to the rapid development of AI and its application in diagnostic radiology, there is an urgent need to enhance awareness about its role in different diagnostic fields.
Affiliation(s)
- Abdulrahman Tajaldeen
- Radiological Science Department, College of Applied Medical Science, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Salem Alghamdi
- Department of Medical Imaging and Radiation Sciences, College of Applied Medical Sciences, University of Jeddah, Jeddah, Saudi Arabia
31
Pallua JD, Brunner A, Zelger B, Schirmer M, Haybaeck J. The future of pathology is digital. Pathol Res Pract 2020; 216:153040. [PMID: 32825928 DOI: 10.1016/j.prp.2020.153040] [Citation(s) in RCA: 43] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/20/2020] [Accepted: 05/31/2020] [Indexed: 02/07/2023]
Abstract
Information, archives, and intelligent artificial systems are part of everyday life in modern medicine. They already support medical staff by mapping their workflows with shared availability of case referral information, as needed, for example, by the pathologist, and this support will increase further in the future. In radiology, established standards define information models, data transmission mechanisms, and workflows. Other disciplines, such as pathology, cardiology, and radiation therapy, now define further demands in addition to these established standards. Pathology may have the highest technical demands on these systems, with very complex workflows and the digitization of slides generating enormous amounts of data, up to gigabytes per biopsy. Digital pathology allows a change from classical histopathological diagnosis with microscopes and glass slides to virtual microscopy on the computer, with multiple tools using artificial intelligence and machine learning to support pathologists in their future work.
Affiliation(s)
- J D Pallua
- Department of Pathology, Neuropathology and Molecular Pathology, Medical University of Innsbruck, Müllerstraße 44, A-6020, Innsbruck, Austria
- A Brunner
- Department of Pathology, Neuropathology and Molecular Pathology, Medical University of Innsbruck, Müllerstraße 44, A-6020, Innsbruck, Austria
- B Zelger
- Department of Pathology, Neuropathology and Molecular Pathology, Medical University of Innsbruck, Müllerstraße 44, A-6020, Innsbruck, Austria
- M Schirmer
- Department of Internal Medicine, Clinic II, Medical University of Innsbruck, Anichstrasse 35, A-6020, Innsbruck, Austria
- J Haybaeck
- Department of Pathology, Neuropathology and Molecular Pathology, Medical University of Innsbruck, Müllerstraße 44, A-6020, Innsbruck, Austria; Department of Pathology, Medical Faculty, Otto-von-Guericke University Magdeburg, Leipzigerstrasse 44, D-Magdeburg, Germany; Diagnostic & Research Center for Molecular BioMedicine, Institute of Pathology, Medical University of Graz, Neue Stiftingtalstrasse 6, A-8010, Graz, Austria
32
Nagi R, Aravinda K, Rakesh N, Gupta R, Pal A, Mann AK. Clinical applications and performance of intelligent systems in dental and maxillofacial radiology: A review. Imaging Sci Dent 2020; 50:81-92. [PMID: 32601582 PMCID: PMC7314602 DOI: 10.5624/isd.2020.50.2.81] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2019] [Revised: 01/12/2020] [Accepted: 02/12/2020] [Indexed: 12/25/2022] Open
Abstract
Intelligent systems (i.e., artificial intelligence), particularly deep learning, are machines able to mimic the cognitive functions of humans to perform tasks of problem-solving and learning. This field deals with computational models that can think and act intelligently, like the human brain, and construct algorithms that can learn from data to make predictions. Artificial intelligence is becoming important in radiology due to its ability to detect abnormalities in radiographic images that are unnoticed by the naked human eye. These systems have reduced radiologists' workload by rapidly recording and presenting data, and thereby monitoring the treatment response with a reduced risk of cognitive bias. Intelligent systems have an important role to play and could be used by dentists as an adjunct to other imaging modalities in making appropriate diagnoses and treatment plans. In the field of maxillofacial radiology, these systems have shown promise for the interpretation of complex images, accurate localization of landmarks, characterization of bone architecture, estimation of oral cancer risk, and the assessment of metastatic lymph nodes, periapical pathologies, and maxillary sinus pathologies. This review discusses the clinical applications and scope of intelligent systems such as machine learning, artificial intelligence, and deep learning programs in maxillofacial imaging.
Affiliation(s)
- Ravleen Nagi
- Department of Oral Medicine and Radiology, Swami Devi Dyal Hospital and Dental College, Panchkula, India
- Konidena Aravinda
- Department of Oral Medicine and Radiology, Swami Devi Dyal Hospital and Dental College, Panchkula, India
- N Rakesh
- Department of Oral Medicine and Radiology, Faculty of Dental Sciences, M.S. Ramaiah University of Applied Sciences, Bengaluru, Karnataka, India
- Rajesh Gupta
- Department of Oral Medicine and Radiology, Swami Devi Dyal Hospital and Dental College, Panchkula, India
- Ajay Pal
- Department of Oral Medicine and Radiology, Swami Devi Dyal Hospital and Dental College, Panchkula, India
- Amrit Kaur Mann
- Department of Oral Medicine and Radiology, Swami Devi Dyal Hospital and Dental College, Panchkula, India
33
Comparative Analysis of Rhino-Cytological Specimens with Image Analysis and Deep Learning Techniques. Electronics 2020. [DOI: 10.3390/electronics9060952] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Cytological study of the nasal mucosa (also known as rhino-cytology) represents an important diagnostic aid that allows highlighting of the presence of some types of rhinitis through the analysis of cellular features visible under a microscope. Nowadays, the automated detection and classification of cells benefit from the capacity of deep learning techniques in processing digital images of the cytological preparation. Even though the results of such automatic systems need to be validated by a specialized rhino-cytologist, this technology represents a valid support that aims at increasing the accuracy of the analysis while reducing the required time and effort. The quality of the rhino-cytological preparation, which is clearly important for the microscope observation phase, is also fundamental for the automatic classification process. In fact, the slide-preparing technique turns out to be a crucial factor among the multiple ones that may modify the morphological and chromatic characteristics of the cells. This paper aims to investigate the possible differences between direct smear (SM) and cytological centrifugation (CYT) slide-preparation techniques, in order to preserve image quality during the observation and cell classification phases in rhino-cytology. Firstly, a comparative study based on image analysis techniques has been put forward. The extraction of densitometric and morphometric features has made it possible to quantify and describe the spatial distribution of the cells in the field images observed under the microscope. Statistical analysis of the distribution of these features has been used to evaluate the degree of similarity between images acquired from SM and CYT slides. 
The results show an important difference in the observation process between cells prepared with the two techniques, with reference to cell density and spatial distribution: the analysis of CYT slides was more difficult than that of the SM ones because the spatial distribution of the cells results in a lower cell density than on the SM slides. As a secondary part of this study, a performance assessment of the computer-aided diagnosis (CAD) system called Rhino-cyt was also carried out on both groups of slide image types.
34
Ngam PI, Ong CC, Chai P, Wong SS, Liang CR, Teo LLS. Computed tomography coronary angiography - past, present and future. Singapore Med J 2020; 61:109-115. [PMID: 32488269] [DOI: 10.11622/smedj.2020028]
Abstract
Computed tomography coronary angiography (CTCA) is a robust and reliable non-invasive alternative imaging modality to invasive coronary angiography, which is the reference standard in evaluating the degree of coronary artery stenosis. CTCA has high negative predictive value and can confidently exclude significant coronary artery disease (CAD) in low to intermediate risk patients. Over the years, substantial effort has been made to reduce the radiation dose and increase the cost efficiency of CTCA. In this review, we present the evolution of computed tomography scanners in the context of coronary artery imaging as well as its clinical applications and limitations. We also highlight the future directions of CTCA as a one-stop non-invasive imaging modality for anatomic and functional assessment of CAD.
Affiliation(s)
- Pei Ing Ngam
- Department of Diagnostic Imaging, National University Hospital, Singapore
- Ching Ching Ong
- Department of Diagnostic Imaging, National University Hospital, Singapore
- Ping Chai
- Department of Cardiology, National University Heart Centre Singapore, Singapore
- Siong Sung Wong
- Department of Cardiology, National University Heart Centre Singapore, Singapore
- Chong Ri Liang
- Department of Diagnostic Imaging, National University Hospital, Singapore
- Lynette Li San Teo
- Department of Diagnostic Imaging, National University Hospital, Singapore

35
Abdalla-Aslan R, Yeshua T, Kabla D, Leichter I, Nadler C. An artificial intelligence system using machine-learning for automatic detection and classification of dental restorations in panoramic radiography. Oral Surg Oral Med Oral Pathol Oral Radiol 2020; 130:593-602. [PMID: 32646672] [DOI: 10.1016/j.oooo.2020.05.012]
Abstract
OBJECTIVES The aim of this study was to develop a computer vision algorithm based on artificial intelligence, designed to automatically detect and classify various dental restorations on panoramic radiographs. STUDY DESIGN A total of 738 dental restorations in 83 anonymized panoramic images were analyzed. Images were automatically cropped to obtain the region of interest containing maxillary and mandibular alveolar ridges. Subsequently, the restorations were segmented by using a local adaptive threshold. The segmented restorations were classified into 11 categories, and the algorithm was trained to classify them. Numerical features based on the shape and distribution of gray level values extracted by the algorithm were used for classifying the restorations into different categories. Finally, a Cubic Support Vector Machine algorithm with Error-Correcting Output Codes was used with a cross-validation approach for the multiclass classification of the restorations according to these features. RESULTS The algorithm detected 94.6% of the restorations. Classification eliminated all erroneous marks, and ultimately, 90.5% of the restorations were marked on the image. The overall accuracy of the classification stage in discriminating between the true restoration categories was 93.6%. CONCLUSIONS This machine-learning algorithm demonstrated excellent performance in detecting and classifying dental restorations on panoramic images.
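The segmentation step above relies on a local adaptive threshold. As an illustration only (the paper's actual implementation, window size, and offset are not given, so all names and parameters here are assumptions), a minimal mean-based adaptive threshold can be sketched in pure Python:

```python
def adaptive_threshold(image, window=1, offset=0.0):
    """Binarize a 2-D grayscale image (list of lists): a pixel becomes
    foreground (1) when it exceeds the mean of its local
    (2*window+1) x (2*window+1) neighborhood plus an offset."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather the neighborhood, clipped at the image borders.
            vals = [image[yy][xx]
                    for yy in range(max(0, y - window), min(h, y + window + 1))
                    for xx in range(max(0, x - window), min(w, x + window + 1))]
            local_mean = sum(vals) / len(vals)
            out[y][x] = 1 if image[y][x] > local_mean + offset else 0
    return out
```

Unlike a single global threshold, this keeps a bright restoration detectable even when overall brightness varies across the radiograph, since each pixel is compared only against its surroundings.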
Affiliation(s)
- Ragda Abdalla-Aslan
- Researcher, Attending Physician, Department of Oral and Maxillofacial Surgery, Rambam Health Care Campus, Haifa, Israel
- Talia Yeshua
- Lecturer, Department of Applied Physics/Electro-optics Engineering, The Jerusalem College of Technology, Jerusalem, Israel
- Daniel Kabla
- Department of Electrical and Electronics Engineering, The Jerusalem College of Technology, Jerusalem, Israel
- Isaac Leichter
- Professor Emeritus, Department of Applied Physics/Electro-optics Engineering, The Jerusalem College of Technology, Jerusalem, Israel
- Chen Nadler
- Lecturer, Oral Maxillofacial Imaging Unit, Oral Medicine Department, the Hebrew University, Hadassah School of Dental Medicine, Ein Kerem, Hadassah Medical Center, Jerusalem, Israel

36
Abstract
Rhinology studies the anatomy, physiology and diseases affecting the nasal region: one of the most modern techniques to diagnose these diseases is nasal cytology or rhinocytology, which involves analyzing the cells contained in the nasal mucosa under a microscope and searching for other elements, such as bacteria, that may suggest a pathology. During microscopic observation, bacteria can be detected in the form of biofilm, that is, a bacterial colony surrounded by a protective organic extracellular matrix made of polysaccharides. In the field of nasal cytology, the presence of biofilm in microscopic samples denotes the presence of an infection. In this paper, we describe the design and testing of a diagnostic support system for the automatic detection of biofilm, based on a convolutional neural network (CNN). To demonstrate the reliability of the system, alternative solutions based on isolation forest and deep random forest techniques were also tested. Texture analysis is used, with Haralick feature extraction and dominant color. The CNN-based biofilm detection system shows an accuracy of about 98%, an average accuracy of about 100% on the test set and about 99% on the validation set. The CNN-based system designed in this study is confirmed as the most reliable among the best automatic image recognition technologies in the specific context of this study. The developed system allows the specialist to obtain a rapid and accurate identification of the biofilm in the slide images.
37
Ishii E, Ebner DK, Kimura S, Agha-Mir-Salim L, Uchimido R, Celi LA. The advent of medical artificial intelligence: lessons from the Japanese approach. J Intensive Care 2020; 8:35. [PMID: 32467762] [PMCID: PMC7236126] [DOI: 10.1186/s40560-020-00452-5]
Abstract
Artificial intelligence or AI has been heralded as the most transformative technology in healthcare, including critical care medicine. Globally, healthcare specialists and health ministries are being pressured to create and implement a roadmap to incorporate applications of AI into care delivery. To date, the majority of Japan’s approach to AI has been anchored in industry, and the challenges that have occurred therein offer important lessons for nations developing new AI strategies. Notably, the demand for an AI-literate workforce has outpaced training programs and knowledge. This is particularly observable within medicine, where clinicians may be unfamiliar with the technology. National policy and private sector involvement have shown promise in developing both workforce and AI applications in healthcare. In combination with Japan’s unique national healthcare system and aggregable healthcare and socioeconomic data, Japan has a rich opportunity to lead in the field of medical AI.
Affiliation(s)
- Euma Ishii
- Department of Global Health Promotion, Tokyo Medical and Dental University, 1 Chome-5-45 Yushima, Bunkyo City, Tokyo, 113-8510 Japan; Department of Intensive Care Medicine, Tokyo Medical and Dental University, 1 Chome-5-45 Yushima, Bunkyo City, Tokyo, 113-8510 Japan; Institute for Medical Engineering and Science, Massachusetts Institute of Technology, 77 Massachusetts Avenue, E25-505, Cambridge, MA 02142 USA
- Daniel K Ebner
- Alpert Medical School of Brown University, 222 Richmond St, Providence, RI 02906 USA
- Satoshi Kimura
- Department of Anesthesiology and Resuscitation, Okayama University Hospital, 2-5-1 Shikata-cho, Kita-ku, Okayama, 700-8558 Japan
- Louis Agha-Mir-Salim
- Faculty of Medicine, University of Southampton, University Road, Southampton, SO17 1BJ UK
- Ryo Uchimido
- Department of Intensive Care Medicine, Tokyo Medical and Dental University, 1 Chome-5-45 Yushima, Bunkyo City, Tokyo, 113-8510 Japan; Beth Israel Deaconess Medical Center, 330 Brookline Avenue, Boston, MA 02215 USA
- Leo A Celi
- Institute for Medical Engineering and Science, Massachusetts Institute of Technology, 77 Massachusetts Avenue, E25-505, Cambridge, MA 02142 USA; Beth Israel Deaconess Medical Center, 330 Brookline Avenue, Boston, MA 02215 USA

38
Fanizzi A, Basile TMA, Losurdo L, Bellotti R, Bottigli U, Dentamaro R, Didonna V, Fausto A, Massafra R, Moschetta M, Popescu O, Tamborra P, Tangaro S, La Forgia D. A machine learning approach on multiscale texture analysis for breast microcalcification diagnosis. BMC Bioinformatics 2020; 21:91. [PMID: 32164532] [PMCID: PMC7069158] [DOI: 10.1186/s12859-020-3358-4]
Abstract
Background Screening programs use mammography as the primary diagnostic tool for detecting breast cancer at an early stage. The diagnosis of some lesions, such as microcalcifications, is still difficult today for radiologists. In this paper, we proposed an automatic binary model for discriminating tissue in digital mammograms, as a support tool for radiologists. In particular, we compared the contribution of different methods to the feature selection process in terms of learning performance and selected features. Results For each ROI, we extracted textural features on Haar wavelet decompositions, as well as interest points and corners detected by using Speeded Up Robust Features (SURF) and the Minimum Eigenvalue Algorithm (MinEigenAlg). Then, a Random Forest binary classifier was trained on a sub-set of features selected by two different kinds of feature selection techniques, namely filter and embedded methods. We tested the proposed model on 260 ROIs extracted from digital mammograms of the BCDR public database. The best prediction performance for the normal/abnormal and benign/malignant problems reached a median AUC value of 98.16% and 92.08%, and an accuracy of 97.31% and 88.46%, respectively. The experimental results were comparable with the performance reported in related work. Conclusions The best performing model obtained with the embedded method is more parsimonious than the filter one. The SURF and MinEigen algorithms provide strong informative content useful for the characterization of microcalcification clusters.
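Filter-style feature selection, one of the two families compared above, ranks each feature by a univariate statistic computed independently of the classifier. A minimal sketch, assuming a correlation-based ranking criterion (the paper's exact filter statistic is not specified here; all function names are illustrative):

```python
def pearson(x, y):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / ((vx * vy) ** 0.5) if vx and vy else 0.0

def filter_select(X, y, k):
    """Rank the columns of X by |correlation| with the binary labels y
    and return the indices of the top-k features."""
    scores = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        scores.append((abs(pearson(col, y)), j))
    scores.sort(reverse=True)
    return [j for _, j in scores[:k]]
```

Because the ranking ignores feature interactions, a filter tends to keep more redundant features than an embedded method, which is consistent with the paper's observation that the embedded selection was more parsimonious.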
Affiliation(s)
- Annarita Fanizzi
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", viale O. Flacco 65, Bari, Italy
- Teresa M A Basile
- Dip. Interateneo di Fisica "M. Merlin", Università degli Studi di Bari "A. Moro", via G. Amendola 173, Bari, Italy; INFN - Istituto Nazionale di Fisica Nucleare, sezione di Bari, via G. Amendola 173, Bari, Italy
- Liliana Losurdo
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", viale O. Flacco 65, Bari, Italy
- Roberto Bellotti
- Dip. Interateneo di Fisica "M. Merlin", Università degli Studi di Bari "A. Moro", via G. Amendola 173, Bari, Italy; INFN - Istituto Nazionale di Fisica Nucleare, sezione di Bari, via G. Amendola 173, Bari, Italy
- Ubaldo Bottigli
- Dip. di Scienze Fisiche, della Terra e dell'Ambiente, Università degli Studi di Siena, strada Laterina 2, Siena, Italy
- Rosalba Dentamaro
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", viale O. Flacco 65, Bari, Italy
- Vittorio Didonna
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", viale O. Flacco 65, Bari, Italy
- Alfonso Fausto
- Dip. di Diagnostica delle Immagini, Ospedale Universitario di Siena, viale Bracci 16, Siena, Italy
- Raffaella Massafra
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", viale O. Flacco 65, Bari, Italy
- Marco Moschetta
- Dip. Interdisciplinare di Medicina, Università degli Studi di Bari "A. Moro", piazza G. Cesare 11, Bari, Italy
- Ondina Popescu
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", viale O. Flacco 65, Bari, Italy
- Pasquale Tamborra
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", viale O. Flacco 65, Bari, Italy
- Sabina Tangaro
- INFN - Istituto Nazionale di Fisica Nucleare, sezione di Bari, via G. Amendola 173, Bari, Italy
- Daniele La Forgia
- I.R.C.C.S. Istituto Tumori "Giovanni Paolo II", viale O. Flacco 65, Bari, Italy

39
Sriram I, Harland R, Lowenstein SR. I, EHR. J Hosp Med 2020; 15:119-120. [PMID: 31112500] [DOI: 10.12788/jhm.3211]
Affiliation(s)
- Indira Sriram
- University of Colorado School of Medicine, Aurora, Colorado
- Robin Harland
- University of Colorado School of Medicine, Aurora, Colorado
- Steven R Lowenstein
- University of Colorado School of Medicine, Aurora, Colorado
- Department of Emergency Medicine and Office of the Dean, University of Colorado School of Medicine, Aurora, Colorado

40
Kim YJ, Ganbold B, Kim KG. Web-Based Spine Segmentation Using Deep Learning in Computed Tomography Images. Healthc Inform Res 2020; 26:61-67. [PMID: 32082701] [PMCID: PMC7010941] [DOI: 10.4258/hir.2020.26.1.61]
Abstract
Objectives Back pain, especially lower back pain, is experienced by 60% to 80% of adults at some point during their lives. Various studies have found that lower back pain is a very common problem among adolescents, and the highest incidence rates are for adults in their 30s. There has been a remarkable increase in the use of computer-aided diagnosis to assist doctors in the interpretation of medical images. Spine segmentation in computed tomography (CT) scans using algorithmic methods allows improved diagnosis of back pain. Methods In this study, we developed a web-based automatic spine segmentation method using deep learning and evaluated it with the dice coefficient computed against the predicted images. Our method is based on convolutional neural networks for segmentation. More specifically, we train a U-Net architecture, with the data stored in hierarchical data format, and then apply it to the labeled test data to perform segmentation. Thus, we obtained more specific and detailed results. A total of 344 CT images were used in the experiment. Of these, 330 were used for learning and the remaining 14 for testing. Results Our method achieved an average dice coefficient of 90.4%, a precision of 96.81%, and an F1-score of 91.64%. Conclusions The proposed web-based deep learning approach can be very practical and accurate for spine segmentation as a diagnostic method.
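The dice coefficient reported above has a simple closed form for binary masks A and B: 2|A ∩ B| / (|A| + |B|). A minimal sketch over flattened 0/1 masks (the function name is illustrative, not from the paper):

```python
def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks given as flat 0/1 lists:
    2 * |A ∩ B| / (|A| + |B|). Returns 1.0 when both masks are empty."""
    inter = sum(p * t for p, t in zip(pred, truth))   # |A ∩ B|
    total = sum(pred) + sum(truth)                    # |A| + |B|
    return 2.0 * inter / total if total else 1.0
```

A value of 90.4%, as reported, means the predicted and reference spine masks overlap in roughly 90% of their combined foreground area.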
Affiliation(s)
- Young Jae Kim
- Department of Biomedical Engineering, College of Health Science, Gachon University, Incheon, Korea; Department of Biomedical Engineering, College of Medicine, Gachon University, Incheon, Korea; Medical Device R&D Center, Biomedical & Convergence Institute, Gachon University Gil Hospital, Incheon, Korea
- Bilegt Ganbold
- Department of Health Sciences and Technology, Gachon Advanced Institute for Health Sciences and Technology (GAIHST), Gachon University, Incheon, Korea
- Kwang Gi Kim
- Department of Biomedical Engineering, College of Health Science, Gachon University, Incheon, Korea; Department of Health Sciences and Technology, Gachon Advanced Institute for Health Sciences and Technology (GAIHST), Gachon University, Incheon, Korea; Department of Biomedical Engineering, College of Medicine, Gachon University, Incheon, Korea; Medical Device R&D Center, Biomedical & Convergence Institute, Gachon University Gil Hospital, Incheon, Korea

41

42
Pérez-Medina C, Teunissen AJ, Kluza E, Mulder WJ, van der Meel R. Nuclear imaging approaches facilitating nanomedicine translation. Adv Drug Deliv Rev 2020; 154-155:123-141. [PMID: 32721459] [DOI: 10.1016/j.addr.2020.07.017]
Abstract
Nanomedicine approaches can effectively modulate the biodistribution and bioavailability of therapeutic agents, improving their therapeutic index. However, despite the ever-increasing amount of literature reporting on preclinical nanomedicine, the number of nanotherapeutics receiving FDA approval remains relatively low. Several barriers exist that hamper the effective preclinical evaluation and clinical translation of nanotherapeutics. Key barriers include insufficient understanding of nanomedicines' in vivo behavior, inadequate translation from murine models to larger animals, and a lack of patient stratification strategies. Integrating quantitative non-invasive imaging techniques in nanomedicine development offers attractive possibilities to address these issues. Among the available imaging techniques, nuclear imaging by positron emission tomography (PET) and single-photon emission computed tomography (SPECT) are highly attractive in this context owing to their quantitative nature and uncontested sensitivity. In basic and translational research, nuclear imaging techniques can provide critical quantitative information about pharmacokinetic parameters, biodistribution profiles or target site accumulation of nanocarriers and their associated payload. During clinical evaluation, nuclear imaging can be used to select patients amenable to nanomedicine treatment. Here, we review how nuclear imaging-based approaches are increasingly being integrated into nanomedicine development and discuss future developments that will accelerate their clinical translation.
43
Choi G, Nam BD, Hwang JH, Kim KU, Kim HJ, Kim DW. Missed Lung Cancers on Chest Radiograph: An Illustrative Review of Common Blind Spots on Chest Radiograph with Emphasis on Various Radiologic Presentations of Lung Cancers. J Korean Soc Radiol 2020; 81:351-364. [PMID: 36237379] [PMCID: PMC9431813] [DOI: 10.3348/jksr.2020.81.2.351]
Abstract
Missed lung cancers on chest radiograph (CXR) may delay the diagnosis and affect the prognosis. CXR is the primary imaging modality to evaluate the lungs and mediastinum in daily practice. The purpose of this article is to review chest radiographs for common blind spots and highlight the importance of various radiologic presentations in primary lung cancer to avoid significant diagnostic errors on CXR.
Affiliation(s)
- Goun Choi
- Department of Radiology, Soonchunhyang University Hospital, Seoul, Korea
- Bo Da Nam
- Department of Radiology, Soonchunhyang University Hospital, Seoul, Korea
- Jung Hwa Hwang
- Department of Radiology, Soonchunhyang University Hospital, Seoul, Korea
- Ki-Up Kim
- Department of Respiratory and Allergy Medicine, Soonchunhyang University Hospital, Seoul, Korea
- Hyun Jo Kim
- Department of Cardiothoracic Surgery, Soonchunhyang University Hospital, Seoul, Korea
- Dong Won Kim
- Department of Pathology, Soonchunhyang University Hospital, Seoul, Korea

44
Saadeh H, Abdullah N, Erashdi M, Sughayer M, Al-Kadi O. Histopathologist-level quantification of Ki-67 immunoexpression in gastroenteropancreatic neuroendocrine tumors using semiautomated method. J Med Imaging (Bellingham) 2019; 7:012704. [PMID: 31824983] [DOI: 10.1117/1.jmi.7.1.012704]
Abstract
The role of the Ki-67 index in determining the prognosis and management of gastroenteropancreatic neuroendocrine tumors (GEP-NETs) has become more important, yet it presents a challenging assessment dilemma. Although the precise method of Ki-67 index evaluation has not been standardized, several methods have been proposed, and each has its pros and cons. Our study proposes an imaging semiautomated informatics framework [semiautomated counting (SAC)] using the popular biomedical imaging tool "ImageJ" to quantify the Ki-67 index of GEP-NETs using camera-captured images of tumor hotspots. It aims to assist pathologists in achieving an accurate and rapid interpretation of the Ki-67 index and better reproducibility of the results with minimal human interaction and calibration. Twenty cases of resected GEP-NETs with Ki-67 staining that had been done for diagnostic purposes were randomly selected from the pathology archive. All of these cases were reviewed in a multidisciplinary cancer center between 2012 and 2019. For each case, the Ki-67 immunostained slide was evaluated and five camera-captured images at 40x magnification were taken. Prints of the images were used by three pathologists for manual counting (MC) of the tumor cells. The digital versions of the images were used for the semiautomated cell counting using ImageJ. Statistical analysis of the Ki-67 index correlation between the proposed method and MC revealed strong agreement on all the cases evaluated (n = 20), with an intraclass correlation coefficient of 0.993 (95% CI: 0.984 to 0.997). The results obtained from the SAC are promising and demonstrate the capability of this methodology for the development of reproducible and accurate semiautomated quantitative pathological assessments. ImageJ features were investigated carefully and fine-tuned to obtain the optimal sequence of steps that accurately calculates the Ki-67 index. SAC was able to accurately grade all the cases evaluated, perfectly matching the histopathologists' manual grading, and provides a reliable and efficient solution for Ki-67 index assessment.
Affiliation(s)
- Heba Saadeh
- The University of Jordan, King Abdullah II School for IT, Computer Science Department, Amman, Jordan
- Niveen Abdullah
- King Hussein Cancer Center, Department of Pathology and Laboratory Medicine, Al-Jubeiha, Amman, Jordan
- Madiha Erashdi
- King Hussein Cancer Center, Department of Pathology and Laboratory Medicine, Al-Jubeiha, Amman, Jordan
- Maher Sughayer
- King Hussein Cancer Center, Department of Pathology and Laboratory Medicine, Al-Jubeiha, Amman, Jordan
- Omar Al-Kadi
- The University of Jordan, King Abdullah II School for IT, Information Technology Department, Amman, Jordan

45
Kim J, Kim KH. Measuring the Effects of Education in Detecting Lung Cancer on Chest Radiographs: Utilization of a New Assessment Tool. J Cancer Educ 2019; 34:1213-1218. [PMID: 30255391] [DOI: 10.1007/s13187-018-1431-8]
Abstract
This study was designed to evaluate the effect of group and individualized educational lectures on the accurate interpretation of chest radiographs of lung cancer patients and to introduce a new educational tool for evaluating chest radiograph reading skills. "Hotspot" technology was used to measure the effect of education on interpreting chest radiographs. There were 48 participants in the study. Chest radiographs of 100 lung cancer patients and 11 healthy patients taken at various time points were used for evaluation. Using "hotspot" technology, lesions on each radiograph were outlined. Values were taken at baseline, after which the group received lectures. Several days later, the participants underwent exam 2. Exam 3 was conducted after individualized lectures. A final exam was taken after the participants underwent individualized training within 2 months. Scores significantly improved after the individual lessons (p < 0.001). This improvement in performance decreased in the final examination. Statistically significant differences were observed between exams 2 and 3 and between exam 3 and the final exam (p < 0.001, p < 0.001). Participants demonstrated more improvement in detecting lesions in abnormal chest radiographs than in identifying normal ones. Although there was significant improvement in detecting abnormal radiographs by the end of the study (p < 0.001), no improvement was observed in detecting normal ones. We measured the lung cancer detection rate using a new "hotspot" detection tool for chest radiographs. With the proposed scoring system, this tool could be objectively used in evaluating educational effects.
Affiliation(s)
- Junghyun Kim
- Veterans Health Service Medical Center, Seoul, Republic of Korea
- Kwan Hyoung Kim
- Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine, Uijeongbu St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea

46
Yoon Y, Hwang T, Choi H, Lee H. Classification of radiographic lung pattern based on texture analysis and machine learning. J Vet Sci 2019; 20:e44. [PMID: 31364328] [PMCID: PMC6669202] [DOI: 10.4142/jvs.2019.20.e44]
Abstract
This study evaluated the feasibility of using texture analysis and machine learning to distinguish radiographic lung patterns. A total of 1200 regions of interest (ROIs) including four specific lung patterns (normal, alveolar, bronchial, and unstructured interstitial) were obtained from 512 thoracic radiographs of 252 dogs and 65 cats. Forty-four texture parameters based on eight methods of texture analysis (first-order statistics, spatial gray-level-dependence matrices, gray-level-difference statistics, gray-level run length image statistics, neighborhood gray-tone difference matrices, fractal dimension texture analysis, Fourier power spectrum, and Laws' texture energy measures) were used to extract textural features from the ROIs. The texture parameters of each lung pattern were compared and used for training and testing of artificial neural networks. Classification performance was evaluated by calculating accuracy and the area under the receiver operating characteristic curve (AUC). Forty texture parameters showed significant differences between the lung patterns. The accuracy of lung pattern classification was 99.1% in the training dataset and 91.9% in the testing dataset. The AUCs were above 0.98 in the training set and above 0.92 in the testing dataset. Texture analysis and machine learning algorithms may potentially facilitate the evaluation of medical images.
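Several of the texture methods listed above, notably the spatial gray-level-dependence matrices, derive features from a gray-level co-occurrence matrix (GLCM). A minimal sketch of a normalized GLCM with two common derived features, contrast and homogeneity, assuming integer gray levels and a single pixel offset (the exact parameters used in the paper are not specified):

```python
def glcm(image, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset
    (dx, dy); image is a 2-D list of integer gray levels in [0, levels)."""
    m = [[0.0] * levels for _ in range(levels)]
    n = 0
    for y in range(len(image)):
        for x in range(len(image[0])):
            yy, xx = y + dy, x + dx
            if 0 <= yy < len(image) and 0 <= xx < len(image[0]):
                m[image[y][x]][image[yy][xx]] += 1
                n += 1
    return [[v / n for v in row] for row in m]

def contrast(p):
    """Sum of p(i,j) * (i-j)^2: large for rapidly varying texture."""
    k = len(p)
    return sum(p[i][j] * (i - j) ** 2 for i in range(k) for j in range(k))

def homogeneity(p):
    """Sum of p(i,j) / (1 + |i-j|): large for smooth, uniform texture."""
    k = len(p)
    return sum(p[i][j] / (1 + abs(i - j)) for i in range(k) for j in range(k))
```

On a uniform ROI the contrast is 0 and the homogeneity is 1; an alveolar or interstitial pattern with fine gray-level variation shifts mass off the matrix diagonal and raises the contrast.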
Affiliation(s)
- Youngmin Yoon
- Institute of Animal Medicine, College of Veterinary Medicine, Gyeongsang National University, Jinju 52828, Korea
- Taesung Hwang
- Institute of Animal Medicine, College of Veterinary Medicine, Gyeongsang National University, Jinju 52828, Korea
- Hojung Choi
- College of Veterinary Medicine, Chungnam National University, Daejeon 34134, Korea
- Heechun Lee
- Institute of Animal Medicine, College of Veterinary Medicine, Gyeongsang National University, Jinju 52828, Korea

47
Savelli B, Bria A, Molinara M, Marrocco C, Tortorella F. A multi-context CNN ensemble for small lesion detection. Artif Intell Med 2019; 103:101749. [PMID: 32143786] [DOI: 10.1016/j.artmed.2019.101749]
Abstract
In this paper, we propose a novel method for the detection of small lesions in digital medical images. Our approach is based on a multi-context ensemble of convolutional neural networks (CNNs), aiming at learning different levels of image spatial context and improving detection performance. The main innovation behind the proposed method is the use of multiple-depth CNNs, individually trained on image patches of different dimensions and then combined together. In this way, the final ensemble is able to find and locate abnormalities on the images by exploiting both the local features and the surrounding context of a lesion. Experiments focused on two well-known medical detection problems that have recently been addressed with CNNs: microcalcification detection on full-field digital mammograms and microaneurysm detection on ocular fundus images. To this end, we used two publicly available datasets, INbreast and E-ophtha. Statistically significantly better detection performance was obtained by the proposed ensemble with respect to other approaches in the literature, demonstrating its effectiveness in the detection of small abnormalities.
Affiliation(s)
- B Savelli
- Department of Electrical and Information Engineering, University of Cassino and L.M., Via G. Di Biasio 43, 03043 Cassino (FR), Italy
- A Bria
- Department of Electrical and Information Engineering, University of Cassino and L.M., Via G. Di Biasio 43, 03043 Cassino (FR), Italy
- M Molinara
- Department of Electrical and Information Engineering, University of Cassino and L.M., Via G. Di Biasio 43, 03043 Cassino (FR), Italy
- C Marrocco
- Department of Electrical and Information Engineering, University of Cassino and L.M., Via G. Di Biasio 43, 03043 Cassino (FR), Italy
- F Tortorella
- Department of Electrical, Information Engineering and Applied Mathematics, University of Salerno, via Giovanni Paolo II 132, 84084 Fisciano (SA), Italy

48
Borkmann S, Geterud K, Lundstam S, Hellström M. Frequency and radiological characteristics of previously overlooked renal cell carcinoma. Acta Radiol 2019; 60:1348-1359. [PMID: 30700094] [DOI: 10.1177/0284185118823362]
Affiliation(s)
- Simon Borkmann
- Department of Radiology, Institute of Clinical Sciences, The Sahlgrenska Academy, Gothenburg University, Gothenburg, Sweden
- The Sahlgrenska University Hospital, Gothenburg, Sweden
- Kjell Geterud
- Department of Radiology, Institute of Clinical Sciences, The Sahlgrenska Academy, Gothenburg University, Gothenburg, Sweden
- The Sahlgrenska University Hospital, Gothenburg, Sweden
- Sven Lundstam
- The Sahlgrenska University Hospital, Gothenburg, Sweden
- Department of Urology, Institute of Clinical Sciences, The Sahlgrenska Academy, Gothenburg University, Gothenburg, Sweden
- Mikael Hellström
- Department of Radiology, Institute of Clinical Sciences, The Sahlgrenska Academy, Gothenburg University, Gothenburg, Sweden
- The Sahlgrenska University Hospital, Gothenburg, Sweden
49
Comparison of 18F-FDG avidity at PET of benign and malignant pure ground-glass opacities: a paradox? Part II: artificial neural network integration of the PET/CT characteristics of ground-glass opacities to predict their likelihood of malignancy. Clin Radiol 2019; 74:692-696. [PMID: 31202569 DOI: 10.1016/j.crad.2019.04.024] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2019] [Accepted: 04/26/2019] [Indexed: 02/07/2023]
Abstract
AIM To assess the ability of artificial neural networks (ANNs) to predict the likelihood of malignancy of pure ground-glass opacities (GGOs), using observations from computed tomography (CT) and 2-[18F]-fluoro-2-deoxy-d-glucose (FDG) positron-emission tomography (PET) images and relevant clinical information. MATERIALS AND METHODS One hundred and twenty-five cases of pure GGOs described in a previous article were used to train and evaluate the performance of an ANN in predicting the likelihood of malignancy of each GGO. Eighty-five randomly selected cases were used for training the network and the remaining 40 cases for testing. The ANN was constructed from the image data and basic clinical information. The predictions of the ANN were compared with blinded expert estimates of the likelihood of malignancy. RESULTS The ANN showed excellent predictive value in estimating the likelihood of malignancy (AUC = 0.98±0.02). Employing the optimal cut-off point from the receiver operating characteristic (ROC) curve, the ANN correctly identified 11/11 malignant lesions (sensitivity 100%) and 27/29 benign lesions (specificity 93.1%). The expert readers found 23 lesions indeterminate and correctly identified 17 lesions as benign. CONCLUSION ANNs have potential to improve diagnostic certainty in the classification of pure GGOs, based upon their CT appearance, intensity of FDG uptake, and relevant clinical information, and may therefore be useful to help direct clinical and imaging follow-up.
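The abstract does not say how the "optimal cut-off point from the ROC curve" was chosen; one common choice is the threshold maximizing Youden's J (sensitivity + specificity − 1). The sketch below illustrates that selection rule on continuous malignancy scores; the function name is hypothetical and this is not the authors' published implementation.

```python
import numpy as np

def youden_cutoff(scores, labels):
    """Return the score threshold maximizing Youden's J statistic
    (sensitivity + specificity - 1), a common rule for picking the
    'optimal' operating point on a ROC curve."""
    thresholds = np.unique(scores)
    pos = labels == 1
    neg = labels == 0
    best_t, best_j = float(thresholds[0]), -1.0
    for t in thresholds:
        pred = scores >= t            # classify as malignant at/above t
        sens = np.mean(pred[pos]) if pos.any() else 0.0
        spec = np.mean(~pred[neg]) if neg.any() else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_t = j, float(t)
    return best_t
```

Applied to well-separated scores, the rule lands at the smallest threshold that still excludes all benign cases, matching the high-sensitivity, high-specificity operating point reported in the abstract.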
50
Implementing Precision Medicine and Artificial Intelligence in Plastic Surgery: Concepts and Future Prospects. PLASTIC AND RECONSTRUCTIVE SURGERY-GLOBAL OPEN 2019; 7:e2113. [PMID: 31044104 PMCID: PMC6467615 DOI: 10.1097/gox.0000000000002113] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2018] [Accepted: 11/28/2018] [Indexed: 12/17/2022]
Abstract
Precision medicine, or the individualization of evidence-based medicine, is forthcoming. As surgeons, we must be prepared for the integration of patient and system factors. Plastic surgeons regard themselves as innovators and early adopters. As such, we need our adaptability now more than ever to implement digital advancements and precision medicine into our practices. The integration of artificial intelligence (AI) technology and the capture of big data techniques should foster the next great leaps in medicine and surgery, allowing us to capture the detailed minutiae of precision medicine. The algorithmic process of artificial neural networks will guide large-scale analysis of data, including features such as pattern recognition and rapid quantification, to organize and distribute data to surgeons seamlessly. This vast digital collection of information, commonly termed “big data,” is only one potential application of AI. By incorporating big data, the cognitive abilities of a surgeon can be complemented by the computer to improve patient-centered care. Furthermore, the use of AI will provide individual patients with increased access to the broadening world of precision medicine. Therefore, plastic surgeons must learn how to use AI within the contexts of our practices to keep up with an evolving field in medicine. Although rudimentary in its practice, we present a glimpse of the potential applications of AI in plastic surgery to incorporate the practice of precision medicine into the care that we deliver.