1. Lee SE, Hong H, Kim EK. Positive Predictive Values of Abnormality Scores From a Commercial Artificial Intelligence-Based Computer-Aided Diagnosis for Mammography. Korean J Radiol 2024; 25:343-350. PMID: 38528692; PMCID: PMC10973732; DOI: 10.3348/kjr.2023.0907.
Abstract
OBJECTIVE Artificial intelligence-based computer-aided diagnosis (AI-CAD) is increasingly used in mammography. While the continuous scores of AI-CAD have been related to malignancy risk, the understanding of how to interpret and apply these scores remains limited. We investigated the positive predictive values (PPVs) of the abnormality scores generated by a deep learning-based commercial AI-CAD system and analyzed them in relation to clinical and radiological findings. MATERIALS AND METHODS From March 2020 to May 2022, 656 breasts from 599 women (mean age 52.6 ± 11.5 years, including 0.6% [4/599] high-risk women) who underwent mammography and received positive AI-CAD results (Lunit Insight MMG, abnormality score ≥ 10) were retrospectively included in this study. Univariable and multivariable analyses were performed to evaluate the associations between the AI-CAD abnormality scores and clinical and radiological factors. The breasts were subdivided according to the abnormality scores into groups 1 (10-49), 2 (50-69), 3 (70-89), and 4 (90-100) using the optimal binning method. The PPVs were calculated for all breasts and subgroups. RESULTS Diagnostic indications and positive imaging findings by radiologists were associated with higher abnormality scores in the multivariable regression analysis. The overall PPV of AI-CAD was 32.5% (213/656) for all breasts, including 213 breast cancers, 129 breasts with benign biopsy results, and 314 breasts with benign outcomes in the follow-up or diagnostic studies. In the screening mammography subgroup, the PPVs were 18.6% (58/312) overall and 5.1% (12/235), 29.0% (9/31), 57.9% (11/19), and 96.3% (26/27) for score groups 1, 2, 3, and 4, respectively. The PPVs were significantly higher in women with diagnostic indications (45.1% [155/344]), palpability (51.9% [149/287]), fatty breasts (61.2% [60/98]), and certain imaging findings (masses with or without calcifications and distortion). 
CONCLUSION PPV increased with increasing AI-CAD abnormality scores. The PPVs of AI-CAD satisfied the acceptable PPV range according to the Breast Imaging Reporting and Data System (BI-RADS) for screening mammography and were higher for diagnostic mammography.
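The group-wise PPVs quoted above are simple proportions (cancers divided by AI-positive breasts in each score bin); a minimal sketch of the calculation, using the screening-subgroup counts reported in the abstract:

```python
# PPV per AI-CAD abnormality-score group: PPV = cancers / all AI-positive breasts.
# Counts are the screening-mammography subgroup figures quoted in the abstract.
score_groups = {
    "1 (10-49)":  (12, 235),   # (cancers, AI-positive breasts)
    "2 (50-69)":  (9, 31),
    "3 (70-89)":  (11, 19),
    "4 (90-100)": (26, 27),
}

def ppv(cancers: int, positives: int) -> float:
    """Positive predictive value as a percentage."""
    return 100.0 * cancers / positives

for group, (tp, n) in score_groups.items():
    print(f"group {group}: PPV = {ppv(tp, n):.1f}%")

overall_tp = sum(tp for tp, _ in score_groups.values())   # 58
overall_n = sum(n for _, n in score_groups.values())      # 312
print(f"overall screening PPV = {ppv(overall_tp, overall_n):.1f}%")  # 18.6%
```

The binned counts reproduce the abstract's overall screening PPV of 18.6% (58/312) exactly.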
Affiliation(s)
- Si Eun Lee
- Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Republic of Korea
- Hanpyo Hong
- Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Republic of Korea
- Eun-Kyung Kim
- Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Republic of Korea
2. Sajithkumar A, Thomas J, Saji AM, Ali F, E K HH, Adampulan HAG, Sarathchand S. Artificial Intelligence in pathology: current applications, limitations, and future directions. Ir J Med Sci 2024; 193:1117-1121. PMID: 37542634; DOI: 10.1007/s11845-023-03479-3.
Abstract
PURPOSE Given AI's recent success in computer vision applications, the majority of pathologists anticipate that it will be able to assist them with a variety of digital pathology activities. Massive improvements in deep learning have enabled a synergy between artificial intelligence (AI) and digital pathology, enabling image-based diagnosis. AI-based solutions are being developed to eliminate errors and save pathologists time. AIMS In this paper, we discuss the components that underpin the use of AI in pathology, its use in the medical profession, the obstacles and constraints it encounters, and the future possibilities of AI in the medical field. CONCLUSIONS Based on these factors, we elaborate upon the use of AI in medical pathology and provide recommendations for its successful implementation in this field.
Affiliation(s)
- Akhil Sajithkumar
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India
- Jubin Thomas
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India
- Ajish Meprathumalil Saji
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India
- Fousiya Ali
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India
- Haneena Hasin E K
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India
- Hannan Abdul Gafoor Adampulan
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India
- Swathy Sarathchand
- Sree Narayana Institute of Medical Sciences, Chalakka - Kuthiathode Rd, North Kuthiathode, Kunnukara, Kerala, 683594, India
3. Ciet P, Eade C, Ho ML, Laborie LB, Mahomed N, Naidoo J, Pace E, Segal B, Toso S, Tschauner S, Vamyanmane DK, Wagner MW, Shelmerdine SC. The unintended consequences of artificial intelligence in paediatric radiology. Pediatr Radiol 2024; 54:585-593. PMID: 37665368; DOI: 10.1007/s00247-023-05746-y.
Abstract
Over the past decade, there has been a dramatic rise in interest in the application of artificial intelligence (AI) in radiology. Originally only 'narrow' AI tasks were possible; however, the increasing availability of data, combined with easy access to powerful computer processing, now allows us to generate complex and nuanced prediction models and elaborate solutions for healthcare. Nevertheless, these AI models are not without their failings, and sometimes the intended use of these solutions may not lead to predictable impacts for patients, society or those working within the healthcare profession. In this article, we provide an overview of the latest opinions regarding AI ethics, bias, limitations, challenges and considerations that we should all contemplate in this exciting and expanding field, with special attention to how this applies to the unique aspects of a paediatric population. By embracing AI technology and fostering a multidisciplinary approach, it is hoped that we can harness the power AI brings whilst minimising harm and ensuring a beneficial impact on radiology practice.
Affiliation(s)
- Pierluigi Ciet
- Department of Radiology and Nuclear Medicine, Erasmus MC - Sophia's Children's Hospital, Rotterdam, The Netherlands
- Department of Medical Sciences, University of Cagliari, Cagliari, Italy
- Mai-Lan Ho
- University of Missouri, Columbia, MO, USA
- Lene Bjerke Laborie
- Department of Radiology, Section for Paediatrics, Haukeland University Hospital, Bergen, Norway
- Department of Clinical Medicine, University of Bergen, Bergen, Norway
- Nasreen Mahomed
- Department of Radiology, University of Witwatersrand, Johannesburg, South Africa
- Jaishree Naidoo
- Paediatric Diagnostic Imaging, Dr J Naidoo Inc., Johannesburg, South Africa
- Envisionit Deep AI Ltd, Coveham House, Downside Bridge Road, Cobham, UK
- Erika Pace
- Department of Diagnostic Radiology, The Royal Marsden NHS Foundation Trust, London, UK
- Bradley Segal
- Department of Radiology, University of Witwatersrand, Johannesburg, South Africa
- Seema Toso
- Pediatric Radiology, Children's Hospital, University Hospitals of Geneva, Geneva, Switzerland
- Sebastian Tschauner
- Division of Paediatric Radiology, Department of Radiology, Medical University of Graz, Graz, Austria
- Dhananjaya K Vamyanmane
- Department of Pediatric Radiology, Indira Gandhi Institute of Child Health, Bangalore, India
- Matthias W Wagner
- Department of Diagnostic Imaging, Division of Neuroradiology, The Hospital for Sick Children, Toronto, Canada
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
- Department of Neuroradiology, University Hospital Augsburg, Augsburg, Germany
- Susan C Shelmerdine
- Department of Clinical Radiology, Great Ormond Street Hospital for Children NHS Foundation Trust, Great Ormond Street, London, WC1H 3JH, UK
- Great Ormond Street Hospital for Children, UCL Great Ormond Street Institute of Child Health, London, UK
- NIHR Great Ormond Street Hospital Biomedical Research Centre, 30 Guilford Street, Bloomsbury, London, UK
- Department of Clinical Radiology, St George's Hospital, London, UK
4. Al Muhaisen S, Safi O, Ulayan A, Aljawamis S, Fakhoury M, Baydoun H, Abuquteish D. Artificial Intelligence-Powered Mammography: Navigating the Landscape of Deep Learning for Breast Cancer Detection. Cureus 2024; 16:e56945. PMID: 38665752; PMCID: PMC11044525; DOI: 10.7759/cureus.56945.
Abstract
Worldwide, breast cancer (BC) is one of the most commonly diagnosed malignancies in women. Early detection is key to improving survival rates and health outcomes. This literature review focuses on how artificial intelligence (AI), especially deep learning (DL), can enhance the ability of mammography, a key tool in BC detection, to yield more accurate results. Artificial intelligence has shown promise in reducing diagnostic errors and increasing the chances of early cancer detection. Nevertheless, significant challenges exist, including the requirement for large amounts of high-quality data and concerns over data privacy. Despite these hurdles, AI and DL are advancing the field of radiology, offering better ways to diagnose, detect, and treat diseases. The U.S. Food and Drug Administration (FDA) has approved several AI diagnostic tools. Yet, the full potential of these technologies, especially for more advanced screening methods like digital breast tomosynthesis (DBT), depends on further clinical studies and the development of larger databases. In summary, this review highlights the exciting potential of AI in BC screening and calls for more research and validation to fully harness the power of AI in clinical practice, ensuring that these technologies can help save lives by improving diagnostic accuracy and efficiency.
Affiliation(s)
- Omar Safi
- Medicine, Faculty of Medicine, The Hashemite University, Zarqa, JOR
- Ahmad Ulayan
- Medicine, Faculty of Medicine, The Hashemite University, Zarqa, JOR
- Sara Aljawamis
- Medicine, Faculty of Medicine, The Hashemite University, Zarqa, JOR
- Maryam Fakhoury
- Medicine, Faculty of Medicine, The Hashemite University, Zarqa, JOR
- Haneen Baydoun
- Diagnostic Radiology, King Hussein Cancer Center, Amman, JOR
- Dua Abuquteish
- Microbiology, Pathology and Forensic Medicine, Faculty of Medicine, The Hashemite University, Zarqa, JOR
- Pathology and Laboratory Medicine, King Hussein Cancer Center, Amman, JOR
5. Abdul NS, Shivakumar GC, Sangappa SB, Di Blasio M, Crimi S, Cicciù M, Minervini G. Applications of artificial intelligence in the field of oral and maxillofacial pathology: a systematic review and meta-analysis. BMC Oral Health 2024; 24:122. PMID: 38263027; PMCID: PMC10804575; DOI: 10.1186/s12903-023-03533-7.
Abstract
BACKGROUND Because AI algorithms can analyze patient data, medical records, and imaging results to suggest treatment plans and predict outcomes, they have the potential to support pathologists and clinicians in the diagnosis and treatment of oral and maxillofacial pathologies, just as in many other areas in which AI is being used. The goal of the current study was to examine the trends being investigated in the area of oral and maxillofacial pathology where AI has been involved in helping practitioners. METHODS We started by defining the important terms in our investigation's subject matter. Relevant databases such as PubMed, Scopus, and Web of Science were then searched using keywords and synonyms for each concept, such as "machine learning," "diagnosis," "treatment planning," "image analysis," "predictive modelling," and "patient monitoring." Google Scholar was also used to identify additional papers and sources. RESULTS The majority of the 9 included studies concerned how AI can be utilized to diagnose malignant tumors of the oral cavity. AI was especially helpful in creating prediction models that aided pathologists and clinicians in foreseeing the development of oral and maxillofacial pathology in specific patients. Additionally, predictive models accurately identified patients at high risk of developing oral cancer, as well as the likelihood of the disease returning after treatment. CONCLUSIONS In the field of oral and maxillofacial pathology, AI has the potential to enhance diagnostic precision, personalize care, and ultimately improve patient outcomes. The development and application of AI in healthcare, however, necessitates careful consideration of ethical, legal, and regulatory challenges. Additionally, because AI is still a relatively new technology, caution must be taken when applying it in this field.
Affiliation(s)
- Nishath Sayed Abdul
- Department of OMFS & Diagnostic Sciences, College of Dentistry, Riyadh Elm University, Riyadh, Saudi Arabia
- Ganiga Channaiah Shivakumar
- Department of Oral Medicine and Radiology, People's College of Dental Sciences and Research Centre, People's University, Bhopal, 462037, India
- Sunila Bukanakere Sangappa
- Department of Prosthodontics and Crown & Bridge, JSS Dental College and Hospital, JSS Academy of Higher Education and Research, Mysuru, Karnataka, India
- Marco Di Blasio
- Department of Medicine and Surgery, University Center of Dentistry, University of Parma, 43126, Parma, Italy
- Salvatore Crimi
- Department of Surgical and Biomedical Sciences, Catania University, 95123, Catania, CT, Italy
- Marco Cicciù
- Department of Surgical and Biomedical Sciences, Catania University, 95123, Catania, CT, Italy
- Giuseppe Minervini
- Saveetha Dental College & Hospitals, Saveetha Institute of Medical & Technical Sciences, Saveetha University, Chennai, India
- Multidisciplinary Department of Medical-Surgical and Odontostomatological Specialties, University of Campania "Luigi Vanvitelli", Naples, Italy
6. Rokhshad R, Mohammad-Rahimi H, Price JB, Shoorgashti R, Abbasiparashkouh Z, Esmaeili M, Sarfaraz B, Rokhshad A, Motamedian SR, Soltani P, Schwendicke F. Artificial intelligence for classification and detection of oral mucosa lesions on photographs: a systematic review and meta-analysis. Clin Oral Investig 2024; 28:88. PMID: 38217733; DOI: 10.1007/s00784-023-05475-4.
Abstract
OBJECTIVE This study aimed to review and synthesize studies using artificial intelligence (AI) for classifying, detecting, or segmenting oral mucosal lesions on photographs. MATERIALS AND METHODS Inclusion criteria were (1) studies employing AI to (2) classify, detect, or segment oral mucosal lesions, (3) on oral photographs of human subjects. Included studies were assessed for risk of bias using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. PubMed, Scopus, Embase, Web of Science, IEEE, arXiv, medRxiv, and grey literature (Google Scholar) were searched until June 2023, without language limitation. RESULTS After the initial search, 36 eligible studies (from 8734 identified records) were included. Based on QUADAS-2, only 7% of studies were at low risk of bias for all domains. Studies employed different AI models and reported a wide range of outcomes and metrics. The accuracy of AI for detecting oral mucosal lesions ranged from 74 to 100%, while that of clinicians unaided by AI ranged from 61 to 98%. The pooled diagnostic odds ratio for studies that evaluated AI for diagnosing or discriminating potentially malignant lesions was 155 (95% confidence interval 23-1019), while that for cancerous lesions was 114 (59-221). CONCLUSIONS AI may assist in oral mucosal lesion screening, although the expected accuracy gains or further health benefits remain unclear so far. CLINICAL RELEVANCE Artificial intelligence may assist oral mucosal lesion screening and foster more targeted testing and referral, for example in the hands of non-specialist providers. So far, it remains unclear whether accuracy gains compared with specialist assessment can be realized.
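The pooled diagnostic odds ratio (DOR) quoted above summarizes a 2x2 test-performance table; a minimal sketch of how a single study's DOR and its 95% confidence interval are computed (the counts below are hypothetical, not taken from the review, which pooled DORs across studies by meta-analysis):

```python
import math

def diagnostic_odds_ratio(tp, fp, fn, tn):
    """Diagnostic odds ratio with a 95% CI on the log scale.

    DOR = (TP/FN) / (FP/TN); a 0.5 (Haldane) continuity correction is
    applied when any cell is zero, as is common in meta-analysis.
    """
    if 0 in (tp, fp, fn, tn):
        tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))
    dor = (tp * tn) / (fp * fn)
    se_log = math.sqrt(1/tp + 1/fp + 1/fn + 1/tn)
    lo = math.exp(math.log(dor) - 1.96 * se_log)
    hi = math.exp(math.log(dor) + 1.96 * se_log)
    return dor, (lo, hi)

# Hypothetical 2x2 counts for one study (not data from the review):
dor, (lo, hi) = diagnostic_odds_ratio(tp=45, fp=5, fn=5, tn=45)
print(f"DOR = {dor:.0f} (95% CI {lo:.0f}-{hi:.0f})")
```

The wide interval reported in the review (23-1019 around a point estimate of 155) reflects how quickly the log-scale standard error grows when cell counts are small.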
Affiliation(s)
- Rata Rokhshad
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI On Health, Berlin, Germany
- Hossein Mohammad-Rahimi
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI On Health, Berlin, Germany
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Daneshjoo Blvd, Evin, Shahid Chamran Highway, Tehran, Postal Code: 1983963113, Iran
- Jeffery B Price
- Department of Oncology and Diagnostic Sciences, University of Maryland School of Dentistry, 650 W Baltimore St, Baltimore, MD, 21201, USA
- Reyhaneh Shoorgashti
- Faculty of Dentistry, Tehran Medical Sciences, Islamic Azad University, 9th Neyestan, Pasdaran, Tehran, Iran
- Mahdieh Esmaeili
- Faculty of Dentistry, Tehran Medical Sciences, Islamic Azad University, 9th Neyestan, Pasdaran, Tehran, Iran
- Bita Sarfaraz
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Daneshjoo Blvd, Evin, Shahid Chamran Highway, Tehran, Postal Code: 1983963113, Iran
- Arad Rokhshad
- Faculty of Dentistry, Tehran Medical Sciences, Islamic Azad University, 9th Neyestan, Pasdaran, Tehran, Iran
- Saeed Reza Motamedian
- Dentofacial Deformities Research Center, Research Institute of Dental Sciences & Department of Orthodontics, School of Dentistry, Shahid Beheshti University of Medical Sciences, Daneshjoo Blvd, Evin, Shahid Chamran Highway, Tehran, Postal Code: 1983963113, Iran
- Parisa Soltani
- Department of Oral and Maxillofacial Radiology, Dental Implants Research Center, Dental Research Institute, School of Dentistry, Isfahan University of Medical Sciences, Salamat Blv, Isfahan Dental School, Isfahan, Iran
- Department of Neurosciences, Reproductive and Odontostomatological Sciences, University of Naples Federico II, Naples, Italy
- Falk Schwendicke
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI On Health, Berlin, Germany
- Department of Oral Diagnostics, Digital Health and Health Services Research, Charité - Universitätsmedizin Berlin, Charitépl. 1, 10117, Berlin, Germany
7. Melnyk O, Ismail A, Ghorashi NS, Heekin M, Javan R. Generative Artificial Intelligence Terminology: A Primer for Clinicians and Medical Researchers. Cureus 2023; 15:e49890. PMID: 38174178; PMCID: PMC10762565; DOI: 10.7759/cureus.49890.
Abstract
Generative artificial intelligence (AI) is rapidly transforming the medical field, as advanced tools powered by large language models (LLMs) make their way into clinical practice, research, and education. Chatbots, which can generate human-like responses, have gained attention for their potential applications. Therefore, familiarity with LLMs and other promising generative AI tools is crucial to harness their potential safely and effectively. As these AI-based technologies continue to evolve, medical professionals must develop a strong understanding of AI terminologies and concepts, particularly generative AI, to effectively tackle real-world challenges and create solutions. This knowledge will enable healthcare professionals to utilize AI-driven innovations for improved patient care and increased productivity in the future. In this brief technical report, we explore 20 of the most relevant terms associated with the underlying technology behind LLMs and generative AI as they relate to the medical field, and provide examples of how these topics apply to healthcare to aid understanding.
Affiliation(s)
- Oleksiy Melnyk
- Department of Radiology, George Washington University School of Medicine and Health Sciences, Washington D.C., USA
- Ahmed Ismail
- Department of Radiology, George Washington University School of Medicine and Health Sciences, Washington D.C., USA
- Nima S Ghorashi
- Department of Radiology, George Washington University School of Medicine and Health Sciences, Washington D.C., USA
- Mary Heekin
- Department of Radiology, George Washington University School of Medicine and Health Sciences, Washington D.C., USA
- Ramin Javan
- Department of Radiology, George Washington University School of Medicine and Health Sciences, Washington D.C., USA
8. Amasya H, Alkhader M, Serindere G, Futyma-Gąbka K, Aktuna Belgin C, Gusarev M, Ezhov M, Różyło-Kalinowska I, Önder M, Sanders A, Costa ALF, de Castro Lopes SLP, Orhan K. Evaluation of a Decision Support System Developed with Deep Learning Approach for Detecting Dental Caries with Cone-Beam Computed Tomography Imaging. Diagnostics (Basel) 2023; 13:3471. PMID: 37998607; PMCID: PMC10669958; DOI: 10.3390/diagnostics13223471.
Abstract
This study aims to investigate the effect of using an artificial intelligence (AI) system (Diagnocat, Inc., San Francisco, CA, USA) for caries detection by comparing cone-beam computed tomography (CBCT) evaluation results with and without the software. Five hundred CBCT volumes were scored by three dentomaxillofacial radiologists for the presence of caries, separately on a five-point confidence scale, first without and then with the aid of the AI system. After visual evaluation, the deep convolutional neural network (CNN) model generated a radiological report and the observers scored again using the AI interface. The ground truth was determined by a hybrid approach. Intra- and inter-observer agreement was evaluated with sensitivity, specificity, accuracy, and kappa statistics. A total of 6008 surfaces were determined as 'presence of caries' and 13,928 surfaces as 'absence of caries' for the ground truth. The areas under the ROC curve for observers 1, 2, and 3 were 0.855/0.920, 0.863/0.917, and 0.747/0.903, respectively (unaided/aided). Fleiss kappa coefficients increased from 0.325 to 0.468, and the best accuracy (0.939) was achieved with the aided results. The radiographic evaluations performed with the aid of the AI system were more consistent and accurate than unaided evaluations in the detection of dental caries on CBCT images.
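The per-observer areas under the ROC curve quoted above measure how well the five-point confidence scores separate carious from sound surfaces; a minimal pure-Python sketch of the rank-based (Mann-Whitney) AUC, on made-up scores rather than the study's data:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    labels: 1 for surfaces with caries (ground truth), 0 otherwise.
    scores: observer confidence ratings (e.g. a 5-point scale); ties
    are handled with average ranks.
    """
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of tied scores starting at i.
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of 1-based ranks i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    pos = [r for r, y in zip(ranks, labels) if y == 1]
    n_pos, n_neg = len(pos), len(labels) - len(pos)
    return (sum(pos) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Made-up example: 5-point confidence scores for 8 surfaces.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [5, 4, 4, 2, 3, 2, 1, 1]
print(f"AUC = {auc(labels, scores):.3f}")  # AUC = 0.906
```

An AUC of 1.0 means every carious surface received a higher confidence score than every sound one; 0.5 is chance-level discrimination.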
Affiliation(s)
- Hakan Amasya
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Istanbul University-Cerrahpaşa, Istanbul 34320, Türkiye
- CAST (Cerrahpasa Research, Simulation and Design Laboratory), Istanbul University-Cerrahpaşa, Istanbul 34320, Türkiye
- Health Biotechnology Joint Research and Application Center of Excellence, Istanbul 34220, Türkiye
- Mustafa Alkhader
- Department of Oral Medicine and Oral Surgery, Faculty of Dentistry, Jordan University of Science and Technology, Irbid 22110, Jordan
- Gözde Serindere
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Mustafa Kemal University, Hatay 31060, Türkiye
- Karolina Futyma-Gąbka
- Department of Dental and Maxillofacial Radiodiagnostics, Medical University of Lublin, 20-093 Lublin, Poland
- Ceren Aktuna Belgin
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Mustafa Kemal University, Hatay 31060, Türkiye
- Maxim Gusarev
- Diagnocat, Inc., San Francisco, CA 94102, USA
- Matvey Ezhov
- Diagnocat, Inc., San Francisco, CA 94102, USA
- Ingrid Różyło-Kalinowska
- Department of Dental and Maxillofacial Radiodiagnostics, Medical University of Lublin, 20-093 Lublin, Poland
- Merve Önder
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara 0600, Türkiye
- Alex Sanders
- Diagnocat, Inc., San Francisco, CA 94102, USA
- Andre Luiz Ferreira Costa
- Postgraduate Program in Dentistry, Cruzeiro do Sul University (UNICSUL), São Paulo 08060-070, SP, Brazil
- Sérgio Lúcio Pereira de Castro Lopes
- Science and Technology Institute, Department of Diagnosis and Surgery, São Paulo State University (UNESP), São José dos Campos 01049-010, SP, Brazil
- Kaan Orhan
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara 0600, Türkiye
- Ankara University Medical Design Application and Research Center (MEDITAM), Ankara 06560, Türkiye
- Department of Oral Diagnostics, Faculty of Dentistry, Semmelweis University, 1088 Budapest, Hungary
9. Zhong NN, Wang HQ, Huang XY, Li ZZ, Cao LM, Huo FY, Liu B, Bu LL. Enhancing head and neck tumor management with artificial intelligence: Integration and perspectives. Semin Cancer Biol 2023; 95:52-74. PMID: 37473825; DOI: 10.1016/j.semcancer.2023.07.002.
Abstract
Head and neck tumors (HNTs) constitute a multifaceted ensemble of pathologies that primarily involve regions such as the oral cavity, pharynx, and nasal cavity. The intricate anatomical structure of these regions poses considerable challenges to efficacious treatment strategies. Despite the availability of myriad treatment modalities, the overall therapeutic efficacy for HNTs remains subdued. In recent years, the deployment of artificial intelligence (AI) in healthcare practices has garnered noteworthy attention. AI modalities, inclusive of machine learning (ML), neural networks (NNs), and deep learning (DL), when amalgamated into the holistic management of HNTs, promise to augment the precision, safety, and efficacy of treatment regimens. The integration of AI within HNT management is intricately intertwined with domains such as medical imaging, bioinformatics, and medical robotics. This article intends to scrutinize the cutting-edge advancements and prospective applications of AI in the realm of HNTs, elucidating AI's indispensable role in prevention, diagnosis, treatment, prognostication, research, and inter-sectoral integration. The overarching objective is to stimulate scholarly discourse and invigorate insights among medical practitioners and researchers to propel further exploration, thereby facilitating superior therapeutic alternatives for patients.
Affiliation(s)
- Nian-Nian Zhong
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Han-Qi Wang
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Xin-Yue Huang
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Zi-Zhan Li
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Lei-Ming Cao
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Fang-Yi Huo
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Bing Liu
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China; Department of Oral & Maxillofacial - Head Neck Oncology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
- Lin-Lin Bu
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China; Department of Oral & Maxillofacial - Head Neck Oncology, School & Hospital of Stomatology, Wuhan University, Wuhan 430079, China
10. Singh Y, Kelm ZS, Faghani S, Erickson D, Yalon T, Bancos I, Erickson BJ. Deep learning approach for differentiating indeterminate adrenal masses using CT imaging. Abdom Radiol (NY) 2023; 48:3189-3194. PMID: 37369921; DOI: 10.1007/s00261-023-03988-w.
Abstract
PURPOSE Distinguishing stage 1-2 adrenocortical carcinoma (ACC) from large, lipid-poor adrenal adenoma (LPAA) on imaging is challenging because their imaging characteristics overlap. This study investigated the ability of deep learning to distinguish ACC from LPAA on single time-point CT images. METHODS Retrospective cohort study from 1994 to 2022. Patients with adrenal masses were included if they had adequate CT studies and histology as the reference standard (adrenal biopsy and/or adrenalectomy); four additional patients with LPAA confirmed by stability or regression on follow-up imaging were also included. Forty-eight subjects with pathology-proven, stage 1-2 ACC and 43 subjects with adrenal adenomas >3 cm in size and a mean non-contrast central CT attenuation >20 Hounsfield units were analyzed. Annotated single time-point contrast-enhanced CT images of these adrenal masses served as input to a 3D DenseNet121 model classifying each mass as ACC or LPAA, with five-fold cross-validation. For each fold, two checkpoints were reported: the highest accuracy with the highest sensitivity (accuracy-focused) and the highest sensitivity with the highest accuracy (sensitivity-focused). RESULTS The sensitivity-focused model achieved a mean accuracy of 87.2% and a mean sensitivity of 100%; the accuracy-focused model achieved a mean accuracy of 91% and a mean sensitivity of 96%. CONCLUSION Deep learning shows promising results in distinguishing ACC from large LPAA on single time-point CT images. Multicenter and external validation are needed before wide adoption in clinical practice.
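As a worked sketch of the per-fold checkpoint-selection rule described above (not the authors' code; the function name and metric values are invented for illustration), the two reported checkpoints can be chosen from per-epoch validation metrics like this:

```python
def select_checkpoints(history):
    """Pick two checkpoints from per-epoch (accuracy, sensitivity) pairs:
    accuracy-focused  = best accuracy, ties broken by sensitivity;
    sensitivity-focused = best sensitivity, ties broken by accuracy."""
    acc_focused = max(range(len(history)), key=lambda i: (history[i][0], history[i][1]))
    sen_focused = max(range(len(history)), key=lambda i: (history[i][1], history[i][0]))
    return acc_focused, sen_focused

# Hypothetical validation history for one fold: (accuracy, sensitivity) per epoch.
fold_history = [(0.80, 0.90), (0.91, 0.92), (0.87, 1.00)]
acc_idx, sen_idx = select_checkpoints(fold_history)
```

Repeating this per fold and averaging each checkpoint's test metrics yields the two mean accuracy/sensitivity pairs reported in the abstract.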
Affiliation(s)
- Yashbir Singh
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Zachary S Kelm
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Dana Erickson
- Division of Endocrinology, Metabolism and Nutrition, Mayo Clinic, Rochester, MN, USA
- Tal Yalon
- Department of General Surgery, Mayo Clinic, La Crosse, WI, USA
- Irina Bancos
- Division of Endocrinology, Metabolism and Nutrition, Mayo Clinic, Rochester, MN, USA
11
Aldughayfiq B, Ashfaq F, Jhanjhi NZ, Humayun M. A Deep Learning Approach for Atrial Fibrillation Classification Using Multi-Feature Time Series Data from ECG and PPG. Diagnostics (Basel) 2023; 13:2442. PMID: 37510187; PMCID: PMC10377944; DOI: 10.3390/diagnostics13142442.
Abstract
Atrial fibrillation (AF) is a prevalent cardiac arrhythmia that poses significant health risks to patients. Non-invasive methods for AF detection, such as electrocardiography (ECG) and photoplethysmography (PPG), have gained attention for their accessibility and ease of use. However, ECG-based AF detection has known limitations, and the significance of PPG signals in this context is increasingly recognized. Taking both the limitations of ECG and the untapped potential of PPG into account, this work classifies AF versus non-AF from PPG time-series data using deep learning. We employed a hybrid deep neural network comprising a 1D CNN and a BiLSTM for the AF classification task, addressing the under-researched application of deep learning to transmissive PPG signals. Our approach integrated ECG and PPG signals as multi-feature time-series data and trained deep learning models for AF classification. The hybrid 1D CNN-BiLSTM model achieved an accuracy of 95% on test data in identifying atrial fibrillation, showing strong performance and reliable predictive capability. We also evaluated the model with additional metrics: precision was 0.88, indicating accurate identification of true positive AF cases; recall (sensitivity) was 0.85, reflecting the model's capacity to detect a high proportion of actual AF cases; and the F1 score, which combines precision and recall, was 0.84, highlighting the overall effectiveness of the model in classifying AF and non-AF cases.
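The precision, recall, and F1 metrics reported above follow the standard confusion-matrix definitions; a minimal sketch (the counts below are invented for illustration, not the paper's data):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard binary-classification metrics from confusion-matrix counts:
    precision = TP / (TP + FP), recall = TP / (TP + FN),
    F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts only.
p, r, f = precision_recall_f1(tp=85, fp=12, fn=15)
```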
Affiliation(s)
- Bader Aldughayfiq
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
- Farzeen Ashfaq
- School of Computer Science, SCS, Taylor's University, Subang Jaya 47500, Malaysia
- N Z Jhanjhi
- School of Computer Science, SCS, Taylor's University, Subang Jaya 47500, Malaysia
- Mamoona Humayun
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
12
Sharma S. Artificial intelligence for fracture diagnosis in orthopedic X-rays: current developments and future potential. SICOT J 2023; 9:21. PMID: 37409882; PMCID: PMC10324466; DOI: 10.1051/sicotj/2023018.
Abstract
The use of artificial intelligence (AI) in the interpretation of orthopedic X-rays has shown great potential to improve the accuracy and efficiency of fracture diagnosis. AI algorithms rely on large datasets of annotated images to learn to accurately classify and diagnose abnormalities. One way to improve AI interpretation of X-rays is to increase the size and quality of the training datasets and to incorporate more advanced machine learning techniques, such as deep reinforcement learning, into the algorithms. Another approach is to integrate AI algorithms with other imaging modalities, such as computed tomography (CT) and magnetic resonance imaging (MRI), to provide a more comprehensive and accurate diagnosis. Recent studies have shown that AI algorithms can accurately detect and classify fractures of the wrist and long bones on X-ray images, demonstrating the potential of AI to improve the accuracy and efficiency of fracture diagnosis. These findings suggest that AI could significantly improve patient outcomes in orthopedics.
Affiliation(s)
- Sanskrati Sharma
- Department of Orthopedics, Royal Preston Hospital, Sharoe Green Lane, Fulwood, Preston PR2 9HT, United Kingdom
13
Ahmed AA, Brychcy A, Abouzid M, Witt M, Kaczmarek E. Perception of Pathologists in Poland of Artificial Intelligence and Machine Learning in Medical Diagnosis-A Cross-Sectional Study. J Pers Med 2023; 13:962. PMID: 37373951; DOI: 10.3390/jpm13060962.
Abstract
BACKGROUND Over the past two decades, several artificial intelligence (AI) and machine learning (ML) models have been developed to assist in medical diagnosis, decision making, and the design of treatment protocols. The number of active pathologists in Poland is low, prolonging tumor patients' diagnostic and treatment journey, so applying AI and ML may aid this process. Our study therefore investigates Polish pathologists' knowledge of AI and ML methods in the clinical field; to our knowledge, no similar study has been conducted. METHODS We conducted a cross-sectional study targeting pathologists in Poland from June to July 2022. The questionnaire collected self-reported information on AI or ML knowledge, experience, specialization, personal views, and level of agreement with different aspects of AI and ML in medical diagnosis. Data were analyzed using IBM SPSS Statistics v.26, PQStat Software v.1.8.2.238, and RStudio Build 351. RESULTS Overall, 68 pathologists in Poland participated in our study. Their average age and years of experience were 38.92 ± 8.88 and 12.78 ± 9.48 years, respectively. Approximately 42% had used AI or ML methods, and these users reported significantly greater knowledge than those who had never used them (OR = 17.9, 95% CI = 3.57-89.79, p < 0.001). AI users also had higher odds of reporting satisfaction with the speed of AI in the medical diagnosis process (OR = 4.66, 95% CI = 1.05-20.78, p = 0.043). Finally, significant differences (p = 0.003) were observed regarding liability for legal issues arising from the use of AI and ML methods. CONCLUSION Most pathologists in this study did not use AI or ML models, highlighting the importance of raising awareness and of educational programs on applying AI and ML in medical diagnosis.
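The odds ratios quoted above come from 2×2 contingency tables; a minimal sketch of an odds ratio with a Wald-type 95% confidence interval (the counts below are invented for illustration, not the survey's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed with outcome, b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented counts: 20/10 AI users vs 5/25 non-users reporting good knowledge.
or_, lo, hi = odds_ratio_ci(20, 10, 5, 25)
```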
Affiliation(s)
- Alhassan Ali Ahmed
- Department of Bioinformatics and Computational Biology, Poznan University of Medical Sciences, 61-806 Poznan, Poland
- Doctoral School, Poznan University of Medical Sciences, 61-806 Poznan, Poland
- Agnieszka Brychcy
- Department of Clinical Patomorphology, Heliodor Swiecicki Clinical Hospital of the Poznan University of Medical Sciences, 61-806 Poznan, Poland
- Mohamed Abouzid
- Doctoral School, Poznan University of Medical Sciences, 61-806 Poznan, Poland
- Department of Physical Pharmacy and Pharmacokinetics, Poznan University of Medical Sciences, 60-806 Poznan, Poland
- Martin Witt
- Department of Anatomy, Rostock University Medical Centre, 18057 Rostock, Germany
- Department of Anatomy, Technische Universität Dresden, 01307 Dresden, Germany
- Elżbieta Kaczmarek
- Department of Bioinformatics and Computational Biology, Poznan University of Medical Sciences, 61-806 Poznan, Poland
14
Zhang XY, Wei Q, Wu GG, Tang Q, Pan XF, Chen GQ, Zhang D, Dietrich CF, Cui XW. Artificial intelligence-based ultrasound elastography for disease evaluation - a narrative review. Front Oncol 2023; 13:1197447. PMID: 37333814; PMCID: PMC10272784; DOI: 10.3389/fonc.2023.1197447.
Abstract
Ultrasound elastography (USE) provides information on tissue stiffness and elasticity complementary to conventional ultrasound imaging. It is noninvasive and radiation-free and has become a valuable tool for improving diagnostic performance alongside conventional ultrasound imaging. However, diagnostic accuracy can be reduced by high operator dependence and by intra- and inter-observer variability in radiologists' visual assessments. Artificial intelligence (AI) has great potential to perform automatic medical image analysis and thereby provide a more objective, accurate, and intelligent diagnosis. More recently, enhanced diagnostic performance of AI applied to USE has been demonstrated for various disease evaluations. This review gives clinical radiologists an overview of the basic concepts of USE and AI techniques, then introduces applications of AI in USE imaging for the liver, breast, thyroid, and other organs, covering lesion detection and segmentation, machine learning (ML)-assisted classification, and prognosis prediction. Existing challenges and future trends of AI in USE are also discussed.
Affiliation(s)
- Xian-Ya Zhang
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Qi Wei
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Ge-Ge Wu
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Qi Tang
- Department of Ultrasonography, The First Hospital of Changsha, Changsha, China
- Xiao-Fang Pan
- Health Medical Department, Dalian Municipal Central Hospital, Dalian, China
- Gong-Quan Chen
- Department of Medical Ultrasound, Minda Hospital of Hubei Minzu University, Enshi, China
- Di Zhang
- Department of Medical Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Xin-Wu Cui
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
15
Li Z, Koban KC, Schenck TL, Giunta RE, Li Q, Sun Y. Artificial Intelligence in Dermatology Image Analysis: Current Developments and Future Trends. J Clin Med 2022; 11:jcm11226826. PMID: 36431301; PMCID: PMC9693628; DOI: 10.3390/jcm11226826.
Abstract
BACKGROUND Thanks to the rapid development of computer-based systems and deep-learning algorithms, artificial intelligence (AI) has long been integrated into the healthcare field. AI is particularly helpful in image recognition, surgical assistance, and basic research. Given the unique nature of dermatology, AI-aided dermatological diagnosis based on image recognition has become a modern focus and future trend. KEY SCIENTIFIC CONCEPTS The use of 3D imaging systems allows clinicians to screen and label pigmented skin lesions and distributed disorders, providing an objective assessment and image documentation of lesion sites. Dermatoscopes combined with intelligent software help the dermatologist easily correlate each close-up image with the corresponding marked lesion in the 3D body map. In addition, AI in the field of prosthetics can assist in rehabilitation and help restore limb function after amputation in patients with skin tumors. AIM OF THE STUDY For the benefit of patients, dermatologists have an obligation to explore the opportunities, risks, and limitations of AI applications. This study focuses on the application of emerging AI in dermatology to aid clinical diagnosis and treatment, analyzes the current state of the field, and summarizes its future trends and prospects, helping dermatologists appreciate the impact of new technological innovations on traditional practice so that they can adopt AI-based medical approaches more quickly.
Affiliation(s)
- Zhouxiao Li
- Department of Plastic and Reconstructive Surgery, Shanghai 9th People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200023, China
- Division of Hand, Plastic and Aesthetic Surgery, University Hospital, LMU Munich, 80339 Munich, Germany
- Thilo Ludwig Schenck
- Division of Hand, Plastic and Aesthetic Surgery, University Hospital, LMU Munich, 80339 Munich, Germany
- Riccardo Enzo Giunta
- Division of Hand, Plastic and Aesthetic Surgery, University Hospital, LMU Munich, 80339 Munich, Germany
- Qingfeng Li
- Department of Plastic and Reconstructive Surgery, Shanghai 9th People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200023, China
- Correspondence: (Q.L.); (Y.S.)
- Yangbai Sun
- Department of Plastic and Reconstructive Surgery, Shanghai 9th People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200023, China
- Correspondence: (Q.L.); (Y.S.)
16
Depiction of breast cancers on digital mammograms by artificial intelligence-based computer-assisted diagnosis according to cancer characteristics. Eur Radiol 2022; 32:7400-7408. PMID: 35499564; DOI: 10.1007/s00330-022-08718-2.
Abstract
OBJECTIVE To evaluate how breast cancers are depicted by artificial intelligence-based computer-assisted diagnosis (AI-CAD) according to clinical, radiological, and pathological factors. MATERIALS AND METHODS From January 2017 to December 2017, 896 patients diagnosed with 930 breast cancers were enrolled in this retrospective study. A commercial AI-CAD was applied to digital mammograms and abnormality scores were obtained. We evaluated the abnormality score according to clinical, radiological, and pathological characteristics. False-negative results were defined as abnormality scores below 10. RESULTS The median abnormality score of the 930 breasts was 87.4 (range 0-99). The false-negative rate of AI-CAD was 19.4% (180/930). Cancers with an abnormality score above 90 showed a higher proportion of palpable lesions, BI-RADS 4c and 5 lesions, cancers presenting as masses with or without microcalcifications, and invasive cancers compared with low-scored cancers (all p < 0.001). False-negative cancers were more likely to occur in asymptomatic patients and extremely dense breasts, and to be diagnosed as occult breast cancers or DCIS, compared with detected cancers. CONCLUSION Breast cancers depicted with high abnormality scores by AI-CAD are associated with higher BI-RADS categories, invasive pathology, and higher cancer stage. KEY POINTS • High-scored cancers by AI-CAD included a high proportion of BI-RADS 4c and 5 lesions, masses with or without microcalcifications, and cancers with invasive pathology. • Among invasive cancers, cancers with higher T and N stage and HER2-enriched subtype were depicted with higher abnormality scores by AI-CAD. • Cancers missed by AI-CAD tended to be in asymptomatic patients and extremely dense breasts and to be diagnosed as occult breast cancers by radiologists.
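Under the definition above (a cancer counts as a false negative when its abnormality score falls below 10), the false-negative rate is a simple proportion over known cancers; a sketch with invented scores:

```python
def false_negative_rate(cancer_scores, threshold=10):
    """Fraction of known cancers whose AI-CAD abnormality score
    falls below the positivity threshold."""
    misses = sum(1 for s in cancer_scores if s < threshold)
    return misses / len(cancer_scores)

# Invented abnormality scores for five known cancers; two fall below 10.
fnr = false_negative_rate([5, 95, 87, 3, 60])
```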
17
Rouzrokh P, Khosravi B, Johnson QJ, Faghani S, Vera Garcia DV, Erickson BJ, Maradit Kremers H, Taunton MJ, Wyles CC. Applying Deep Learning to Establish a Total Hip Arthroplasty Radiography Registry: A Stepwise Approach. J Bone Joint Surg Am 2022; 104:1649-1658. PMID: 35866648; PMCID: PMC9617138; DOI: 10.2106/jbjs.21.01229.
Abstract
BACKGROUND Establishing imaging registries for large patient cohorts is challenging because manual labeling is tedious and relying solely on DICOM (Digital Imaging and Communications in Medicine) metadata can result in errors. We endeavored to establish an automated hip and pelvic radiography registry of total hip arthroplasty (THA) patients by utilizing deep-learning pipelines. The aims of the study were (1) to utilize these automated pipelines to identify all pelvic and hip radiographs with appropriate annotation of laterality and of the presence or absence of implants, and (2) to automatically measure acetabular component inclination and version on THA images. METHODS We retrospectively retrieved 846,988 hip and pelvic radiography DICOM files from 20,378 patients who underwent primary or revision THA performed at our institution from 2000 to 2020. File metadata were screened, followed by extraction of imaging data. Two deep-learning algorithms (an EfficientNetB3 classifier and a YOLOv5 object detector) were developed to automatically determine the radiographic appearance of all files. Additional deep-learning algorithms automatically measured the acetabular angles on anteroposterior pelvic and lateral hip radiographs. Algorithm performance was compared with that of human annotators on a random test sample of 5,000 radiographs. RESULTS The deep-learning algorithms enabled appropriate exclusion of 209,332 DICOM files (24.7%) as misclassified non-hip/pelvic radiographs or as having corrupted pixel data. The final registry was automatically curated and annotated in under 8 hours and included 168,551 anteroposterior pelvic, 176,890 anteroposterior hip, 174,637 lateral hip, and 117,578 oblique hip radiographs. The algorithms achieved 99.9% accuracy, 99.6% precision, 99.5% recall, and a 99.6% F1 score in determining radiographic appearance.
CONCLUSIONS We developed a highly accurate series of deep-learning algorithms to rapidly curate and annotate THA patient radiographs. This efficient pipeline can be utilized by other institutions or registries to construct radiography databases for patient care, longitudinal surveillance, and large-scale research. The stepwise approach to establishing a radiography registry can also serve as a workflow guide for other anatomic areas. LEVEL OF EVIDENCE Diagnostic Level IV. See Instructions for Authors for a complete description of levels of evidence.
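One geometric piece of the pipeline above, measuring the acetabular component's inclination, reduces to the angle between the cup's projected major axis and a pelvic reference line once landmarks are detected. A hedged sketch of that final step only (the landmark coordinates are assumed inputs; this is not the published method's code):

```python
import math

def inclination_deg(axis_a, axis_b, ref_a, ref_b):
    """Angle in degrees between the line axis_a-axis_b (e.g., the cup's
    projected major axis) and the line ref_a-ref_b (e.g., a horizontal
    pelvic reference), folded into [0, 90]."""
    ang = math.degrees(
        math.atan2(axis_b[1] - axis_a[1], axis_b[0] - axis_a[0])
        - math.atan2(ref_b[1] - ref_a[1], ref_b[0] - ref_a[0]))
    ang = abs(ang) % 180.0
    return min(ang, 180.0 - ang)

# A 45-degree cup axis measured against a horizontal reference line.
angle = inclination_deg((0, 0), (1, 1), (0, 0), (1, 0))
```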
Affiliation(s)
- Pouria Rouzrokh
- Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, Minnesota
- Bardia Khosravi
- Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, Minnesota
- Quinn J Johnson
- Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, Minnesota
- Mayo Clinic Alix School of Medicine, Mayo Clinic, Rochester, Minnesota
- Shahriar Faghani
- Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Diana V Vera Garcia
- Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, Minnesota
- Bradley J Erickson
- Radiology Informatics Laboratory, Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Hilal Maradit Kremers
- Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, Minnesota
- Department of Health Sciences Research, Mayo Clinic, Rochester, Minnesota
- Department of Orthopedic Surgery, Mayo Clinic, Rochester, Minnesota
- Michael J Taunton
- Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, Minnesota
- Department of Orthopedic Surgery, Mayo Clinic, Rochester, Minnesota
- Cody C Wyles
- Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Mayo Clinic, Rochester, Minnesota
- Department of Orthopedic Surgery, Mayo Clinic, Rochester, Minnesota
- Department of Clinical Anatomy, Mayo Clinic, Rochester, Minnesota
18
Koteswara Rao Chinnam S, Sistla V, Krishna Kishore Kolli V. Multimodal attention-gated cascaded U-Net model for automatic brain tumor detection and segmentation. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103907.
19
Wu H, Ye X, Jiang Y, Tian H, Yang K, Cui C, Shi S, Liu Y, Huang S, Chen J, Xu J, Dong F. A Comparative Study of Multiple Deep Learning Models Based on Multi-Input Resolution for Breast Ultrasound Images. Front Oncol 2022; 12:869421. PMID: 35875151; PMCID: PMC9302001; DOI: 10.3389/fonc.2022.869421.
Abstract
Purpose The purpose of this study was to explore the performance of different combinations of deep learning (DL) models (Xception, DenseNet121, MobileNet, ResNet50, and EfficientNetB0) and input image resolutions (REZs) (224 × 224, 320 × 320, and 448 × 448 pixels) for breast cancer diagnosis. Methods This multicenter retrospective study used gray-scale breast ultrasound images enrolled from two Chinese hospitals. The data were divided into training, validation, internal testing, and external testing sets. Three hundred images were randomly selected for the physician-AI comparison. The Wilcoxon test was used to compare the diagnostic error of physicians and models at the 0.05 and 0.10 significance levels. Specificity, sensitivity, accuracy, and area under the curve (AUC) were the primary evaluation metrics. Results A total of 13,684 images from 3,447 female patients were included. In the external test, the 224 and 320 REZs achieved the best performance with MobileNet and EfficientNetB0, respectively (AUC: 0.893 and 0.907), while the 448 REZ performed best with Xception, DenseNet121, and ResNet50 (AUC: 0.900, 0.883, and 0.871, respectively). In the physician-AI test set, the 320 REZ for EfficientNetB0 (AUC: 0.896, P < 0.1) outperformed senior physicians; the 224 REZ for MobileNet (AUC: 0.878, P < 0.1) and the 448 REZ for Xception (AUC: 0.895, P < 0.1) outperformed junior physicians; and the 448 REZ for DenseNet121 (AUC: 0.880, P < 0.05) and ResNet50 (AUC: 0.838, P < 0.05) outperformed only entry-level physicians. Conclusion On gray-scale breast ultrasound images, we identified the best DL model-resolution combination, which outperformed the physicians.
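AUC, the primary metric above, equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one; a minimal empirical sketch (the scores below are invented, not the study's outputs):

```python
def empirical_auc(pos_scores, neg_scores):
    """Mann-Whitney form of AUC: compare every positive score against
    every negative score, counting ties as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(pos_scores) * len(neg_scores))

# Invented model outputs for malignant vs benign images.
auc = empirical_auc([0.9, 0.8, 0.7], [0.2, 0.4, 0.7])
```

Library implementations (e.g., scikit-learn's `roc_auc_score`) compute the same quantity from ranks rather than all pairs.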
Affiliation(s)
- Huaiyu Wu
- Department of Ultrasound, First Clinical College of Jinan University, Second Clinical College of Jinan University, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen People’s Hospital, Shenzhen, China
- Xiuqin Ye
- Department of Ultrasound, First Clinical College of Jinan University, Second Clinical College of Jinan University, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen People’s Hospital, Shenzhen, China
- Yitao Jiang
- Research and Development Department, Microport Prophecy, Shanghai, China
- Research and Development Department, Illuminate Limited Liability Company, Shenzhen, China
- Hongtian Tian
- Department of Ultrasound, First Clinical College of Jinan University, Second Clinical College of Jinan University, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen People’s Hospital, Shenzhen, China
- Keen Yang
- Department of Ultrasound, First Clinical College of Jinan University, Second Clinical College of Jinan University, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen People’s Hospital, Shenzhen, China
- Chen Cui
- Research and Development Department, Microport Prophecy, Shanghai, China
- Research and Development Department, Illuminate Limited Liability Company, Shenzhen, China
- Siyuan Shi
- Research and Development Department, Microport Prophecy, Shanghai, China
- Research and Development Department, Illuminate Limited Liability Company, Shenzhen, China
- Yan Liu
- The Key Laboratory of Cardiovascular Remodeling and Function Research, Chinese Ministry of Education and Chinese Ministry of Health, and The State and Shandong Province Joint Key Laboratory of Translational Cardiovascular Medicine, Cheeloo College of Medicine, Shandong University, Qilu Hospital of Shandong University, Jinan, China
- Sijing Huang
- Department of Ultrasound, First Clinical College of Jinan University, Second Clinical College of Jinan University, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen People’s Hospital, Shenzhen, China
- Jing Chen
- Department of Ultrasound, First Clinical College of Jinan University, Second Clinical College of Jinan University, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen People’s Hospital, Shenzhen, China
- Jinfeng Xu
- Department of Ultrasound, First Clinical College of Jinan University, Second Clinical College of Jinan University, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen People’s Hospital, Shenzhen, China
- Correspondence: Jinfeng Xu; Fajin Dong
- Fajin Dong
- Department of Ultrasound, First Clinical College of Jinan University, Second Clinical College of Jinan University, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen People’s Hospital, Shenzhen, China
- Correspondence: Jinfeng Xu; Fajin Dong
20
Shreve JT, Khanani SA, Haddad TC. Artificial Intelligence in Oncology: Current Capabilities, Future Opportunities, and Ethical Considerations. Am Soc Clin Oncol Educ Book 2022; 42:1-10. PMID: 35687826; DOI: 10.1200/edbk_350652.
Abstract
The promise of highly personalized oncology care using artificial intelligence (AI) technologies has been forecasted since the emergence of the field. Cumulative advances across the science are bringing this promise to realization, including refinement of machine learning and deep learning algorithms; expansion in the depth and variety of databases, including multiomics; and the decreased cost of massively parallelized computational power. Examples of successful clinical applications of AI can be found throughout the cancer continuum and in multidisciplinary practice, with computer vision-assisted image analysis in particular having several U.S. Food and Drug Administration-approved uses. Techniques with emerging clinical utility include whole-blood multicancer detection from deep sequencing, virtual biopsies, natural language processing to infer health trajectories from medical notes, and advanced clinical decision support systems that combine genomics and clinomics. Substantial issues have delayed broad adoption: data transparency and interpretability suffer from AI's "black box" mechanism, and intrinsic bias against underrepresented persons limits the reproducibility of AI models and perpetuates health care disparities. Midfuture projections of AI maturation involve increasing a model's complexity by using multimodal data elements to better approximate an organic system. Far-future projections include living databases that accumulate all aspects of a person's health as discrete data elements, fueling highly convoluted modeling that can tailor treatment selection, dose determination, surveillance modality and schedule, and more. The field of AI has seen a historical dichotomy between its proponents and detractors. The successful development of recent applications, and continued investment in prospective validation that defines their impact on multilevel outcomes, has established a momentum of accelerated progress.
Affiliation(s)
- Tufia C Haddad
- Department of Oncology, Mayo Clinic, Rochester, MN
- Center for Digital Health, Mayo Clinic, Rochester, MN
21
Huang H, Wang FF, Luo S, Chen G, Tang G. Diagnostic performance of radiomics using machine learning algorithms to predict MGMT promoter methylation status in glioma patients: a meta-analysis. Diagn Interv Radiol 2021; 27:716-724. PMID: 34792025; DOI: 10.5152/dir.2021.21153.
Abstract
PURPOSE We aimed to assess the diagnostic performance of radiomics using machine learning algorithms to predict the methylation status of the O6-methylguanine-DNA methyltransferase (MGMT) promoter in glioma patients. METHODS A comprehensive literature search of PubMed, EMBASE, and Web of Science until 27 July 2021 was performed to identify eligible studies. Stata SE 15.0 and Meta-Disc 1.4 were used for data analysis. RESULTS A total of fifteen studies with 1663 patients were included: five studies with training and validation cohorts and ten with only training cohorts. The pooled sensitivity and specificity of machine learning for predicting MGMT promoter methylation in gliomas were 85% (95% CI 79%-90%) and 84% (95% CI 78%-88%) in the training cohort (n=15) and 84% (95% CI 70%-92%) and 78% (95% CI 63%-88%) in the validation cohort (n=5). The AUC was 0.91 (95% CI 0.88-0.93) in the training cohort and 0.88 (95% CI 0.85-0.91) in the validation cohort. The meta-regression demonstrated that magnetic resonance imaging sequences were related to heterogeneity. The sensitivity analysis showed that heterogeneity was reduced by excluding one study with the lowest diagnostic performance. CONCLUSION This meta-analysis demonstrated that machine learning is a promising, reliable and repeatable candidate method for predicting MGMT promoter methylation status in glioma and showed a higher performance than non-machine learning methods.
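Pooled sensitivity and specificity figures like those above are typically obtained by combining logit-transformed per-study proportions with inverse-variance weights. A minimal fixed-effect sketch in Python with hypothetical study values (the actual analysis used Stata SE 15.0 and Meta-Disc 1.4, which apply more elaborate random-effects models):

```python
import math

def pool_logit(props, ns):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale.

    props: per-study proportions (e.g. sensitivities); ns: per-study sample sizes.
    Returns the pooled proportion after back-transforming from logits."""
    logits, weights = [], []
    for p, n in zip(props, ns):
        p = min(max(p, 1e-6), 1 - 1e-6)  # guard against p = 0 or 1
        logits.append(math.log(p / (1 - p)))
        # inverse of Var(logit p) ~ 1 / (n * p * (1 - p))
        weights.append(n * p * (1 - p))
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1 / (1 + math.exp(-pooled))

# hypothetical per-study sensitivities and cohort sizes
sens = [0.85, 0.79, 0.90, 0.82]
sizes = [120, 85, 150, 100]
print(round(pool_logit(sens, sizes), 3))  # → 0.844
```

Larger, more informative studies pull the pooled estimate toward their values; the result always lies within the range of the per-study inputs.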
Affiliation(s)
- Huan Huang
- Department of Radiology, Affiliated Hospital of Southwest Medical University, Sichuan, China
- Fei-Fei Wang
- Department of Radiology, Affiliated Hospital of Southwest Medical University, Sichuan, China
- Shigang Luo
- Department of Radiology, Affiliated Hospital of Southwest Medical University, Sichuan, China
- Guangxiang Chen
- Department of Radiology, Affiliated Hospital of Southwest Medical University, Sichuan, China
- Guangcai Tang
- Department of Radiology, Affiliated Hospital of Southwest Medical University, Sichuan, China

22
O'Shea RJ, Sharkey AR, Cook GJR, Goh V. Systematic review of research design and reporting of imaging studies applying convolutional neural networks for radiological cancer diagnosis. Eur Radiol 2021; 31:7969-7983. [PMID: 33860829 PMCID: PMC8452579 DOI: 10.1007/s00330-021-07881-2] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Received: 10/10/2020] [Revised: 02/24/2021] [Accepted: 03/12/2021] [Indexed: 11/05/2022]
Abstract
OBJECTIVES To perform a systematic review of design and reporting of imaging studies applying convolutional neural network models for radiological cancer diagnosis. METHODS A comprehensive search of PUBMED, EMBASE, MEDLINE and SCOPUS was performed for published studies applying convolutional neural network models to radiological cancer diagnosis from January 1, 2016, to August 1, 2020. Two independent reviewers measured compliance with the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). Compliance was defined as the proportion of applicable CLAIM items satisfied. RESULTS One hundred eighty-six of 655 screened studies were included. Many studies did not meet the criteria for current design and reporting guidelines. Twenty-seven percent of studies documented eligibility criteria for their data (50/186, 95% CI 21-34%), 31% reported demographics for their study population (58/186, 95% CI 25-39%) and 49% of studies assessed model performance on test data partitions (91/186, 95% CI 42-57%). Median CLAIM compliance was 0.40 (IQR 0.33-0.49). Compliance correlated positively with publication year (ρ = 0.15, p = .04) and journal H-index (ρ = 0.27, p < .001). Clinical journals demonstrated higher mean compliance than technical journals (0.44 vs. 0.37, p < .001). CONCLUSIONS Our findings highlight opportunities for improved design and reporting of convolutional neural network research for radiological cancer diagnosis. KEY POINTS • Imaging studies applying convolutional neural networks (CNNs) for cancer diagnosis frequently omit key clinical information including eligibility criteria and population demographics. • Fewer than half of imaging studies assessed model performance on explicitly unobserved test data partitions. • Design and reporting standards have improved in CNN research for radiological cancer diagnosis, though many opportunities remain for further progress.
Affiliation(s)
- Robert J O'Shea
- Cancer Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, 5th floor, Becket House, 1 Lambeth Palace Road, London, SE1 7EU, UK
- Amy Rose Sharkey
- Cancer Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, 5th floor, Becket House, 1 Lambeth Palace Road, London, SE1 7EU, UK
- Department of Radiology, Guy's & St Thomas' NHS Foundation Trust, London, UK
- Gary J R Cook
- Cancer Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, 5th floor, Becket House, 1 Lambeth Palace Road, London, SE1 7EU, UK
- King's College London & Guy's and St. Thomas' PET Centre, London, UK
- Vicky Goh
- Cancer Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, 5th floor, Becket House, 1 Lambeth Palace Road, London, SE1 7EU, UK
- Department of Radiology, Guy's & St Thomas' NHS Foundation Trust, London, UK

23
Artificial Intelligence in Thyroid Field-A Comprehensive Review. Cancers (Basel) 2021; 13:4740. [PMID: 34638226 PMCID: PMC8507551 DOI: 10.3390/cancers13194740] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Received: 08/31/2021] [Revised: 09/19/2021] [Accepted: 09/20/2021] [Indexed: 12/12/2022] Open
Abstract
Simple Summary The incidence of thyroid pathologies has been increasing worldwide. Historically, the detection of thyroid neoplasms has relied on medical image analysis, depending mainly on the experience of clinicians. The advent of artificial intelligence (AI) techniques has led to remarkable progress in image-recognition tasks. AI is a powerful tool that may facilitate understanding of thyroid pathologies, but its diagnostic accuracy remains uncertain. This article provides an overview of the basic aspects, limitations, and open issues of AI methods applied to thyroid images. Medical experts should be familiar with the workflow of AI techniques in order to avoid misleading outcomes. Abstract Artificial intelligence (AI) uses mathematical algorithms to perform tasks that require human cognitive abilities. AI-based methodologies, e.g., machine learning and deep learning, as well as the recently developed research field of radiomics, have considerable potential to transform medical diagnostics. AI-based techniques applied to medical imaging can detect biological abnormalities, diagnose neoplasms, and predict response to treatment. Nonetheless, the diagnostic accuracy of these methods is still a matter of debate. In this article, we first illustrate the key concepts and workflow characteristics of machine learning, deep learning, and radiomics. We outline considerations regarding data input requirements, differences among these methodologies, and their limitations. We then present a concise overview of the application of AI methods to the evaluation of thyroid images, followed by a critical discussion of the limits and open challenges that should be addressed before AI techniques are translated into broad clinical use. Clarifying the pitfalls of AI-based techniques is crucial to ensuring their optimal application for each patient.
24
Rouzrokh P, Wyles CC, Philbrick KA, Ramazanian T, Weston AD, Cai JC, Taunton MJ, Lewallen DG, Berry DJ, Erickson BJ. A Deep Learning Tool for Automated Radiographic Measurement of Acetabular Component Inclination and Version After Total Hip Arthroplasty. J Arthroplasty 2021; 36:2510-2517.e6. [PMID: 33678445 PMCID: PMC8197739 DOI: 10.1016/j.arth.2021.02.026] [Citation(s) in RCA: 54] [Impact Index Per Article: 18.0] [Received: 11/28/2020] [Revised: 02/04/2021] [Accepted: 02/08/2021] [Indexed: 02/02/2023] Open
Abstract
BACKGROUND Inappropriate acetabular component angular position is believed to increase the risk of hip dislocation after total hip arthroplasty. However, manual measurement of these angles is time-consuming and prone to interobserver variability. The purpose of this study was to develop a deep learning tool to automate the measurement of acetabular component angles on postoperative radiographs. METHODS Two cohorts of 600 anteroposterior (AP) pelvis and 600 cross-table lateral hip postoperative radiographs were used to develop deep learning models to segment the acetabular component and the ischial tuberosities. Cohorts were manually annotated, augmented, and randomly split to train-validation-test data sets on an 8:1:1 basis. Two U-Net convolutional neural network models (one for AP and one for cross-table lateral radiographs) were trained for 50 epochs. Image processing was then deployed to measure the acetabular component angles on the predicted masks for anatomical landmarks. Performance of the tool was tested on 80 AP and 80 cross-table lateral radiographs. RESULTS The convolutional neural network models achieved a mean Dice similarity coefficient of 0.878 and 0.903 on AP and cross-table lateral test data sets, respectively. The mean difference between human-level and machine-level measurements was 1.35° (σ = 1.07°) and 1.39° (σ = 1.27°) for the inclination and anteversion angles, respectively. Differences of 5° or more between human-level and machine-level measurements were observed in less than 2.5% of cases. CONCLUSION We developed a highly accurate deep learning tool to automate the measurement of angular position of acetabular components for use in both clinical and research settings. LEVEL OF EVIDENCE III.
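The Dice similarity coefficient used above to score the segmentation models measures the overlap between predicted and ground-truth masks. A minimal pure-Python sketch on flattened binary masks (illustrative only, not the study's evaluation code):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient 2*|A∩B| / (|A| + |B|) for flat binary masks."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(1 for a in mask_a if a) + sum(1 for b in mask_b if b)
    return 1.0 if total == 0 else 2.0 * inter / total  # empty masks agree perfectly

# toy predicted vs. ground-truth masks (flattened)
pred  = [0, 1, 1, 0, 1, 1, 0, 0]
truth = [0, 1, 1, 0, 1, 0, 0, 0]
print(round(dice(pred, truth), 3))  # → 0.857 (i.e. 6/7)
```

A Dice of 0.878-0.903, as reported above, therefore indicates that roughly nine-tenths of the predicted and reference mask pixels coincide.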
Affiliation(s)
- Pouria Rouzrokh
- Mayo Clinic, Department of Radiology, Radiology Informatics Laboratory, 200 First St. SW, Rochester, MN 55905, USA
- Cody C. Wyles
- Department of Health Sciences Research, 200 First St. SW, Rochester, MN 55905, USA
- Department of Orthopedic Surgery, 200 First St. SW, Rochester, MN 55905, USA
- Kenneth A. Philbrick
- Mayo Clinic, Department of Radiology, Radiology Informatics Laboratory, 200 First St. SW, Rochester, MN 55905, USA
- Taghi Ramazanian
- Department of Health Sciences Research, 200 First St. SW, Rochester, MN 55905, USA
- Department of Orthopedic Surgery, 200 First St. SW, Rochester, MN 55905, USA
- Alexander D. Weston
- Department of Health Sciences Research, 200 First St. SW, Rochester, MN 55905, USA
- Jason C. Cai
- Mayo Clinic, Department of Radiology, Radiology Informatics Laboratory, 200 First St. SW, Rochester, MN 55905, USA
- Michael J. Taunton
- Department of Health Sciences Research, 200 First St. SW, Rochester, MN 55905, USA
- Department of Orthopedic Surgery, 200 First St. SW, Rochester, MN 55905, USA
- David G. Lewallen
- Department of Orthopedic Surgery, 200 First St. SW, Rochester, MN 55905, USA
- Daniel J. Berry
- Department of Orthopedic Surgery, 200 First St. SW, Rochester, MN 55905, USA
- Bradley J. Erickson
- Mayo Clinic, Department of Radiology, Radiology Informatics Laboratory, 200 First St. SW, Rochester, MN 55905, USA
- Hilal Maradit Kremers
- Department of Health Sciences Research, 200 First St. SW, Rochester, MN 55905, USA
- Department of Orthopedic Surgery, 200 First St. SW, Rochester, MN 55905, USA

25
Thrall JH, Fessell D, Pandharipande PV. Rethinking the Approach to Artificial Intelligence for Medical Image Analysis: The Case for Precision Diagnosis. J Am Coll Radiol 2021; 18:174-179. [PMID: 33413896 DOI: 10.1016/j.jacr.2020.07.010] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Received: 07/07/2020] [Accepted: 07/07/2020] [Indexed: 02/08/2023]
Abstract
To date, widely generalizable artificial intelligence (AI) programs for medical image analysis have not been demonstrated, including for mammography. Rather than pursuing a strategy of collecting ever-larger databases in the attempt to build generalizable programs, we suggest three possible avenues for exploring a precision medicine or precision imaging approach. First, it is now technologically feasible to collect hundreds of thousands of multi-institutional cases along with other patient data, allowing stratification of patients into subpopulations that have similar characteristics in the manner discussed by the National Research Council in its white paper on precision medicine. A family of AI programs could be developed across different examination types that are matched to specific patient subpopulations. Such stratification can help address bias, including racial or ethnic bias, by allowing unbiased data aggregation for creation of subpopulations. Second, for common examinations, larger institutions may be able to collect enough of their own data to train AI programs that reflect disease prevalence and variety in their respective unique patient subpopulations. Third, high- and low-probability subpopulations can be identified by application of AI programs, thereby allowing their triage off the radiology work list. This would reduce radiologists' workloads, providing more time for interpretation of the remaining examinations. For high-volume procedures, investigators should come together to define reference standards, collect data, and compare the merits of pursuing generalizability versus a precision medicine subpopulation-based strategy.
Affiliation(s)
- James H Thrall
- Chair Emeritus, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts
- David Fessell
- Associate Professor, Department of Radiology, University of Michigan, Ann Arbor, Michigan
- Pari V Pandharipande
- Director, MGH Institute for Technology Assessment; Associate Chair, Integrated Imaging & Imaging Sciences, MGH Radiology; Executive Director, Clinical Enterprise Integration, Mass General Brigham (MGB) Radiology, Boston, Massachusetts

26
Rouzrokh P, Ramazanian T, Wyles CC, Philbrick KA, Cai JC, Taunton MJ, Kremers HM, Lewallen DG, Erickson BJ. Deep Learning Artificial Intelligence Model for Assessment of Hip Dislocation Risk Following Primary Total Hip Arthroplasty From Postoperative Radiographs. J Arthroplasty 2021; 36:2197-2203.e3. [PMID: 33663890 PMCID: PMC8154724 DOI: 10.1016/j.arth.2021.02.028] [Citation(s) in RCA: 31] [Impact Index Per Article: 10.3] [Received: 09/30/2020] [Revised: 02/04/2021] [Accepted: 02/08/2021] [Indexed: 02/02/2023] Open
Abstract
BACKGROUND Dislocation is a common complication following total hip arthroplasty (THA), and accounts for a high percentage of subsequent revisions. The purpose of this study is to illustrate the potential of a convolutional neural network model to assess the risk of hip dislocation based on postoperative anteroposterior pelvis radiographs. METHODS We retrospectively evaluated radiographs for a cohort of 13,970 primary THAs with 374 dislocations over 5 years of follow-up. Overall, 1490 radiographs from dislocated and 91,094 from non-dislocated THAs were included in the analysis. A convolutional neural network object detection model (YOLO-V3) was trained to crop the images by centering on the femoral head. A ResNet18 classifier was trained to predict subsequent hip dislocation from the cropped imaging. The ResNet18 classifier was initialized with ImageNet weights and trained using FastAI (V1.0) running on PyTorch. The training was run for 15 epochs using 10-fold cross validation, data oversampling, and augmentation. RESULTS The hip dislocation classifier achieved the following mean performance (standard deviation): accuracy = 49.5 (4.1%), sensitivity = 89.0 (2.2%), specificity = 48.8 (4.2%), positive predictive value = 3.3 (0.3%), negative predictive value = 99.5 (0.1%), and area under the receiver operating characteristic curve = 76.7 (3.6%). Saliency maps demonstrated that the model placed the greatest emphasis on the femoral head and acetabular component. CONCLUSION Existing prediction methods fail to identify patients at high risk of dislocation following THA. Our radiographic classifier model has high sensitivity and negative predictive value, and can be combined with clinical risk factor information for rapid assessment of risk for dislocation following THA. The model further suggests radiographic locations which may be important in understanding the etiology of prosthesis dislocation. 
Importantly, our model is an illustration of the potential of automated imaging artificial intelligence models in orthopedics. LEVEL OF EVIDENCE Level III.
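The performance figures above follow directly from confusion-matrix counts; with only 374 dislocations among 13,970 THAs, a high sensitivity necessarily coexists with a very low PPV. A sketch with hypothetical counts chosen to mimic that class imbalance (not the study's actual counts):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, NPV, and accuracy from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# hypothetical counts with ~1% positives, echoing the THA dislocation imbalance
m = diagnostic_metrics(tp=89, fp=5120, tn=4880, fn=11)
print(m["sensitivity"], round(m["ppv"], 3))  # high sensitivity, very low PPV
```

This is why a screening-oriented classifier like the one above is best read through its sensitivity and NPV: it is designed to rule patients out, not to confirm who will dislocate.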
Affiliation(s)
- Pouria Rouzrokh
- Department of Radiology, Radiology Informatics Laboratory, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Taghi Ramazanian
- Department of Health Sciences Research, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Department of Orthopedic Surgery, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Cody C. Wyles
- Department of Health Sciences Research, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Department of Orthopedic Surgery, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Kenneth A. Philbrick
- Department of Radiology, Radiology Informatics Laboratory, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Jason C. Cai
- Department of Radiology, Radiology Informatics Laboratory, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Michael J. Taunton
- Department of Health Sciences Research, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Department of Orthopedic Surgery, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Hilal Maradit Kremers
- Department of Health Sciences Research, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Department of Orthopedic Surgery, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- David G. Lewallen
- Department of Orthopedic Surgery, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Bradley J. Erickson
- Department of Radiology, Radiology Informatics Laboratory, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA

27
Artificial Intelligence and Machine Learning in Radiology: Current State and Considerations for Routine Clinical Implementation. Invest Radiol 2021; 55:619-627. [PMID: 32776769 DOI: 10.1097/rli.0000000000000673] [Citation(s) in RCA: 39] [Impact Index Per Article: 13.0] [Indexed: 02/06/2023]
Abstract
Although artificial intelligence (AI) has been a focus of medical research for decades, in the last decade, the field of radiology has seen tremendous innovation and also public focus due to development and application of machine-learning techniques to develop new algorithms. Interestingly, this innovation is driven simultaneously by academia, existing global medical device vendors, and-fueled by venture capital-recently founded startups. Radiologists find themselves once again in the position to lead this innovation to improve clinical workflows and ultimately patient outcome. However, although the end of today's radiologists' profession has been proclaimed multiple times, routine clinical application of such AI algorithms in 2020 remains rare. The goal of this review article is to describe in detail the relevance of appropriate imaging data as a bottleneck for innovation, provide insights into the many obstacles for technical implementation, and give additional perspectives to radiologists who often view AI solely from their clinical role. As regulatory approval processes for such medical devices are currently under public discussion and the relevance of imaging data is transforming, radiologists need to establish themselves as the leading gatekeepers for evolution of their field and be aware of the many stakeholders and sometimes conflicting interests.
28
Multi-Level Seg-Unet Model with Global and Patch-Based X-ray Images for Knee Bone Tumor Detection. Diagnostics (Basel) 2021; 11:691. [PMID: 33924426 PMCID: PMC8070216 DOI: 10.3390/diagnostics11040691] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Received: 03/18/2021] [Revised: 04/09/2021] [Accepted: 04/09/2021] [Indexed: 12/22/2022] Open
Abstract
Tumor classification and segmentation problems have attracted interest in recent years. In contrast to the abundance of studies examining brain, lung, and liver cancers, there has been a lack of studies using deep learning to classify and segment knee bone tumors. In this study, our objective is to assist physicians in radiographic interpretation by detecting and classifying knee bone regions as normal, benign-tumor, or malignant-tumor regions. We propose the Seg-Unet model with global and patch-based approaches to deal with the challenges posed by the small size, varied appearance, and uncommon nature of bone lesions. Our model contains classification, tumor segmentation, and high-risk region segmentation branches that jointly exploit the global context of the whole image and the local texture at every pixel. The patch-based component improves performance in malignant-tumor detection. We built a knee bone tumor dataset with the support of physicians at Chonnam National University Hospital (CNUH). Experiments on the dataset demonstrate that our method outperforms other methods, with an accuracy of 99.05% for classification and a mean IoU of 84.84% for segmentation. These results are a meaningful step toward assisting physicians in knee bone tumor detection.
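Mean IoU, the segmentation metric reported above, is the intersection-over-union score averaged across mask pairs (per class or per image). A minimal sketch on flat binary masks (illustrative only, not the CNUH evaluation code):

```python
def iou(mask_a, mask_b):
    """Intersection over Union |A∩B| / |A∪B| for flat binary masks."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return 1.0 if union == 0 else inter / union  # two empty masks agree perfectly

def mean_iou(pairs):
    """Mean IoU over a list of (prediction, ground-truth) mask pairs."""
    return sum(iou(p, t) for p, t in pairs) / len(pairs)

pairs = [([1, 1, 0, 0], [1, 0, 0, 0]),   # IoU 0.5
         ([0, 1, 1, 1], [0, 1, 1, 1])]   # IoU 1.0
print(mean_iou(pairs))  # → 0.75
```

IoU penalizes over- and under-segmentation more sharply than the Dice coefficient (IoU = Dice / (2 − Dice)), so a mean IoU of 84.84% corresponds to an even higher Dice overlap.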
29
Dipaola F, Shiffer D, Gatti M, Menè R, Solbiati M, Furlan R. Machine Learning and Syncope Management in the ED: The Future Is Coming. Medicina (Kaunas) 2021; 57:351. [PMID: 33917508 PMCID: PMC8067452 DOI: 10.3390/medicina57040351] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Received: 03/06/2021] [Revised: 03/30/2021] [Accepted: 04/02/2021] [Indexed: 11/16/2022]
Abstract
In recent years, machine learning (ML) has been applied promisingly in many fields of clinical medicine, both for diagnosis and for prognosis prediction. The aims of this narrative review were to summarize the basic concepts of ML applied to clinical medicine and to explore its main applications in the emergency department (ED) setting, with a particular focus on syncope management. Through an extensive literature search in PubMed and Embase, we found increasing evidence suggesting that the use of ML algorithms can improve ED triage, diagnosis, and risk stratification of many diseases. However, the lack of external validation and of reliable diagnostic standards currently limits their implementation in clinical practice. Syncope represents a challenging problem for the emergency physician, both because its diagnosis is not supported by specific tests and because the available prognostic tools have proved inefficient. ML algorithms have the potential to overcome these limitations and, in the future, could support the clinician in managing syncope patients more efficiently. However, at present only a few studies have addressed this issue, albeit with encouraging results.
Affiliation(s)
- Franca Dipaola
- Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, 20090 Milan, Italy
- Internal Medicine, Humanitas Clinical and Research Center—IRCCS, Rozzano, 20089 Milan, Italy
- Correspondence: Tel.: +39-0282247266
- Dana Shiffer
- Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, 20090 Milan, Italy
- Mauro Gatti
- IBM, Active Intelligence Center, 40121 Bologna, Italy
- Roberto Menè
- Department of Medicine and Surgery, University of Milano-Bicocca, 20126 Milan, Italy
- Monica Solbiati
- Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, 20122 Milan, Italy
- Dipartimento di Scienze Cliniche e di Comunità, Università degli Studi di Milano, 20122 Milan, Italy
- Raffaello Furlan
- Department of Biomedical Sciences, Humanitas University, Pieve Emanuele, 20090 Milan, Italy
- Internal Medicine, Humanitas Clinical and Research Center—IRCCS, Rozzano, 20089 Milan, Italy

30
Xie H, Erickson BJ, Sheedy SP, Yin J, Hubbard JM. The diagnosis and outcome of Krukenberg tumors. J Gastrointest Oncol 2021; 12:226-236. [PMID: 34012621 DOI: 10.21037/jgo-20-364] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Indexed: 12/16/2022] Open
Abstract
Background Accurate diagnostic tools are crucial to distinguish patients with Krukenberg tumors from those with ovarian cancers before deciding on initial management. To address this unmet need, we aimed to evaluate the diagnostic utility of clinical, biochemical, and radiographic factors in this patient population. Methods Patients with Krukenberg tumors or primary ovarian cancers were retrospectively identified from an institutional cancer registry. The Kaplan-Meier method and Cox proportional hazards models were used for survival analysis. Logistic regression evaluated clinical, biochemical, and radiographic factors, and a residual deep neural network model evaluated features in computed tomography images, as predictors to distinguish Krukenberg tumors from ovarian cancers. Model performance was summarized as accuracy and area under the receiver operating characteristic curve (AUC). Results This study included 214 patients with Krukenberg tumors with a median age of 52 years. Among the 104 (48.6%) patients with colorectal cancer, those who received palliative surgery had significantly longer median overall survival (48.1 versus 30.6 months, P=0.015) and progression-free survival (22.2 versus 6.7 months, P<0.001) than those with medical management only. The accuracy of radiology reports in diagnosing either Krukenberg tumors or primary ovarian cancers was 60.7%. In contrast, a multivariable logistic regression model with age [odds ratio (OR) 2.98, P<0.001], carbohydrate antigen 125 (OR 1.57, P=0.004), and carcinoembryonic antigen (OR 0.03, P=0.031) had 87.5% [95% confidence interval (CI): 75.0-100.0%] accuracy with an AUC of 0.96 (95% CI: 0.87-1.00). The neural network model had 62.8% (95% CI: 51.8-74.5%) accuracy with an AUC of 0.61 (95% CI: 0.53-0.72). Conclusions We developed a diagnostic model with clinical and biochemical features that distinguishes Krukenberg tumors from primary ovarian cancers with promising accuracy.
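A multivariable logistic model like the one above turns its coefficients into a predicted probability through the logistic (sigmoid) function, where each exponentiated coefficient is the reported odds ratio. The sketch below uses the published ORs as exp(coefficients); the intercept and feature coding are hypothetical, so the printed probability is purely illustrative:

```python
import math

def predict_prob(intercept, coefs, features):
    """Logistic-regression probability: sigmoid(intercept + sum of coef * x)."""
    z = intercept + sum(c * x for c, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-z))

# coefficients are log-ORs of the published values (OR 2.98, 1.57, 0.03);
# the intercept (-1.0) and unit feature values are hypothetical.
coefs = [math.log(2.98), math.log(1.57), math.log(0.03)]
print(round(predict_prob(-1.0, coefs, [1.0, 1.0, 1.0]), 3))
```

Note how the OR of 0.03 for carcinoembryonic antigen contributes a strongly negative log-odds term: higher CEA pushes the model away from an ovarian-cancer origin, consistent with its gastrointestinal-marker role.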
Affiliation(s)
- Hao Xie
- Division of Medical Oncology, Mayo Clinic, Rochester, MN, USA; Department of Gastrointestinal Oncology, Moffitt Cancer Center, Tampa, FL, USA
- Jun Yin
- Division of Biomedical Statistics and Informatics, Mayo Clinic, Rochester, MN, USA

31

32
Javaheri T, Homayounfar M, Amoozgar Z, Reiazi R, Homayounieh F, Abbas E, Laali A, Radmard AR, Gharib MH, Mousavi SAJ, Ghaemi O, Babaei R, Mobin HK, Hosseinzadeh M, Jahanban-Esfahlan R, Seidi K, Kalra MK, Zhang G, Chitkushev LT, Haibe-Kains B, Malekzadeh R, Rawassizadeh R. CovidCTNet: an open-source deep learning approach to diagnose covid-19 using small cohort of CT images. NPJ Digit Med 2021; 4:29. [PMID: 33603193 PMCID: PMC7893172 DOI: 10.1038/s41746-021-00399-3] [Citation(s) in RCA: 38] [Impact Index Per Article: 12.7] [Received: 05/12/2020] [Accepted: 12/10/2020] [Indexed: 12/21/2022] Open
Abstract
Coronavirus disease 2019 (Covid-19) is highly contagious with limited treatment options. Early and accurate diagnosis of Covid-19 is crucial in reducing the spread of the disease and its associated mortality. Currently, detection by reverse transcriptase-polymerase chain reaction (RT-PCR) is the gold standard of outpatient and inpatient detection of Covid-19. RT-PCR is a rapid method; however, its accuracy in detection is only ~70-75%. Another approved strategy is computed tomography (CT) imaging. CT imaging has a much higher sensitivity of ~80-98%, but a similar accuracy of ~70%. To enhance the accuracy of CT imaging detection, we developed an open-source framework, CovidCTNet, composed of a set of deep learning algorithms that accurately differentiates Covid-19 from community-acquired pneumonia (CAP) and other lung diseases. CovidCTNet increases the accuracy of CT imaging detection to 95% compared to radiologists (70%). CovidCTNet is designed to work with heterogeneous and small sample sizes independent of the CT imaging hardware. To facilitate the detection of Covid-19 globally and assist radiologists and physicians in the screening process, we are releasing all algorithms and model parameter details as open-source. Open-source sharing of CovidCTNet enables developers to rapidly improve and optimize services while preserving user privacy and data ownership.
Collapse
Affiliation(s)
- Tahereh Javaheri
  - Health Informatics Lab, Metropolitan College, Boston University, Boston, USA
- Morteza Homayounfar
  - Department of Biomedical Engineering, Amirkabir University of Technology, Tehran, Iran
- Zohreh Amoozgar
  - Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, USA
- Reza Reiazi
  - Princess Margaret Cancer Centre, University of Toronto, Toronto, Canada
  - Department of Medical Biophysics, University of Toronto, Toronto, Canada
  - Department of Medical Physics, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Fatemeh Homayounieh
  - Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, USA
- Engy Abbas
  - Joint Department of Medical Imaging, University of Toronto, Toronto, Canada
- Azadeh Laali
  - Department of Infectious Diseases, Firoozgar Hospital, Iran University of Medical Sciences, Tehran, Iran
- Amir Reza Radmard
  - Department of Radiology, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Mohammad Hadi Gharib
  - Department of Radiology and Golestan Rheumatology Research Center, Golestan University of Medical Sciences, Gorgan, Iran
- Omid Ghaemi
  - Department of Radiology, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Rosa Babaei
  - Department of Radiology, Iran University of Medical Sciences, Tehran, Iran
- Hadi Karimi Mobin
  - Department of Radiology, Iran University of Medical Sciences, Tehran, Iran
- Mehdi Hosseinzadeh
  - Institute of Research and Development, Duy Tan University, Da Nang, Vietnam
  - Health Management and Economics Research Center, Iran University of Medical Sciences, Tehran, Iran
- Rana Jahanban-Esfahlan
  - Department of Medical Biotechnology, School of Advanced Medical Sciences, Tabriz University of Medical Sciences, Tabriz, Iran
- Khaled Seidi
  - Department of Medical Biotechnology, School of Advanced Medical Sciences, Tabriz University of Medical Sciences, Tabriz, Iran
- Mannudeep K Kalra
  - Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, USA
- Guanglan Zhang
  - Health Informatics Lab, Metropolitan College, Boston University, Boston, USA
  - Department of Computer Science, Metropolitan College, Boston University, Boston, USA
- L T Chitkushev
  - Health Informatics Lab, Metropolitan College, Boston University, Boston, USA
  - Department of Computer Science, Metropolitan College, Boston University, Boston, USA
- Benjamin Haibe-Kains
  - Princess Margaret Cancer Centre, University of Toronto, Toronto, Canada
  - Department of Medical Biophysics, University of Toronto, Toronto, Canada
  - Department of Computer Science, University of Toronto, Toronto, ON, Canada
  - Ontario Institute for Cancer Research, Toronto, ON, Canada
  - Vector Institute for Artificial Intelligence, Toronto, ON, Canada
- Reza Malekzadeh
  - Digestive Disease Research Center, Tehran University of Medical Sciences, Tehran, Iran
- Reza Rawassizadeh
  - Health Informatics Lab, Metropolitan College, Boston University, Boston, USA
  - Department of Computer Science, Metropolitan College, Boston University, Boston, USA
33
Wang X, Li BB. Deep Learning in Head and Neck Tumor Multiomics Diagnosis and Analysis: Review of the Literature. Front Genet 2021; 12:624820. [PMID: 33643386 PMCID: PMC7902873 DOI: 10.3389/fgene.2021.624820] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2020] [Accepted: 01/07/2021] [Indexed: 12/24/2022] Open
Abstract
Head and neck tumors are the sixth most common neoplasms. Multiomics integrates multiple dimensions of clinical, pathologic, radiological, and biological data and has potential for tumor diagnosis and analysis. Deep learning (DL), a type of artificial intelligence (AI), is applied in medical image analysis. Among the DL techniques, the convolutional neural network (CNN) is used for image segmentation, detection, and classification, and in computer-aided diagnosis. Here, we reviewed multiomics image analysis of head and neck tumors using CNNs and other DL neural networks. We also evaluated their application in early tumor detection, classification, prognosis/metastasis prediction, and report sign-out. Finally, we highlighted the challenges and potential of these techniques.
Affiliation(s)
- Xi Wang
  - Department of Oral Pathology, Peking University School and Hospital of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Laboratory for Digital and Material Technology of Stomatology & Beijing Key Laboratory of Digital Stomatology, Beijing, China
  - Research Unit of Precision Pathologic Diagnosis in Tumors of the Oral and Maxillofacial Regions, Chinese Academy of Medical Sciences, Beijing, China
- Bin-bin Li
  - Department of Oral Pathology, Peking University School and Hospital of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Laboratory for Digital and Material Technology of Stomatology & Beijing Key Laboratory of Digital Stomatology, Beijing, China
  - Research Unit of Precision Pathologic Diagnosis in Tumors of the Oral and Maxillofacial Regions, Chinese Academy of Medical Sciences, Beijing, China
34
Bedrikovetski S, Dudi-Venkata NN, Maicas G, Kroon HM, Seow W, Carneiro G, Moore JW, Sammour T. Artificial intelligence for the diagnosis of lymph node metastases in patients with abdominopelvic malignancy: A systematic review and meta-analysis. Artif Intell Med 2021; 113:102022. [PMID: 33685585 DOI: 10.1016/j.artmed.2021.102022] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2020] [Revised: 12/28/2020] [Accepted: 01/10/2021] [Indexed: 12/17/2022]
Abstract
PURPOSE Accurate clinical diagnosis of lymph node metastases is of paramount importance in the treatment of patients with abdominopelvic malignancy. This review assesses the diagnostic performance of deep learning algorithms and radiomics models for lymph node metastases in abdominopelvic malignancies. METHODOLOGY Embase (PubMed, MEDLINE), Science Direct and IEEE Xplore databases were searched to identify eligible studies published between January 2009 and March 2019. Studies that reported on the accuracy of deep learning algorithms or radiomics models for abdominopelvic malignancy by CT or MRI were selected. Study characteristics and diagnostic measures were extracted. Estimates were pooled using random-effects meta-analysis. Risk of bias was evaluated using the QUADAS-2 tool. RESULTS In total, 498 potentially eligible studies were identified, of which 21 were included and 17 offered enough information for quantitative analysis. Studies were heterogeneous, and substantial risk of bias was found in 18 studies. Almost all studies employed radiomics models (n = 20). The single published deep learning model outperformed radiomics models with a higher AUROC (0.912 vs 0.895), and both radiomics and deep learning models outperformed the radiologist's interpretation in isolation (0.774). Pooled results for radiomics nomograms amongst tumour subtypes demonstrated the highest AUC of 0.895 (95% CI, 0.810-0.980) for urological malignancy and the lowest AUC of 0.798 (95% CI, 0.744-0.852) for colorectal malignancy. CONCLUSION Radiomics models improve the diagnostic accuracy of lymph node staging for abdominopelvic malignancies in comparison with radiologists' assessment. Deep learning models may further improve on this, but data remain limited.
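The random-effects pooling used for these estimates can be sketched as follows. This is a generic DerSimonian-Laird implementation, and the per-study effect sizes and variances in the example are illustrative, not values from the review:

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes (e.g. AUCs) under a random-effects model,
    estimating between-study variance tau^2 with the DerSimonian-Laird method."""
    w = [1.0 / v for v in variances]                              # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))   # Cochran's Q
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)                 # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]                  # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Illustrative per-study AUCs and their variances (not data from the review).
pooled_auc, ci = dersimonian_laird([0.85, 0.80, 0.90], [0.001, 0.002, 0.0015])
```

The pooled estimate lies between the per-study values, and the confidence interval widens as the between-study heterogeneity (tau^2) grows.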
Affiliation(s)
- Sergei Bedrikovetski
  - Discipline of Surgery, Faculty of Health and Medical Science, School of Medicine, University of Adelaide, Adelaide, South Australia, Australia
  - Colorectal Unit, Department of Surgery, Royal Adelaide Hospital, Adelaide, South Australia, Australia
- Nagendra N Dudi-Venkata
  - Discipline of Surgery, Faculty of Health and Medical Science, School of Medicine, University of Adelaide, Adelaide, South Australia, Australia
  - Colorectal Unit, Department of Surgery, Royal Adelaide Hospital, Adelaide, South Australia, Australia
- Gabriel Maicas
  - Australian Institute for Machine Learning, School of Computer Science, University of Adelaide, Adelaide, South Australia, Australia
- Hidde M Kroon
  - Colorectal Unit, Department of Surgery, Royal Adelaide Hospital, Adelaide, South Australia, Australia
- Warren Seow
  - Discipline of Surgery, Faculty of Health and Medical Science, School of Medicine, University of Adelaide, Adelaide, South Australia, Australia
- Gustavo Carneiro
  - Australian Institute for Machine Learning, School of Computer Science, University of Adelaide, Adelaide, South Australia, Australia
- James W Moore
  - Discipline of Surgery, Faculty of Health and Medical Science, School of Medicine, University of Adelaide, Adelaide, South Australia, Australia
  - Colorectal Unit, Department of Surgery, Royal Adelaide Hospital, Adelaide, South Australia, Australia
- Tarik Sammour
  - Discipline of Surgery, Faculty of Health and Medical Science, School of Medicine, University of Adelaide, Adelaide, South Australia, Australia
  - Colorectal Unit, Department of Surgery, Royal Adelaide Hospital, Adelaide, South Australia, Australia
35
Brunk J, Stierle M, Papke L, Revoredo K, Matzner M, Becker J. Cause vs. effect in context-sensitive prediction of business process instances. INFORM SYST 2021. [DOI: 10.1016/j.is.2020.101635] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
36
Ezzat D, Hassanien AE, Ella HA. An optimized deep learning architecture for the diagnosis of COVID-19 disease based on gravitational search optimization. Appl Soft Comput 2021; 98:106742. [PMID: 32982615 PMCID: PMC7505822 DOI: 10.1016/j.asoc.2020.106742] [Citation(s) in RCA: 67] [Impact Index Per Article: 22.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2020] [Revised: 09/05/2020] [Accepted: 09/17/2020] [Indexed: 12/17/2022]
Abstract
In this paper, a novel approach called GSA-DenseNet121-COVID-19, based on a hybrid convolutional neural network (CNN) architecture and an optimization algorithm, is proposed. The CNN architecture used is DenseNet121, and the optimization algorithm is the gravitational search algorithm (GSA). The GSA is used to determine the best values for the hyperparameters of the DenseNet121 architecture, helping it achieve high accuracy in diagnosing COVID-19 from chest X-ray images. The obtained results showed that the proposed approach could classify 98.38% of the test set correctly. To test the efficacy of the GSA in setting the optimal hyperparameter values for DenseNet121, it was compared to another approach, SSD-DenseNet121, which combines DenseNet121 with the social ski driver (SSD) optimization algorithm. The comparison demonstrated the efficacy of the proposed GSA-DenseNet121-COVID-19, which diagnosed COVID-19 better than SSD-DenseNet121; the latter correctly diagnosed only 94% of the test set. The proposed approach was also compared to a method based on the Inception-v3 CNN architecture with manual hyperparameter search; GSA-DenseNet121-COVID-19 outperformed this method as well, which classified only 95% of the test set samples correctly. Finally, GSA-DenseNet121-COVID-19 was compared with related work, and the results showed it to be very competitive.
Affiliation(s)
- Dalia Ezzat
  - Faculty of Computers and Artificial Intelligence, Cairo University, Egypt
- Hassan Aboul Ella
  - Microbiology Department, Faculty of Veterinary Medicine, Cairo University, Egypt
37
Essentials of a Robust Deep Learning System for Diabetic Retinopathy Screening: A Systematic Literature Review. J Ophthalmol 2020. [DOI: 10.1155/2020/8841927] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
This systematic review was performed to identify the specifics of an optimal diabetic retinopathy deep learning algorithm by identifying the best exemplar research studies in the field, whilst highlighting potential barriers to the clinical implementation of such an algorithm. Searching five electronic databases (Embase, MEDLINE, Scopus, PubMed, and the Cochrane Library) returned 747 unique records on 20 December 2019. Predetermined inclusion and exclusion criteria were applied to the search results, yielding the 15 highest-quality publications. A manual search through the reference lists of relevant review articles found in the database search yielded no additional records. The validation performance of the trained deep learning algorithms was used to derive a set of optimal properties for an ideal diabetic retinopathy classification algorithm. Potential limitations to the clinical implementation of such systems were identified as lack of generalizability, limited screening scope, and data sovereignty issues. It is concluded that deep learning algorithms in the context of diabetic retinopathy screening have reported impressive results. Despite this, the potential sources of limitations in such systems must be evaluated carefully. An ideal deep learning algorithm should be clinic-, clinician-, and camera-agnostic; it should comply with local regulations for data sovereignty, storage, privacy, and reporting, whilst requiring minimal human input.
38
Anter AM, Bhattacharyya S, Zhang Z. Multi-stage fuzzy swarm intelligence for automatic hepatic lesion segmentation from CT scans. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2020.106677] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
39
Nayantara PV, Kamath S, Manjunath KN, Rajagopal KV. Computer-aided diagnosis of liver lesions using CT images: A systematic review. Comput Biol Med 2020; 127:104035. [PMID: 33099219 DOI: 10.1016/j.compbiomed.2020.104035] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2020] [Revised: 10/02/2020] [Accepted: 10/02/2020] [Indexed: 01/17/2023]
Abstract
BACKGROUND Medical image processing has a strong footprint in radiodiagnosis for the detection of diseases from images. Several computer-aided systems have been researched in the recent past to assist the radiologist in diagnosing liver diseases and reducing interpretation time. The aim of this paper is to provide an overview of the state-of-the-art techniques in computer-assisted diagnosis systems for predicting benign and malignant lesions using computed tomography images. METHODS The research articles published between 1998 and 2020, obtained from various standard databases, were considered for preparing the review. The research papers include both conventional and deep learning-based systems for liver lesion diagnosis. The paper initially discusses the various hepatic lesions that are identifiable on computed tomography images, then the computer-aided diagnosis systems and their workflow. The conventional and deep learning-based systems are presented in stages, wherein the various methods used for preprocessing, liver and lesion segmentation, radiological feature extraction and classification are discussed. CONCLUSION The review suggests scope for future work, as efficient and effective segmentation methods that work well with diverse images have not yet been developed. Furthermore, unsupervised and semi-supervised deep learning models were not investigated for liver disease diagnosis in the reviewed papers. Other areas to be explored include image fusion and the inclusion of essential clinical features along with the radiological features for better classification accuracy.
Affiliation(s)
- P Vaidehi Nayantara
  - Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, 576104, India
- Surekha Kamath
  - Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, 576104, India
- K N Manjunath
  - Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, 576104, India
- K V Rajagopal
  - Department of Radiodiagnosis and Imaging, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, Karnataka, 576104, India
40
Towards Dynamic Uncertain Causality Graphs for the Intelligent Diagnosis and Treatment of Hepatitis B. Symmetry (Basel) 2020. [DOI: 10.3390/sym12101690] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022] Open
Abstract
Hepatitis B is a widespread epidemic in the world, but so far no single drug has been shown to kill or eliminate the Hepatitis B virus and cure people with chronic Hepatitis B virus infection. Based on comprehensive investigations into the relevant characteristics of Hepatitis B, a diagnostic modelling and reasoning methodology using the Dynamic Uncertain Causality Graph is proposed. The symptoms, physical signs, examination results, medical histories, etiology, pathogenesis and other factors were included in the diagnosis model. In order to reduce the difficulty of building the model, a modular modeling scheme is proposed, which provides multiple perspectives and arbitrary granularity for the expression of disease causality. A chain reasoning algorithm and a weighted logic operation mechanism are introduced to ensure the correctness and effectiveness of diagnostic reasoning under incomplete and uncertain information. In addition, a causal view of the potential interactions between diseases and symptoms visually shows the reasoning process in a graphical way. In the relevant model, the model of the diagnostic process and the model of the therapeutic process are symmetrical. The results show that, even with incomplete observations, the proposed methodology achieves encouraging diagnostic accuracy and effectiveness, providing a promising assistance tool for physicians in the diagnosis of Hepatitis B.
41
Cai JC, Akkus Z, Philbrick KA, Boonrod A, Hoodeshenas S, Weston AD, Rouzrokh P, Conte GM, Zeinoddini A, Vogelsang DC, Huang Q, Erickson BJ. Fully Automated Segmentation of Head CT Neuroanatomy Using Deep Learning. Radiol Artif Intell 2020; 2:e190183. [PMID: 33937839 DOI: 10.1148/ryai.2020190183] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2019] [Revised: 06/02/2020] [Accepted: 06/16/2020] [Indexed: 12/17/2022]
Abstract
Purpose To develop a deep learning model that segments intracranial structures on head CT scans. Materials and Methods In this retrospective study, a primary dataset containing 62 normal noncontrast head CT scans from 62 patients (mean age, 73 years; age range, 27-95 years) acquired between August and December 2018 was used for model development. Eleven intracranial structures were manually annotated on the axial oblique series. The dataset was split into 40 scans for training, 10 for validation, and 12 for testing. After initial training, eight model configurations were evaluated on the validation dataset and the highest performing model was evaluated on the test dataset. Interobserver variability was reported using multirater consensus labels obtained from the test dataset. To ensure that the model learned generalizable features, it was further evaluated on two secondary datasets containing 12 volumes with idiopathic normal pressure hydrocephalus (iNPH) and 30 normal volumes from a publicly available source. Statistical significance was determined using categorical linear regression with P < .05. Results Overall Dice coefficient on the primary test dataset was 0.84 ± 0.05 (standard deviation). Performance ranged from 0.96 ± 0.01 (brainstem and cerebrum) to 0.74 ± 0.06 (internal capsule). Dice coefficients were comparable to expert annotations and exceeded those of existing segmentation methods. The model remained robust on external CT scans and scans demonstrating ventricular enlargement. The use of within-network normalization and class weighting facilitated learning of underrepresented classes. Conclusion Automated segmentation of CT neuroanatomy is feasible with a high degree of accuracy. The model generalized to external CT scans as well as scans demonstrating iNPH. Supplemental material is available for this article. © RSNA, 2020.
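The Dice coefficient reported above measures overlap between a predicted and a reference segmentation mask. A minimal sketch over sets of foreground voxel indices (the function name and toy masks are illustrative, not from the paper):

```python
def dice_coefficient(pred, truth):
    """Dice similarity 2*|A intersect B| / (|A| + |B|) over binary masks,
    given as collections of foreground voxel indices."""
    pred, truth = set(pred), set(truth)
    denom = len(pred) + len(truth)
    # Two empty masks overlap perfectly by convention.
    return 2.0 * len(pred & truth) / denom if denom else 1.0

# Toy example: flattened voxel indices labelled foreground by each mask.
predicted = [1, 2, 3, 4, 5, 6]
reference = [4, 5, 6, 7, 8, 9]
print(dice_coefficient(predicted, reference))  # → 0.5
```

A score of 1.0 means perfect overlap and 0.0 means none, which is why per-structure values such as 0.96 (cerebrum) versus 0.74 (internal capsule) indicate how much harder small structures are to segment.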
Affiliation(s)
- Jason C Cai, Zeynettin Akkus, Kenneth A Philbrick, Arunnit Boonrod, Safa Hoodeshenas, Alexander D Weston, Pouria Rouzrokh, Gian Marco Conte, Atefeh Zeinoddini, David C Vogelsang, Qiao Huang, Bradley J Erickson
  - Departments of Radiology (J.C.C., K.A.P., S.H., P.R., G.M.C., D.C.V., Q.H., B.J.E.) and Cardiovascular Science (Z.A.), Mayo Clinic Rochester, 200 First St. SW, RO_PB_02_RIL, Rochester, MN 55905; Department of Radiology, Khon Kaen University, Khon Kaen, Thailand (A.B.); Department of Health Sciences Research, Mayo Clinic Florida, Jacksonville, Fla (A.D.W.); and Department of Internal Medicine, Ascension St. John Hospital, Detroit, Mich (A.Z.)
42
Sriporn K, Tsai CF, Tsai CE, Wang P. Analyzing Malaria Disease Using Effective Deep Learning Approach. Diagnostics (Basel) 2020; 10:diagnostics10100744. [PMID: 32987888 PMCID: PMC7601431 DOI: 10.3390/diagnostics10100744] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2020] [Revised: 09/23/2020] [Accepted: 09/23/2020] [Indexed: 11/16/2022] Open
Abstract
Medical tools used to bolster decision-making by medical specialists who offer malaria treatment include image processing equipment and computer-aided diagnostic systems. These methods can use malaria images to identify and detect the disease and to monitor the symptoms of malaria patients, although atypical cases may require more time for assessment. This research used 7,000 images to verify and analyze the Xception, Inception-V3, ResNet-50, NasNetMobile, VGG-16 and AlexNet models, which are prevalent convolutional neural network models for image classification, applying a rotation-based augmentation method to improve performance on the training and validation datasets. In the evaluation of these models for classifying malaria from thin blood smear images, Xception, using a state-of-the-art activation function (Mish) and optimizer (Nadam), proved the most effective, achieving a combined recall, accuracy, precision, and F1 score of 99.28%. Subsequently, 10% of images outside the training and testing datasets were evaluated with this model, reaching a 98.86% accuracy level and revealing notable aspects for improving computer-aided diagnosis toward an optimal malaria detection approach.
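The combined score above aggregates standard classification metrics. A minimal sketch of how precision, recall, accuracy, and F1 are derived from confusion-matrix counts (the counts below are illustrative, not the study's data):

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall (sensitivity), accuracy and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)          # of predicted positives, how many are real
    recall = tp / (tp + fn)             # of real positives, how many were found
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, accuracy, f1

# Toy counts for a parasitized-vs-uninfected blood smear classifier.
p, r, a, f1 = classification_metrics(tp=90, fp=10, fn=10, tn=90)
print(round(p, 2), round(r, 2), round(a, 2), round(f1, 2))  # → 0.9 0.9 0.9 0.9
```

Reporting all four together, as the abstract does, guards against a model that scores well on one metric by sacrificing another (e.g. high recall with many false positives).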
Affiliation(s)
- Krit Sriporn
  - Department of Tropical Agriculture and International Cooperation, National Pingtung University of Science and Technology, Neipu, Pingtung 91201, Taiwan
  - Department of Information Technology, Suratthani Rajabhat University, Suratthani 84100, Thailand
- Cheng-Fa Tsai
  - Department of Management Information Systems, National Pingtung University of Science and Technology, Pingtung 91201, Taiwan
  - Correspondence: ; Tel.: +886-08-770-3202 (ext. 7906)
- Chia-En Tsai
  - Department of Biochemistry and Molecular Biology, National Cheng Kung University, Tainan 70101, Taiwan
- Paohsi Wang
  - Department of Food and Beverage Management, Cheng Shiu University, Kaohsiung 83347, Taiwan
43
Hurt B, Yen A, Kligerman S, Hsiao A. Augmenting Interpretation of Chest Radiographs With Deep Learning Probability Maps. J Thorac Imaging 2020; 35:285-293. [PMID: 32205817 PMCID: PMC7483166 DOI: 10.1097/rti.0000000000000505] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
PURPOSE Pneumonia is a common clinical diagnosis for which chest radiographs are often an important part of the diagnostic workup. Deep learning has the potential to expedite and improve the clinical interpretation of chest radiographs. While earlier approaches have emphasized the feasibility of "binary classification" to accomplish this task, alternative strategies may be possible. We explore the feasibility of a "semantic segmentation" deep learning approach to highlight the potential foci of pneumonia on frontal chest radiographs. MATERIALS AND METHODS In this retrospective study, we trained a U-net convolutional neural network (CNN) to predict pixel-wise probability maps for pneumonia using a public data set provided by the Radiological Society of North America (RSNA) comprising 22,000 radiographs and radiologist-defined bounding boxes. We reserved 3684 radiographs as an independent validation data set and assessed overall performance for localization using Dice overlap and classification performance using the area under the receiver operating characteristic curve. RESULTS For classification/detection of pneumonia, the area under the receiver operating characteristic curve on frontal radiographs was 0.854, with a sensitivity of 82.8% and specificity of 72.6%. Using this strategy of neural network training, probability maps localized pneumonia to lung parenchyma for essentially all validation cases. For segmentation of pneumonia in positive cases, predicted probability maps had a mean Dice score (±SD) of 0.603±0.204, and 60.0% of these had a Dice score >0.5. CONCLUSIONS A "semantic segmentation" deep learning approach can provide a probabilistic map to assist in the diagnosis of pneumonia. In combination with the patient's history, clinical findings and other imaging, this strategy may help expedite and improve diagnosis.
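The area under the receiver operating characteristic curve reported here can be computed directly from the model's scores via the Mann-Whitney formulation, and a fixed threshold on those scores yields the sensitivity/specificity operating point. A small sketch with illustrative scores (not data from the study):

```python
def auroc(scores_pos, scores_neg):
    """AUC as the probability that a positive case outscores a negative one
    (Mann-Whitney U formulation); ties count one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

def sens_spec(scores_pos, scores_neg, threshold):
    """Sensitivity and specificity at a fixed operating threshold."""
    sens = sum(s >= threshold for s in scores_pos) / len(scores_pos)
    spec = sum(s < threshold for s in scores_neg) / len(scores_neg)
    return sens, spec

pos = [0.9, 0.8, 0.6, 0.55]   # model scores for pneumonia cases
neg = [0.7, 0.4, 0.3, 0.2]    # model scores for normal cases
print(auroc(pos, neg))        # → 0.875
```

The AUC summarizes ranking quality across all thresholds, while a deployed system must still pick one threshold, trading sensitivity against specificity as in the 82.8%/72.6% operating point above.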
44
Zhang Q, Bu X, Zhang M, Zhang Z, Hu J. Dynamic uncertain causality graph for computer-aided general clinical diagnoses with nasal obstruction as an illustration. Artif Intell Rev 2020. [DOI: 10.1007/s10462-020-09871-0] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
45
Monshi MMA, Poon J, Chung V. Deep learning in generating radiology reports: A survey. Artif Intell Med 2020; 106:101878. [PMID: 32425358 PMCID: PMC7227610 DOI: 10.1016/j.artmed.2020.101878] [Citation(s) in RCA: 43] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2019] [Revised: 04/30/2020] [Accepted: 05/10/2020] [Indexed: 12/27/2022]
Abstract
Substantial progress has been made towards implementing automated radiology reporting models based on deep learning (DL). This is due to the introduction of large medical text/image datasets. Generating coherent radiology paragraphs that do more than traditional medical image annotation, or single sentence-based description, has been the subject of recent academic attention. This presents a more practical and challenging application and moves towards bridging visual medical features and radiologist text. So far, the most common approach has been to utilize publicly available datasets and develop DL models that integrate convolutional neural networks (CNN) for image analysis alongside recurrent neural networks (RNN) for natural language processing (NLP) and natural language generation (NLG). This is an area of research that we anticipate will grow in the near future. We focus our investigation on the following critical challenges: understanding radiology text/image structures and datasets, applying DL algorithms (mainly CNN and RNN), generating radiology text, and improving existing DL-based models and evaluation metrics. Lastly, we include a critical discussion and future research recommendations. This survey will be useful for researchers interested in DL, particularly those interested in applying DL to radiology reporting.
Collapse
Affiliation(s)
- Maram Mahmoud A Monshi
- School of Computer Science, University of Sydney, Sydney, Australia; Department of Information Technology, Taif University, Taif, Saudi Arabia.
| | - Josiah Poon
- School of Computer Science, University of Sydney, Sydney, Australia
| | - Vera Chung
- School of Computer Science, University of Sydney, Sydney, Australia
| |
Collapse
|
46
|
Krishna AB, Tanveer A, Bhagirath PV, Gannepalli A. Role of artificial intelligence in diagnostic oral pathology-A modern approach. J Oral Maxillofac Pathol 2020; 24:152-156. [PMID: 32508465 PMCID: PMC7269295 DOI: 10.4103/jomfp.jomfp_215_19] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2019] [Revised: 09/04/2019] [Accepted: 10/03/2019] [Indexed: 12/13/2022] Open
Abstract
Over the decades, new equipment has emerged in the medical field, and we have witnessed the importance of medical imaging, such as computed tomography, magnetic resonance imaging, ultrasound, mammography and X-ray, and its contribution to the successful diagnosis and treatment of various diseases. Now, we are in the era of artificial intelligence (AI), where machines are modeled after the human brain's ability to take inputs and produce outputs from given data. AI has a wide range of uses and applications in the health services industry. Factors such as increased workload, complexity of work and potential fatigue of doctors may compromise diagnostic ability and outcomes. AI components in imaging machines would reduce this workload and drive greater efficiency. They also have access to a greater wealth of data than their human counterparts and can detect cancer more accurately than humans. This study presents an overview of AI, its recent advances in pathology and its future prospects.
Collapse
Affiliation(s)
- Ayinampudi Bhargavi Krishna
- Department of Oral Pathology, Panineeya Mahavidyalaya Institute of Dental Science, Hyderabad, Telangana, India
| | - Azra Tanveer
- Department of Oral Pathology, Panineeya Mahavidyalaya Institute of Dental Science, Hyderabad, Telangana, India
| | - Pancha Venkat Bhagirath
- Department of Oral Pathology, Panineeya Mahavidyalaya Institute of Dental Science, Hyderabad, Telangana, India
| | - Ashalata Gannepalli
- Department of Oral Pathology, Panineeya Mahavidyalaya Institute of Dental Science, Hyderabad, Telangana, India
| |
Collapse
|
47
|
Maggi P, Fartaria MJ, Jorge J, La Rosa F, Absinta M, Sati P, Meuli R, Du Pasquier R, Reich DS, Cuadra MB, Granziera C, Richiardi J, Kober T. CVSnet: A machine learning approach for automated central vein sign assessment in multiple sclerosis. NMR IN BIOMEDICINE 2020; 33:e4283. [PMID: 32125737 PMCID: PMC7754184 DOI: 10.1002/nbm.4283] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/09/2019] [Revised: 01/22/2020] [Accepted: 02/05/2020] [Indexed: 05/28/2023]
Abstract
The central vein sign (CVS) is an efficient imaging biomarker for multiple sclerosis (MS) diagnosis, but its application in clinical routine is limited by inter-rater variability and the expenditure of time associated with manual assessment. We describe a deep learning-based prototype for automated assessment of the CVS in white matter MS lesions using data from three different imaging centers. We retrospectively analyzed data from 3 T magnetic resonance images acquired on four scanners from two different vendors, including adults with MS (n = 42), MS mimics (n = 33, encompassing 12 distinct neurological diseases mimicking MS) and uncertain diagnosis (n = 5). Brain white matter lesions were manually segmented on FLAIR* images. Perivenular assessment was performed according to consensus guidelines and used as ground truth, yielding 539 CVS-positive (CVS+) and 448 CVS-negative (CVS-) lesions. A 3D convolutional neural network ("CVSnet") was designed and trained on 47 datasets, keeping 33 for testing. FLAIR* lesion patches of CVS+/CVS- lesions were used for training and validation (n = 375/298) and for testing (n = 164/150). Performance was evaluated lesion-wise and subject-wise and compared with a state-of-the-art vesselness filtering approach through McNemar's test. The proposed CVSnet approached human performance, with lesion-wise median balanced accuracy of 81%, and subject-wise balanced accuracy of 89% on the validation set, and 91% on the test set. The process of CVS assessment, in previously manually segmented lesions, was ~600-fold faster using the proposed CVSnet compared with human visual assessment (test set: 4 seconds vs. 40 minutes). On the validation and test sets, the lesion-wise performance outperformed the vesselness filter method (P < 0.001). The proposed deep learning prototype shows promising performance in differentiating MS from its mimics.
Our approach was evaluated using data from different hospitals, enabling larger multicenter trials to evaluate the benefit of introducing the CVS marker into MS diagnostic criteria.
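The lesion-wise comparison between CVSnet and the vesselness filter uses McNemar's test, which considers only the discordant pairs (lesions one method classifies correctly and the other does not). A minimal sketch of the continuity-corrected statistic; the counts passed in are illustrative, not the study's:

```python
import math

def mcnemar(b, c):
    """McNemar's test on paired binary classifications.
    b: cases method A got right and method B got wrong; c: the reverse.
    Returns the continuity-corrected chi-square statistic and a
    two-sided p-value from the chi-square distribution with 1 df."""
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    # Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Illustrative discordant counts: a large imbalance gives a small p-value.
stat, p = mcnemar(30, 5)
```

When the two methods disagree roughly equally often (b ≈ c), the statistic is near zero and the p-value near one; a strong imbalance, as reported here (P < 0.001), indicates one method is reliably better on the paired lesions.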
Collapse
Affiliation(s)
- Pietro Maggi
- Department of Neurology, Lausanne University Hospital, Lausanne, Switzerland
- Department of Neurology, Saint-Luc University Hospital, Brussels, Belgium
| | - Mário João Fartaria
- Advanced Clinical Imaging Technology, Siemens Healthcare AG, Lausanne, Switzerland
- Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne, Switzerland
| | - João Jorge
- Laboratory for Functional and Metabolic Imaging, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Francesco La Rosa
- Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne, Switzerland
- Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
| | - Martina Absinta
- Translational Neuroradiology Section, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD
| | - Pascal Sati
- Translational Neuroradiology Section, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD
| | - Reto Meuli
- Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
| | - Renaud Du Pasquier
- Department of Neurology, Lausanne University Hospital, Lausanne, Switzerland
| | - Daniel S. Reich
- Translational Neuroradiology Section, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD
| | - Meritxell Bach Cuadra
- Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne, Switzerland
- Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Medical Image Analysis Laboratory (MIAL), Centre d’Imagerie BioMédicale (CIBM), Lausanne, Switzerland
| | - Cristina Granziera
- Neurologic Clinic and Policlinic, Departments of Medicine, Clinical Research and Biomedical Engineering, University Hospital Basel and University of Basel, Basel, Switzerland
- Translational Imaging in Neurology (ThINK) Basel, Department of Medicine and Biomedical Engineering, University Hospital Basel and University of Basel, Basel, Switzerland
| | - Jonas Richiardi
- Advanced Clinical Imaging Technology, Siemens Healthcare AG, Lausanne, Switzerland
- Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
| | - Tobias Kober
- Advanced Clinical Imaging Technology, Siemens Healthcare AG, Lausanne, Switzerland
- Signal Processing Laboratory (LTS5), École Polytechnique Fédérale de Lausanne, Switzerland
| |
Collapse
|
48
|
Analyzing Lung Disease Using Highly Effective Deep Learning Techniques. Healthcare (Basel) 2020; 8:healthcare8020107. [PMID: 32340344 PMCID: PMC7348888 DOI: 10.3390/healthcare8020107] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2020] [Revised: 04/14/2020] [Accepted: 04/20/2020] [Indexed: 01/14/2023] Open
Abstract
Image processing technologies and computer-aided diagnosis are medical technologies used to support the decision-making of radiologists and medical professionals who provide treatment for lung disease. These methods use chest X-ray images to diagnose and detect lung lesions, though some abnormal cases take time to manifest. This experiment used 5810 images for training and validation with the MobileNet, DenseNet-121 and ResNet-50 models, which are popular networks used for image classification, and utilized a rotational technique to augment the lung disease dataset to support learning with these convolutional neural network models. The evaluation of the convolutional neural network models showed that DenseNet-121, with the state-of-the-art Mish activation function and the Nadam optimizer, performed best: accuracy, recall, precision and F1 measures all reached 98.88%. We then tested this model on images, amounting to 10% of the total, that were held out from training and validation. The resulting accuracy of 98.97% provides significant support for the development of a computer-aided diagnosis system with the best performance for the detection of lung lesions.
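The Mish activation cited above is a simple, smooth, non-monotonic function, x · tanh(softplus(x)), often used as a drop-in replacement for ReLU. A minimal scalar sketch (frameworks such as TensorFlow and PyTorch ship their own vectorized versions):

```python
import math

def mish(x):
    """Mish activation: x * tanh(softplus(x)),
    where softplus(x) = ln(1 + e^x), computed stably via log1p."""
    return x * math.tanh(math.log1p(math.exp(x)))

# Behaves like identity for large positive x, and is slightly
# negative (rather than zero, as in ReLU) for negative inputs.
value_at_zero = mish(0.0)
```

Unlike ReLU, Mish is differentiable everywhere and lets small negative values through, which is one reason it is reported to improve accuracy in image classifiers such as the DenseNet-121 model here.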
Collapse
|
49
|
Xie L, Yang S, Squirrell D, Vaghefi E. Towards implementation of AI in New Zealand national diabetic screening program: Cloud-based, robust, and bespoke. PLoS One 2020; 15:e0225015. [PMID: 32275656 PMCID: PMC7147747 DOI: 10.1371/journal.pone.0225015] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2019] [Accepted: 03/18/2020] [Indexed: 11/18/2022] Open
Abstract
Convolutional Neural Networks (CNNs) have become a prominent method of AI implementation in medical classification tasks. Grading Diabetic Retinopathy (DR) has been at the forefront of the development of AI for ophthalmology. However, major obstacles remain in generalizing these CNNs to real-world DR screening programs. We believe these difficulties are due to the use of (1) small training datasets (<5,000 images), (2) private and 'curated' repositories, and (3) locally implemented CNN methods, while (4) relying on the measured Area Under the Curve (AUC) as the sole measure of CNN performance. To address these issues, the public EyePACS Kaggle Diabetic Retinopathy dataset was uploaded onto the Microsoft Azure™ cloud platform. Two CNNs were trained: (1) a "Quality Assurance" model and (2) a "Classifier". The Diabetic Retinopathy classifier CNN (DRCNN) was then tested on both the 'un-curated' test set and the 'curated' test set created by the "Quality Assurance" CNN model. Finally, the sensitivity of the DRCNN was boosted using two post-training techniques. Our DRCNN proved robust, as its performance was similar on the 'curated' and 'un-curated' test sets. The implementation of 'cascading thresholds' and 'max margin' techniques led to significant improvements in the DRCNN's sensitivity, while also enhancing the specificity of other grades.
Collapse
Affiliation(s)
- Li Xie
- School of Optometry and Vision Sciences, The University of Auckland, Auckland, New Zealand
| | - Song Yang
- School of Optometry and Vision Sciences, The University of Auckland, Auckland, New Zealand
- School of Computer Sciences, The University of Auckland, Auckland, New Zealand
| | - David Squirrell
- Department of Ophthalmology, The University of Auckland, Auckland, New Zealand
- Auckland District Health Board, Auckland, New Zealand
| | - Ehsan Vaghefi
- School of Optometry and Vision Sciences, The University of Auckland, Auckland, New Zealand
- Auckland Bioengineering Institute, The University of Auckland, Auckland, New Zealand
| |
Collapse
|
50
|
Mostapha M, Styner M. Role of deep learning in infant brain MRI analysis. Magn Reson Imaging 2019; 64:171-189. [PMID: 31229667 PMCID: PMC6874895 DOI: 10.1016/j.mri.2019.06.009] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2019] [Revised: 06/06/2019] [Accepted: 06/08/2019] [Indexed: 12/17/2022]
Abstract
Deep learning algorithms, and in particular convolutional networks, have shown tremendous success in medical image analysis applications, though relatively few methods have been applied to infant MRI data due to numerous inherent challenges, such as inhomogeneous tissue appearance across the image, considerable image intensity variability across the first year of life, and a low signal-to-noise setting. This paper presents methods addressing these challenges in two selected applications, specifically infant brain tissue segmentation at the isointense stage and presymptomatic disease prediction in neurodevelopmental disorders. Corresponding methods are reviewed and compared, and open issues are identified, namely restricted dataset sizes, class imbalance problems, and the lack of interpretability of the resulting deep learning solutions. We discuss how existing solutions can be adapted to approach these issues, as well as how generative models seem to be a particularly strong contender to address them.
Collapse
Affiliation(s)
- Mahmoud Mostapha
- Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States of America.
| | - Martin Styner
- Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States of America; Neuro Image Research and Analysis Lab, Department of Psychiatry, University of North Carolina at Chapel Hill, NC 27599, United States of America.
| |
Collapse
|