1. Abbott NL, Chauvie S, Marcu L, DeJean C, Melidis C, Wientjes R, Gasnier A, Lisbona A, Luzzara M, Mazzoni LN, O'Doherty J, Koutsouveli E, Appelt A, Hansen CR. The role of medical physics experts in clinical trials: A guideline from the European Federation of Organisations for Medical Physics. Phys Med 2024; 126:104821. PMID: 39361978. DOI: 10.1016/j.ejmp.2024.104821.
Abstract
The EFOMP working group on the Role of Medical Physics Experts (MPEs) in Clinical Trials was established in 2010, with experts from across Europe and from different areas of medical physics. Its main aims were: (1) to develop a consensus guidance document for the work MPEs do in clinical trials across Europe; (2) to complement the work of American colleagues in AAPM TG 113 and guidance from National Member Organisations; and (3) to cover external beam radiotherapy, brachytherapy, nuclear medicine, molecular radiotherapy, and imaging. This document outlines the main output from this working group, giving guidance to MPEs, and indeed to all Medical Physicists (MPs) and MP trainees wishing to work in clinical trials. It also gives guidance to the wider multidisciplinary team, advising where MPEs must legally be involved and highlighting areas where MPEs' skills and expertise can add real value to clinical trials.
Affiliation(s)
- Natalie Louise Abbott
  - King George V Building, St. Bartholomews Hospital, West Smithfield, London EC1A 7BE, UK; National RTTQA Group, Cardiff & London, UK
- Stephane Chauvie
  - Medical Physics Division, Santa Croce e Carle Hospital, Cuneo, Italy
- Loredana Marcu
  - Faculty of Informatics and Science, University of Oradea, Oradea 410087, Romania; UniSA Allied Health & Human Performance, University of South Australia, Adelaide SA 5001, Australia
- Christos Melidis
  - CAP Santé, Radiation Therapy, Clinique Maymard, Bastia, France; milliVolt.eu, a Health Physics Company, Bastia, France
- Anne Gasnier
  - Department of Radiation Oncology, Henri Becquerel Cancer Centre, Rouen, France
- Albert Lisbona
  - MP emeritus, Institut de Cancérologie de l'Ouest, Saint Herblain, France
- Jim O'Doherty
  - Siemens Medical Solutions, Malvern, PA, United States; Radiography & Diagnostic Imaging, University College Dublin, Dublin, Ireland; Department of Radiology & Radiological Sciences, Medical University of South Carolina, Charleston, SC, United States
- Efi Koutsouveli
  - Department of Medical Physics, Hygeia Hospital, Athens, Greece
- Ane Appelt
  - Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK; Department of Medical Physics, Leeds Cancer Centre, Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Christian Rønn Hansen
  - Institute of Clinical Research, University of Southern Denmark, Denmark; Danish Center of Particle Therapy, Aarhus University Hospital, Denmark; Department of Oncology, Odense University Hospital, Denmark

2. Lavalle S, Lechien JR, Chiesa-Estomba C, Parisi FM, Maniaci A. Evaluating AI in patient education: The need for a validated performance assessment tool. Am J Otolaryngol 2024; 45:104442. PMID: 39096820. DOI: 10.1016/j.amjoto.2024.104442.
Affiliation(s)
- Salvatore Lavalle
  - Faculty of Medicine and Surgery, University of Enna Kore, 94100 Enna, Italy
- Jerome R Lechien
  - Head & Neck Study Group, Young-Otolaryngologists of the International Federations of Oto-Rhino-Laryngological Societies (YO-IFOS), 13005 Marseille, France; Department of Human Anatomy and Experimental Oncology, UMONS Research Institute for Health Sciences and Technology, University of Mons (UMons), Mons, Belgium
- Carlos Chiesa-Estomba
  - Head & Neck Study Group, Young-Otolaryngologists of the International Federations of Oto-Rhino-Laryngological Societies (YO-IFOS), 13005 Marseille, France; Department of Otorhinolaryngology - Head and Neck Surgery, Donostia University Hospital, San Sebastian, Spain
- Federica Maria Parisi
  - Department of Medical and Surgical Sciences and Advanced Technologies "GF Ingrassia", ENT Section, University of Catania, Via S. Sofia, 78, 95125 Catania, Italy
- Antonino Maniaci
  - Faculty of Medicine and Surgery, University of Enna Kore, 94100 Enna, Italy; Head & Neck Study Group, Young-Otolaryngologists of the International Federations of Oto-Rhino-Laryngological Societies (YO-IFOS), 13005 Marseille, France

3. Urso L, Cittanti C, Manco L, Ortolan N, Borgia F, Malorgio A, Scribano G, Mastella E, Guidoboni M, Stefanelli A, Turra A, Bartolomei M. ML Models Built Using Clinical Parameters and Radiomic Features Extracted from 18F-Choline PET/CT for the Prediction of Biochemical Recurrence after Metastasis-Directed Therapy in Patients with Oligometastatic Prostate Cancer. Diagnostics (Basel) 2024; 14:1264. PMID: 38928679. PMCID: PMC11202947. DOI: 10.3390/diagnostics14121264.
Abstract
Oligometastatic prostate cancer (PCa) patients identified on [18F]F-fluorocholine (18F-choline) PET/CT may be treated with metastasis-directed therapy (MDT). The aim of this study was to combine radiomic features extracted from 18F-choline PET/CT with clinical data to build machine learning (ML) models able to predict MDT efficacy. METHODS Patients with oligorecurrent disease (≤5 lesions) on 18F-choline PET/CT treated with MDT were retrospectively collected. A per-patient and per-lesion analysis was performed, using 2-year biochemical recurrence (BCR) after MDT as the standard of reference. Clinical parameters and radiomic features (RFts) extracted from 18F-choline PET/CT were used to train five ML models for both CT and PET images. Performance metrics were calculated (i.e., area under the curve (AUC) and classification accuracy (CA)). RESULTS A total of 46 metastases were selected and segmented in 29 patients. BCR after MDT occurred in 20 (69%) patients after 2 years of follow-up. In total, 73 and 33 robust RFts were selected from the CT and PET datasets, respectively. PET ML models showed better performance than CT models for discriminating BCR after MDT, with stochastic gradient descent (SGD) being the best model (AUC = 0.95; CA = 0.90). CONCLUSION ML models built using clinical parameters and CT and PET RFts extracted from 18F-choline PET/CT can accurately predict BCR after MDT in oligorecurrent PCa patients. If validated externally, such models could improve the selection of oligorecurrent PCa patients for MDT.
Affiliation(s)
- Luca Urso
  - Department of Translational Medicine, University of Ferrara, 44121 Ferrara, Italy
  - Nuclear Medicine Unit, Onco-Hematology Department, University Hospital of Ferrara, 44124 Ferrara, Italy
- Corrado Cittanti
  - Department of Translational Medicine, University of Ferrara, 44121 Ferrara, Italy
  - Nuclear Medicine Unit, Onco-Hematology Department, University Hospital of Ferrara, 44124 Ferrara, Italy
- Luigi Manco
  - Medical Physics Unit, University Hospital of Ferrara, 44124 Ferrara, Italy
- Naima Ortolan
  - Department of Translational Medicine, University of Ferrara, 44121 Ferrara, Italy
  - Nuclear Medicine Unit, Onco-Hematology Department, University Hospital of Ferrara, 44124 Ferrara, Italy
- Francesca Borgia
  - Department of Translational Medicine, University of Ferrara, 44121 Ferrara, Italy
  - Nuclear Medicine Unit, Onco-Hematology Department, University Hospital of Ferrara, 44124 Ferrara, Italy
- Antonio Malorgio
  - U.O.C. Radiotherapy, University Hospital of Ferrara, 44124 Ferrara, Italy
- Giovanni Scribano
  - Department of Physics and Earth Science, University of Ferrara, 44121 Ferrara, Italy
- Edoardo Mastella
  - Medical Physics Unit, University Hospital of Ferrara, 44124 Ferrara, Italy
- Massimo Guidoboni
  - Department of Translational Medicine, University of Ferrara, 44121 Ferrara, Italy
  - U.O.C. Clinical Oncology, University Hospital of Ferrara, 44124 Ferrara, Italy
- Antonio Stefanelli
  - U.O.C. Radiotherapy, University Hospital of Ferrara, 44124 Ferrara, Italy
- Alessandro Turra
  - Medical Physics Unit, University Hospital of Ferrara, 44124 Ferrara, Italy
- Mirco Bartolomei
  - Nuclear Medicine Unit, Onco-Hematology Department, University Hospital of Ferrara, 44124 Ferrara, Italy

4. Maniaci A, Fakhry N, Chiesa-Estomba C, Lechien JR, Lavalle S. Synergizing ChatGPT and general AI for enhanced medical diagnostic processes in head and neck imaging. Eur Arch Otorhinolaryngol 2024; 281:3297-3298. PMID: 38353768. DOI: 10.1007/s00405-024-08511-5.
Affiliation(s)
- Antonino Maniaci
  - Faculty of Medicine and Surgery, University of Enna Kore, 94100, Enna, Italy
  - Head & Neck Study Group, Young-Otolaryngologists of the International Federations of Oto-Rhino-Laryngological Societies (YO-IFOS), 13005, Marseille, France
- Nicolas Fakhry
  - Department of Otolaryngology, Head & Neck Surgery, Aix-Marseille University, AP-HM, La Conception Hospital, 147, Boulevard Baille, 13005, Marseille, France
  - Head & Neck Study Group, Young-Otolaryngologists of the International Federations of Oto-Rhino-Laryngological Societies (YO-IFOS), 13005, Marseille, France
- Carlos Chiesa-Estomba
  - Head & Neck Study Group, Young-Otolaryngologists of the International Federations of Oto-Rhino-Laryngological Societies (YO-IFOS), 13005, Marseille, France
  - Department of Otorhinolaryngology, Head and Neck Surgery, Donostia University Hospital, San Sebastian, Spain
- Jerome R Lechien
  - Head & Neck Study Group, Young-Otolaryngologists of the International Federations of Oto-Rhino-Laryngological Societies (YO-IFOS), 13005, Marseille, France
  - Department of Human Anatomy and Experimental Oncology, UMONS Research Institute for Health Sciences and Technology, University of Mons (UMons), Mons, Belgium
- Salvatore Lavalle
  - Faculty of Medicine and Surgery, University of Enna Kore, 94100, Enna, Italy

5. Jeong Y, Jeong C, Sung KY, Moon G, Lim J. Development of AI-Based Diagnostic Algorithm for Nasal Bone Fracture Using Deep Learning. J Craniofac Surg 2024; 35:29-32. PMID: 38294297. DOI: 10.1097/scs.0000000000009856.
Abstract
Facial bone fractures are relatively common, and the nasal bone is the most frequently fractured facial bone. Computed tomography is the gold standard for diagnosing such fractures. Most nasal bone fractures can be treated with closed reduction. However, delayed diagnosis may cause nasal deformity or other complications that are difficult and expensive to treat. In this study, the authors developed an algorithm for diagnosing nasal fractures by training a deep learning artificial intelligence model on computed tomography images of facial bones. The algorithm achieved 100% sensitivity and 77% specificity, in significant concordance with physicians' readings. Herein, the authors report the results of a pilot study on the first stage of developing an algorithm for analyzing fractures of the facial bone.
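For reference, the sensitivity and specificity figures reported above are computed from the confusion matrix as TP/(TP+FN) and TN/(TN+FP), respectively. A minimal generic implementation on toy labels (not the study's data):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy reading: every true fracture is flagged (sensitivity 1.0), with one
# false alarm among six non-fracture cases (specificity 5/6)
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)
```

A 100% sensitivity with 77% specificity, as in the abstract, means the model misses no fractures but raises a false alarm on roughly one in four fracture-free scans.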
Affiliation(s)
- Yeonjin Jeong
  - Department of Plastic and Reconstructive Surgery, National Medical Center, Seoul, Korea
- Chanho Jeong
  - Department of Plastic and Reconstructive Surgery, Kangwon National University Hospital, Kangwon-do, Korea
- Kun-Yong Sung
  - Department of Plastic and Reconstructive Surgery, Kangwon National University Hospital, Kangwon-do, Korea
- Gwiseong Moon
  - Department of Computer Science and Engineering, Kangwon National University, Kangwon-do, Korea
- Jinsoo Lim
  - Department of Plastic and Reconstructive Surgery, College of Medicine, The Catholic University of Korea, St. Vincent's Hospital, Gyeonggi-do, Korea

6. Manco L, Albano D, Urso L, Arnaboldi M, Castellani M, Florimonte L, Guidi G, Turra A, Castello A, Panareo S. Positron Emission Tomography-Derived Radiomics and Artificial Intelligence in Multiple Myeloma: State-of-the-Art. J Clin Med 2023; 12:7669. PMID: 38137738. PMCID: PMC10743775. DOI: 10.3390/jcm12247669.
Abstract
Multiple myeloma (MM) is a heterogeneous neoplasm and the second most prevalent hematologic malignancy. The identification of noninvasive, valuable biomarkers is of utmost importance for selecting the best treatment for each patient, especially in heterogeneous diseases like MM. Although molecular imaging with positron emission tomography (PET) has achieved a primary role in the characterization of MM, it is not free from shortcomings. In recent years, radiomics and artificial intelligence (AI), which includes machine learning (ML) and deep learning (DL) algorithms, have played an important role in mining additional information from medical images beyond the resolving power of the human eye. Our review provides a summary of the current status of radiomics and AI in different clinical contexts of MM. A systematic search of PubMed, Web of Science, and Scopus was conducted, including all articles published in English that explored radiomics and AI analyses of PET/CT images in MM. The initial results highlight the potential of such features to improve the clinical stratification of MM patients and to increase their clinical benefit. However, more studies are warranted before these approaches can be implemented in clinical routine.
Affiliation(s)
- Luigi Manco
  - Medical Physics Unit, Azienda USL of Ferrara, 45100 Ferrara, Italy
- Domenico Albano
  - Nuclear Medicine Department, University of Brescia and ASST Spedali Civili di Brescia, 25123 Brescia, Italy
- Luca Urso
  - Department of Translational Medicine, University of Ferrara, 44121 Ferrara, Italy
- Mattia Arnaboldi
  - Nuclear Medicine Unit, Fondazione IRCCS Ca’ Granda, Ospedale Maggiore Policlinico, 20122 Milan, Italy
- Massimo Castellani
  - Nuclear Medicine Unit, Fondazione IRCCS Ca’ Granda, Ospedale Maggiore Policlinico, 20122 Milan, Italy
- Luigia Florimonte
  - Nuclear Medicine Unit, Fondazione IRCCS Ca’ Granda, Ospedale Maggiore Policlinico, 20122 Milan, Italy
- Gabriele Guidi
  - Medical Physics Unit, University Hospital of Modena, 41125 Modena, Italy
- Alessandro Turra
  - Medical Physics Unit, Azienda USL of Ferrara, 45100 Ferrara, Italy
- Angelo Castello
  - Nuclear Medicine Unit, Fondazione IRCCS Ca’ Granda, Ospedale Maggiore Policlinico, 20122 Milan, Italy
- Stefano Panareo
  - Nuclear Medicine Unit, Department of Oncology and Hematology, University Hospital of Modena, Via del Pozzo 71, 41124 Modena, Italy

7. Horwitz V, Cohen M, Gore A, Gez R, Gutman H, Kadar T, Dachir S, Kendler S. Predicting clinical outcome of sulfur mustard induced ocular injury using machine learning model. Exp Eye Res 2023; 236:109671. PMID: 37776992. DOI: 10.1016/j.exer.2023.109671.
Abstract
Sight-threatening sulfur mustard (SM)-induced ocular injury presents specific symptoms at each clinical stage. The acute injury develops in all exposed eyes and may heal or deteriorate into chronic late pathology. Early detection of eyes at risk of developing late pathology may allow unique monitoring and specific treatments to be directed only to relevant cases. In this study, we evaluated a machine learning (ML) model for predicting the development of SM-induced late pathology from clinical data of the acute phase in the rabbit model. Clinical data from 166 rabbit eyes exposed to SM vapor were used retrospectively. The data included a comprehensive clinical evaluation of the cornea, eyelids, and conjunctiva using a semi-quantitative clinical score. A random forest classifier was trained to predict the development of corneal neovascularization four weeks post-ocular exposure to SM vapor using clinical scores recorded three weeks earlier. The overall accuracy in predicting the clinical outcome of SM-induced ocular injury was 73%. The accuracy in identifying eyes at risk of developing corneal neovascularization and eyes that would heal was 75% and 59%, respectively. The most important parameters for accurate prediction were conjunctival secretion and corneal opacity at one week and corneal erosions at 72 h post-exposure. This is the first demonstration of predicting the clinical outcome of SM-induced ocular injury from acute-injury parameters using ML. Although the prediction accuracy was limited, probably due to the small dataset, the analysis highlighted acute-injury parameters that are important for predicting SM-induced late pathology and revealed possible pathological mechanisms.
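A minimal sketch of this style of analysis: a random forest trained on semi-quantitative acute-phase scores, followed by a feature-importance ranking like the one the authors report. The feature names, score values, and outcomes below are invented placeholders, not the study's dataset; only the cohort size (166 eyes) is borrowed from the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Hypothetical semi-quantitative scores (0-3) for three acute-phase parameters
features = ["conjunctival_secretion_1w", "corneal_opacity_1w", "corneal_erosion_72h"]
X = rng.integers(0, 4, size=(166, len(features))).astype(float)  # 166 eyes, as in the study
# Invented outcome: neovascularization more likely when the first two scores are high
y = (X[:, 0] + X[:, 1] + rng.normal(size=166) > 3).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
# Rank acute-phase parameters by their impurity-based importance
ranked = sorted(zip(features, clf.feature_importances_), key=lambda t: -t[1])
```

Impurity-based importances like these are what lets a random forest point back at the acute-injury parameters that drive its predictions, which is the interpretive step the abstract highlights.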
Affiliation(s)
- Vered Horwitz
  - Department of Pharmacology, Israel Institute for Biological Research, Ness Ziona, 74100, Israel
- Maayan Cohen
  - Department of Pharmacology, Israel Institute for Biological Research, Ness Ziona, 74100, Israel
- Ariel Gore
  - Department of Pharmacology, Israel Institute for Biological Research, Ness Ziona, 74100, Israel
- Rellie Gez
  - Department of Pharmacology, Israel Institute for Biological Research, Ness Ziona, 74100, Israel
- Hila Gutman
  - Department of Pharmacology, Israel Institute for Biological Research, Ness Ziona, 74100, Israel
- Tamar Kadar
  - Department of Pharmacology, Israel Institute for Biological Research, Ness Ziona, 74100, Israel
- Shlomit Dachir
  - Department of Pharmacology, Israel Institute for Biological Research, Ness Ziona, 74100, Israel
- Shai Kendler
  - Department of Environmental Physics, Israel Institute for Biological Research, Ness Ziona, 74100, Israel; Faculty of Civil & Environmental Engineering, Technion-Israeli Institute of Technology, Haifa, 320000, Israel

8. Sahu A, Das PK, Meher S. Recent advancements in machine learning and deep learning-based breast cancer detection using mammograms. Phys Med 2023; 114:103138. PMID: 37914431. DOI: 10.1016/j.ejmp.2023.103138.
Abstract
OBJECTIVE Mammogram-based automatic breast cancer detection plays a primary role in accurate cancer diagnosis and treatment planning to save valuable lives. Mammography is a basic yet efficient test for breast cancer screening. Very few comprehensive surveys have analyzed methods for detecting breast cancer with mammograms. In this article, our objective is to give an overview of recent advancements in machine learning (ML)- and deep learning (DL)-based breast cancer detection systems. METHODS We give a structured framework to categorize mammogram-based breast cancer detection techniques. Several publicly available mammogram databases and different performance measures are also described. RESULTS After deliberate investigation, we find that most works classify breast tumors as either normal-abnormal or malignant-benign rather than into three classes. Furthermore, DL-based features are more significant than hand-crafted features, and transfer learning is preferred over classical DL techniques because it yields better performance on small datasets. SIGNIFICANCE AND CONCLUSION In this article, we summarize recent advancements in artificial intelligence (AI)-based breast cancer detection systems. Furthermore, a number of challenging issues and possible research directions are discussed, which will help researchers pursue further research in this field.
Affiliation(s)
- Adyasha Sahu
  - Department of Electronics and Communication Engineering, National Institute of Technology, Rourkela, Odisha, 769008, India
- Pradeep Kumar Das
  - School of Electronics Engineering (SENSE), VIT Vellore, Tamil Nadu, 632014, India
- Sukadev Meher
  - Department of Electronics and Communication Engineering, National Institute of Technology, Rourkela, Odisha, 769008, India

9. McGale J, Khurana S, Huang A, Roa T, Yeh R, Shirini D, Doshi P, Nakhla A, Bebawy M, Khalil D, Lotfalla A, Higgins H, Gulati A, Girard A, Bidard FC, Champion L, Duong P, Dercle L, Seban RD. PET/CT and SPECT/CT Imaging of HER2-Positive Breast Cancer. J Clin Med 2023; 12:4882. PMID: 37568284. PMCID: PMC10419459. DOI: 10.3390/jcm12154882.
Abstract
HER2 (Human Epidermal Growth Factor Receptor 2)-positive breast cancer is characterized by amplification of the HER2 gene and is associated with more aggressive tumor growth, increased risk of metastasis, and poorer prognosis when compared to other subtypes of breast cancer. HER2 expression is therefore a critical tumor feature that can be used to diagnose and treat breast cancer. Moving forward, advances in HER2 in vivo imaging, involving the use of techniques such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT), may allow for a greater role for HER2 status in guiding the management of breast cancer patients. This will apply both to patients who are HER2-positive and those who have limited-to-minimal immunohistochemical HER2 expression (HER2-low), with imaging ultimately helping clinicians determine the size and location of tumors. Additionally, PET and SPECT could help evaluate the effectiveness of HER2-targeted therapies, such as trastuzumab or pertuzumab for HER2-positive cancers, and specially modified antibody-drug conjugates (ADCs), such as trastuzumab deruxtecan, for HER2-low variants. This review will explore the current and future role of HER2 imaging in personalizing the care of patients diagnosed with breast cancer.
Affiliation(s)
- Jeremy McGale
  - Department of Radiology, Columbia University Medical Center, New York, NY 10032, USA
- Sakshi Khurana
  - Department of Radiology, Columbia University Medical Center, New York, NY 10032, USA
- Alice Huang
  - Department of Radiology, Columbia University Medical Center, New York, NY 10032, USA
- Tina Roa
  - Department of Radiology, Columbia University Medical Center, New York, NY 10032, USA
- Randy Yeh
  - Molecular Imaging and Therapy Service, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Dorsa Shirini
  - School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran 1985717443, Iran
- Parth Doshi
  - Campbell University School of Osteopathic Medicine, Lillington, NC 27546, USA
- Abanoub Nakhla
  - American University of the Caribbean School of Medicine, Cupecoy, Sint Maarten
- Maria Bebawy
  - Touro College of Osteopathic Medicine, Middletown, NY 10940, USA
- David Khalil
  - Campbell University School of Osteopathic Medicine, Lillington, NC 27546, USA
- Andrew Lotfalla
  - Touro College of Osteopathic Medicine, Middletown, NY 10940, USA
- Hayley Higgins
  - Touro College of Osteopathic Medicine, Middletown, NY 10940, USA
- Amit Gulati
  - Department of Internal Medicine, Maimonides Medical Center, New York, NY 11219, USA
- Antoine Girard
  - Department of Nuclear Medicine, CHU Amiens-Picardie, 80054 Amiens, France
- Francois-Clement Bidard
  - Department of Medical Oncology, Inserm CIC-BT 1428, Curie Institute, Paris Saclay University, UVSQ, 78035 Paris, France
- Laurence Champion
  - Department of Nuclear Medicine and Endocrine Oncology, Institut Curie, 92210 Saint-Cloud, France
  - Laboratory of Translational Imaging in Oncology, Paris Sciences et Lettres (PSL) Research University, Institut Curie, 91401 Orsay, France
- Phuong Duong
  - Department of Radiology, Columbia University Medical Center, New York, NY 10032, USA
- Laurent Dercle
  - Department of Radiology, Columbia University Medical Center, New York, NY 10032, USA
- Romain-David Seban
  - Department of Nuclear Medicine and Endocrine Oncology, Institut Curie, 92210 Saint-Cloud, France
  - Laboratory of Translational Imaging in Oncology, Paris Sciences et Lettres (PSL) Research University, Institut Curie, 91401 Orsay, France

10. Pacurari AC, Bhattarai S, Muhammad A, Avram C, Mederle AO, Rosca O, Bratosin F, Bogdan I, Fericean RM, Biris M, Olaru F, Dumitru C, Tapalaga G, Mavrea A. Diagnostic Accuracy of Machine Learning AI Architectures in Detection and Classification of Lung Cancer: A Systematic Review. Diagnostics (Basel) 2023; 13:2145. PMID: 37443539. DOI: 10.3390/diagnostics13132145.
Abstract
The application of artificial intelligence (AI) in diagnostic imaging has gained significant interest in recent years, particularly in lung cancer detection. This systematic review aims to assess the accuracy of machine learning (ML) AI algorithms in lung cancer detection, identify the ML architectures currently in use, and evaluate the clinical relevance of these diagnostic imaging methods. A systematic search of PubMed, Web of Science, Cochrane, and Scopus databases was conducted in February 2023, encompassing the literature published up until December 2022. The review included nine studies, comprising five case-control studies, three retrospective cohort studies, and one prospective cohort study. Various ML architectures were analyzed, including artificial neural network (ANN), entropy degradation method (EDM), probabilistic neural network (PNN), support vector machine (SVM), partially observable Markov decision process (POMDP), and random forest neural network (RFNN). The ML architectures demonstrated promising results in detecting and classifying lung cancer across different lesion types. The sensitivity of the ML algorithms ranged from 0.81 to 0.99, while the specificity varied from 0.46 to 1.00. The accuracy of the ML algorithms ranged from 77.8% to 100%. The AI architectures were successful in differentiating between malignant and benign lesions and detecting small-cell lung cancer (SCLC) and non-small-cell lung cancer (NSCLC). This systematic review highlights the potential of ML AI architectures in the detection and classification of lung cancer, with varying levels of diagnostic accuracy. Further studies are needed to optimize and validate these AI algorithms, as well as to determine their clinical relevance and applicability in routine practice.
Affiliation(s)
- Sanket Bhattarai
  - KIST Medical College, Faculty of General Medicine, Imadol Marg, Lalitpur 44700, Nepal
- Abdullah Muhammad
  - Islamic International Medical College, Faculty of General Medicine, 41 7th Ave, 46000 Islamabad, Pakistan
- Claudiu Avram
  - Doctoral School, "Victor Babes" University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania
- Alexandru Ovidiu Mederle
  - Department of Surgery, "Victor Babes" University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania
- Ovidiu Rosca
  - Department of Infectious Diseases, "Victor Babes" University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania
- Felix Bratosin
  - Doctoral School, "Victor Babes" University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania
  - Department of Infectious Diseases, "Victor Babes" University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania
- Iulia Bogdan
  - Doctoral School, "Victor Babes" University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania
  - Department of Infectious Diseases, "Victor Babes" University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania
- Roxana Manuela Fericean
  - Doctoral School, "Victor Babes" University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania
  - Department of Infectious Diseases, "Victor Babes" University of Medicine and Pharmacy Timisoara, 300041 Timisoara, Romania
- Marius Biris
  - Department of Obstetrics and Gynecology, "Victor Babes" University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square 2, 300041 Timisoara, Romania
- Flavius Olaru
  - Department of Obstetrics and Gynecology, "Victor Babes" University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square 2, 300041 Timisoara, Romania
- Catalin Dumitru
  - Department of Obstetrics and Gynecology, "Victor Babes" University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square 2, 300041 Timisoara, Romania
- Gianina Tapalaga
  - Department of Odontotherapy and Endodontics, Faculty of Dental Medicine, "Victor Babes" University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square 2, 300041 Timisoara, Romania
- Adelina Mavrea
  - Department of Internal Medicine I, Cardiology Clinic, "Victor Babes" University of Medicine and Pharmacy Timisoara, Eftimie Murgu Square 2, 300041 Timisoara, Romania

11. Malhotra R, Singh Saini B, Gupta S. An interpretable feature-learned model for overall survival classification of High-Grade Gliomas. Phys Med 2023; 110:102591. PMID: 37126962. DOI: 10.1016/j.ejmp.2023.102591.
Abstract
PURPOSE An accurate and well-defined overall survival prediction for High-Grade Gliomas (HGGs) is indispensable because of their high incidence and aggressiveness. Therefore, this paper presents a unified framework for fully automatic overall survival classification and its interpretation. METHODS AND MATERIALS Initially, a glioma detection model is used to detect the tumorous images. A pre-processing module is designed to extract 2D slices and create a survival data array for the classification network. The classification pipeline then integrates two separate pathways: a modality-specific and a modality-concatenated pathway. The modality-specific pathway runs three separate CNNs that extract rich predictive features from three sub-regions of HGGs (peritumoral edema, enhancing tumor, and necrosis) using three neuro-imaging modalities. In these pathways, the image vectors of the different modalities are also concatenated into the final fusion layer to avoid the loss of lower-level tumor features. Furthermore, to exploit intra-modality correlations, a modality-concatenated pathway is added to the classification pipeline. Experiments on the BraTS 2018 and BraTS 2019 benchmarks demonstrate that the proposed approach performs competitively in classifying HGG patients into three survival groups: short, mid, and long survivors. RESULTS The proposed approach achieves an overall classification accuracy, sensitivity, and specificity of about 0.998, 0.997, and 0.999, respectively, on the BraTS 2018 dataset; for BraTS 2019, these values are 1.000, 0.999, and 0.999. CONCLUSIONS The results indicate that the proposed model achieves the highest values of the evaluation metrics for overall survival classification of HGGs.
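The fusion-layer input described above, per-modality CNN features concatenated with the flattened modality image vectors so that lower-level information is preserved, can be sketched in shape terms as follows. The modality names and vector sizes here are assumptions for illustration, not the paper's actual dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
modalities = ["FLAIR", "T1ce", "T2"]  # assumed MRI modalities, one CNN pathway each

# Outputs of the three modality-specific CNN pathways (illustrative sizes)
cnn_features = {m: rng.normal(size=64) for m in modalities}
# Flattened image vectors, concatenated in as well so low-level detail survives
image_vectors = {m: rng.normal(size=128) for m in modalities}

fusion_input = np.concatenate(
    [cnn_features[m] for m in modalities] + [image_vectors[m] for m in modalities]
)
# fusion_input would feed a dense softmax head over {short, mid, long} survival
```

The design point is that high-level pathway features alone can discard fine tumor detail; appending the raw image vectors at the fusion stage gives the classifier both levels of representation at once.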
Collapse
Affiliation(s)
- Radhika Malhotra
- Department of Electronics and Communication, Dr B R Ambedkar National Institute of Technology, Jalandhar, Punjab 144011, India.
| | - Barjinder Singh Saini
- Department of Electronics and Communication, Dr B R Ambedkar National Institute of Technology, Jalandhar, Punjab 144011, India
| | - Savita Gupta
- Department of Computer Science and Engg., UIET, Sector 25, Panjab University, Chandigarh 160023, India
| |
Collapse
|
12
|
Yi Z, Wang J, Li M. Deep image and feature prior algorithm based on U-ConformerNet structure. Phys Med 2023; 107:102535. [PMID: 36764130 DOI: 10.1016/j.ejmp.2023.102535] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Revised: 01/04/2023] [Accepted: 01/25/2023] [Indexed: 02/10/2023] Open
Abstract
PURPOSE The reconstruction performance of the deep image prior (DIP) approach is limited by the conventional convolutional layer structure, making its potential hard to exploit. To improve reconstruction quality and suppress artifacts, we propose a better-performing DIP algorithm and verify its superiority against recent work. METHODS We construct a new U-ConformerNet structure as the DIP algorithm's network, replacing the traditional convolutional U-net structure, and introduce the LPIPS deep-network-based feature-distance regularization method. Our algorithm can switch between supervised and unsupervised modes at will to meet different needs. RESULTS Reconstruction was performed on a low-dose CT dataset (LoDoPaB). Our algorithm attained a PSNR of more than 35 dB under unsupervised conditions and greater than 36 dB under supervised conditions, both better than the performance of DIP-TV. Furthermore, with the help of deep networks, the accuracy of this method is positively correlated with the quality of the prior image. In terms of noise eradication and artifact suppression, the DIP algorithm with the U-ConformerNet structure outperforms the standard convolution-based DIP method. CONCLUSIONS Experimental verification shows that, in unsupervised mode, the algorithm improves the output PSNR by at least 2-3 dB compared with the DIP-TV algorithm (proposed in 2020). In supervised mode, our algorithm approaches the performance of state-of-the-art end-to-end deep learning algorithms.
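The PSNR figures quoted above follow the standard definition. A minimal numpy sketch (the unit data range is an assumption; CT reconstructions would use their own intensity range):

```python
import numpy as np

def psnr(reference: np.ndarray, reconstruction: np.ndarray,
         data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means a closer reconstruction."""
    mse = np.mean((reference.astype(np.float64)
                   - reconstruction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```

For example, a uniform error of 0.1 on a unit-range image gives an MSE of 0.01 and hence a PSNR of 20 dB.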
Collapse
Affiliation(s)
- Zhengming Yi
- The State Key Laboratory of Refractories and Metallurgy, Wuhan University of Science and Technology, Wuhan 430081, Hubei, China; Key Laboratory for Ferrous Metallurgy and Resources Utilization of Metallurgy and Resources Utilization of Education, Wuhan University of Science and Technology, Wuhan 430081, Hubei, China
| | - Junjie Wang
- The State Key Laboratory of Refractories and Metallurgy, Wuhan University of Science and Technology, Wuhan 430081, Hubei, China; Key Laboratory for Ferrous Metallurgy and Resources Utilization of Metallurgy and Resources Utilization of Education, Wuhan University of Science and Technology, Wuhan 430081, Hubei, China
| | - Mingjie Li
- The State Key Laboratory of Refractories and Metallurgy, Wuhan University of Science and Technology, Wuhan 430081, Hubei, China; Key Laboratory for Ferrous Metallurgy and Resources Utilization of Metallurgy and Resources Utilization of Education, Wuhan University of Science and Technology, Wuhan 430081, Hubei, China.
| |
Collapse
|
13
|
EBHI: A new Enteroscope Biopsy Histopathological H&E Image Dataset for image classification evaluation. Phys Med 2023; 107:102534. [PMID: 36804696 DOI: 10.1016/j.ejmp.2023.102534] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Revised: 08/30/2022] [Accepted: 01/25/2023] [Indexed: 02/19/2023] Open
Abstract
BACKGROUND AND PURPOSE Colorectal cancer has become the third most common cancer worldwide, accounting for approximately 10% of cancer patients. Early detection of the disease is important for the treatment of colorectal cancer patients. Histopathological examination is the gold standard for screening colorectal cancer. However, the current lack of histopathological image datasets of colorectal cancer, especially of enteroscope biopsies, hinders the accurate evaluation of computer-aided diagnosis techniques. A multi-category colorectal cancer dataset is therefore needed on which medical image classification methods can be tested for high classification accuracy and strong robustness. METHODS A new publicly available Enteroscope Biopsy Histopathological H&E Image Dataset (EBHI) is published in this paper. To demonstrate the effectiveness of the EBHI dataset, we evaluated several machine learning, convolutional neural network, and novel transformer-based classifiers, using images at 200× magnification. RESULTS Experimental results show that deep learning methods perform well on the EBHI dataset. Classical machine learning methods achieve a maximum accuracy of 76.02%, while deep learning methods achieve a maximum accuracy of 95.37%. CONCLUSION To the best of our knowledge, EBHI is the first publicly available colorectal histopathology enteroscope biopsy dataset, with four magnifications and five types of images of tumor differentiation stages, totaling 5532 images. We believe that EBHI could attract researchers to explore new classification algorithms for the automated diagnosis of colorectal cancer, which could help physicians and patients in clinical settings.
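Classical machine learning baselines of the kind benchmarked on EBHI typically pair hand-crafted features with a simple classifier. A minimal sketch, where the grey-level histogram features and nearest-centroid rule are chosen purely for illustration and are not the paper's actual methods:

```python
import numpy as np

def histogram_features(image: np.ndarray, bins: int = 16) -> np.ndarray:
    """Normalised grey-level histogram as a simple hand-crafted feature vector."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def nearest_centroid_predict(features: np.ndarray, centroids: dict) -> str:
    """Assign the class whose mean feature vector is closest (Euclidean)."""
    return min(centroids, key=lambda c: np.linalg.norm(features - centroids[c]))
```

Real baselines on histopathology would use richer texture/colour features, but the evaluation loop has this shape: extract features per image, fit a classifier, report accuracy.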
Collapse
|
14
|
Hertel M, Liu C, Song H, Golatta M, Kappler S, Nanke R, Radicke M, Maier A, Rose G. Clinical prototype implementation enabling an improved day-to-day mammography compression. Phys Med 2023; 106:102524. [PMID: 36641900 DOI: 10.1016/j.ejmp.2023.102524] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/20/2022] [Revised: 12/22/2022] [Accepted: 01/02/2023] [Indexed: 01/15/2023] Open
Abstract
PURPOSE In mammography, breast compression is achieved by lowering a compression paddle onto the breast. Despite the directive that compression is needed, there is no concrete guideline on its execution. To estimate the degree of compression, current mammography units provide only compression force and breast thickness as parameters. Radiographers may therefore determine the level of compression mainly from compression force and apply the same value to all breast sizes, exposing smaller breasts to higher pressure. This results in a highly varying perception of discomfort, or even pain, during the procedure, depending on breast size. METHODS To overcome this imbalance, current research suggests that pressure may be a more suitable parameter for achieving uniform compression across breast sizes. To utilize pressure, the contact area between breast and compression paddle must be determined. In this paper, we present an easy-to-implement prototype enabling a real-time pressure-based measure without the need for direct patient contact. Using an optical camera, the contact area between the breast and the compression paddle is automatically segmented by a deep learning model. RESULTS The model provides a mean pixel accuracy of 96.7% (SD: 2.3%), a mean frequency-weighted intersection over union of 88.5% (SD: 6.3%), and a Dice score of 93.6% (SD: 2.2%). The pressure display is updated more than five times per second, which enables its use in clinical routine to set the compression level. CONCLUSION This prototype could help guide mammography procedures towards an improved day-to-day breast compression routine.
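Pressure here is simply force divided by the segmented contact area, and the reported Dice score measures overlap between predicted and reference masks. A minimal sketch of both computations (function names and the kPa unit choice are assumptions, not from the paper):

```python
import numpy as np

def compression_pressure_kpa(force_newton: float, contact_mask: np.ndarray,
                             pixel_area_mm2: float) -> float:
    """Mean pressure (kPa) = compression force / segmented contact area."""
    area_m2 = contact_mask.sum() * pixel_area_mm2 * 1e-6  # mm^2 -> m^2
    return force_newton / area_m2 / 1000.0                # Pa -> kPa

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())
```

For instance, 100 N applied over a 100 cm² segmented contact area corresponds to 10 kPa, which is the kind of value a pressure display would show in real time.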
Collapse
Affiliation(s)
- Madeleine Hertel
- Siemens Healthcare GmbH, 91301 Forchheim, Germany; Institute for Medical Engineering and Research Campus STIMULATE, Otto-von-Guericke-University, 39106 Magdeburg, Germany.
| | - Chang Liu
- Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nuremberg, 91058 Erlangen, Germany.
| | - Haobo Song
- Siemens Healthcare GmbH, 91301 Forchheim, Germany.
| | - Michael Golatta
- University Breast Unit, Department of Gynecology and Obstetrics, 69120 Heidelberg, Germany.
| | | | - Ralf Nanke
- Siemens Healthcare GmbH, 91301 Forchheim, Germany.
| | | | - Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander University Erlangen-Nuremberg, 91058 Erlangen, Germany.
| | - Georg Rose
- Institute for Medical Engineering and Research Campus STIMULATE, Otto-von-Guericke-University, 39106 Magdeburg, Germany.
| |
Collapse
|
15
|
Thong LT, Chou HS, Chew HSJ, Lau Y. Diagnostic test accuracy of artificial intelligence-based imaging for lung cancer screening: A systematic review and meta-analysis. Lung Cancer 2023; 176:4-13. [PMID: 36566582 DOI: 10.1016/j.lungcan.2022.12.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2022] [Revised: 12/04/2022] [Accepted: 12/08/2022] [Indexed: 12/23/2022]
Abstract
BACKGROUND Lung cancer is the principal cause of cancer-related deaths worldwide. Early detection of lung cancer through screening is indispensable to reduce its high morbidity and mortality. Artificial intelligence (AI) is widely utilised in healthcare, including in the assessment of medical images. A growing number of reviews have studied the application of AI in lung cancer screening, but no overarching meta-analysis has examined the diagnostic test accuracy (DTA) of AI-based imaging for lung cancer screening. OBJECTIVE To systematically review the DTA of AI-based imaging for lung cancer screening. METHODS PubMed, EMBASE, Cochrane Library, CINAHL, IEEE Xplore, Web of Science, ACM Digital Library, Scopus, PsycINFO, and ProQuest Dissertations and Theses were searched from inception to the search date. Studies published in English that evaluated the performance of AI-based imaging for lung cancer screening were included. Two independent reviewers screened titles and abstracts and used the Quality Assessment of Diagnostic Accuracy Studies-2 tool to appraise the quality of the selected studies. The Grading of Recommendations Assessment, Development, and Evaluation approach for diagnostic tests was used to assess the certainty of the evidence. RESULTS Twenty-six studies with 150,721 imaging samples were included. A hierarchical summary receiver operating characteristic model used for meta-analysis demonstrated a pooled sensitivity of 94.6% (95% CI: 91.4% to 96.7%) and a pooled specificity of 93.6% (95% CI: 88.5% to 96.6%) for AI-based imaging for lung cancer screening. Subgroup analyses revealed similar results across types of AI, regions, data sources, and years of publication, but the overall quality of evidence was very low. CONCLUSION AI-based imaging could effectively detect lung cancer and be incorporated into lung cancer screening programs. Further high-quality DTA studies on large lung cancer screening populations are required to validate AI's role in early lung cancer detection.
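The pooled estimates above come from a hierarchical summary ROC model, but the per-study inputs are ordinary 2×2-table rates. A minimal sketch of those per-study rates (the actual pooling requires the hierarchical model, which is not reproduced here):

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true cancers that the screening model flags."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of cancer-free cases that the model correctly clears."""
    return tn / (tn + fp)
```

A study reporting 90 true positives, 10 false negatives, 85 true negatives and 15 false positives contributes a sensitivity of 0.90 and a specificity of 0.85 to the meta-analysis.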
Collapse
Affiliation(s)
- Lay Teng Thong
- Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore.
| | - Hui Shan Chou
- Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore.
| | - Han Shi Jocelyn Chew
- Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore.
| | - Ying Lau
- Alice Lee Centre for Nursing Studies, Yong Loo Lin School of Medicine, National University of Singapore, Singapore.
| |
Collapse
|
16
|
Wu Y, Liu J, White GM, Deng J. Image-based motion artifact reduction on liver dynamic contrast enhanced MRI. Phys Med 2023; 105:102509. [PMID: 36565556 DOI: 10.1016/j.ejmp.2022.12.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/12/2021] [Revised: 10/13/2022] [Accepted: 12/12/2022] [Indexed: 12/24/2022] Open
Abstract
Liver MRI images often suffer from degraded quality due to ghosting or blurring artifacts caused by patient respiratory or bulk motion. In this study, we developed a two-stage deep learning model to reduce motion artifacts on dynamic contrast enhanced (DCE) liver MRIs. The stage-I network used a deep residual network with a densely connected multi-resolution block (DRN-DCMB) to remove most motion artifacts. The stage-II network applied a generative adversarial network (GAN) with perceptual loss compensation to preserve image structural features. The stage-I network served as the generator of the GAN, and its pretrained stage-I parameters were further updated via backpropagation during stage-II training. The stage-I network was trained using small image patches with simulated motion artifacts, including image-space rotational and translational motion, and k-space-based centric and interleaved linear, sinusoidal, and rotational motion, to mimic liver motion patterns. The stage-II network was trained on full-size images with the same types of simulated motion. Liver DCE-MRI image volumes without obvious motion artifacts from 10 patients were used for training: 1020 images from 8 patients for training and 240 images from 2 patients for validation. Finally, the whole two-stage model was tested on images with simulated motion (312 clean images from 5 test patients) and on patient images with real motion artifacts (28 motion images from 12 patients). The resulting images after two-stage processing demonstrated reduced motion artifacts while preserving anatomic details without blurring, with an SSIM of 0.935 ± 0.092, an MSE of 60.7 ± 9.0 × 10⁻³, and a PSNR of 32.054 ± 2.219.
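A rigid in-plane translation during acquisition corrupts the affected k-space lines with a linear phase ramp, which is one of the simulation strategies named above. A minimal numpy sketch of that idea (the sampling layout and function signature are assumptions, not the paper's implementation):

```python
import numpy as np

def simulate_translation_artifact(image: np.ndarray, shift_px: float,
                                  corrupted_rows: slice) -> np.ndarray:
    """Apply the linear k-space phase of a rigid x-translation to a band of
    phase-encode lines, then reconstruct the (ghosted) magnitude image."""
    ny, nx = image.shape
    kspace = np.fft.fftshift(np.fft.fft2(image))
    kx = np.fft.fftshift(np.fft.fftfreq(nx))        # spatial frequency, cycles/pixel
    phase = np.exp(-2j * np.pi * kx * shift_px)     # shift theorem along x
    kspace[corrupted_rows, :] *= phase[np.newaxis, :]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))
```

With a zero shift the round trip returns the original image; a nonzero shift applied to only part of k-space produces the inconsistency that appears as ghosting, giving paired clean/corrupted training data.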
Collapse
Affiliation(s)
- Yunan Wu
- Department of Electrical Computer Engineering, Northwestern University, 633 Clark Street, Evanston, IL 60208, USA; Department of Diagnostic Radiology, Rush University Medical Center, 1653 W. Congress Pkwy, Jelke Ste 181, Chicago, IL 60612, USA.
| | - Junchi Liu
- Medical Imaging Research Center and Department of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, IL 60616, USA.
| | - Gregory M White
- Department of Diagnostic Radiology, Rush University Medical Center, 1653 W. Congress Pkwy, Jelke Ste 181, Chicago, IL 60612, USA.
| | - Jie Deng
- Department of Diagnostic Radiology, Rush University Medical Center, 1653 W. Congress Pkwy, Jelke Ste 181, Chicago, IL 60612, USA; Department of Radiation Oncology, UT Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX 75235, USA.
| |
Collapse
|
17
|
Beyond Imaging and Genetic Signature in Glioblastoma: Radiogenomic Holistic Approach in Neuro-Oncology. Biomedicines 2022; 10:biomedicines10123205. [PMID: 36551961 PMCID: PMC9775324 DOI: 10.3390/biomedicines10123205] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Revised: 12/02/2022] [Accepted: 12/05/2022] [Indexed: 12/14/2022] Open
Abstract
Glioblastoma (GBM) is a malignant brain tumor exhibiting rapid and infiltrative growth, with less than 10% of patients surviving over 5 years despite aggressive and multimodal treatments. The poor prognosis and the lack of effective pharmacological treatments are attributable to the remarkable histological and molecular heterogeneity of GBM, which has, to date, frustrated precision oncology and targeted therapies. Identification of molecular biomarkers is a paradigm for comprehensive and tailored treatments; nevertheless, biopsy sampling has proved invasive and limited. Radiogenomics is an emerging translational field of research that studies the correlation between radiographic signatures and underlying gene expression. Although still under development and not yet incorporated into routine clinical practice, it promises to be a useful non-invasive tool for future personalized/adaptive neuro-oncology. This review provides an up-to-date summary of recent advances in the use of magnetic resonance imaging (MRI) radiogenomics for assessing molecular markers of interest in GBM with respect to prognosis and response to treatment, and for monitoring recurrence, while also providing insights into the potential efficacy of such an approach for survival prognostication. Despite high sensitivity and specificity in almost all studies, the accuracy, reproducibility and clinical value of radiomic features remain the Achilles' heel of this nascent tool. Looking to the future, investigators' efforts should be directed towards standardization and a disciplined approach to data collection, algorithms, and statistical analysis.
Collapse
|
18
|
Urso L, Manco L, Castello A, Evangelista L, Guidi G, Castellani M, Florimonte L, Cittanti C, Turra A, Panareo S. PET-Derived Radiomics and Artificial Intelligence in Breast Cancer: A Systematic Review. Int J Mol Sci 2022; 23:13409. [PMID: 36362190 PMCID: PMC9653918 DOI: 10.3390/ijms232113409] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2022] [Revised: 10/27/2022] [Accepted: 10/28/2022] [Indexed: 08/13/2023] Open
Abstract
Breast cancer (BC) is a heterogeneous malignancy that still represents the second leading cause of cancer-related death among women worldwide. Because of this heterogeneity, the correct identification of biomarkers able to predict tumor biology, and of the best treatment approaches, remains far from settled. Although molecular imaging with positron emission tomography/computed tomography (PET/CT) has improved the characterization of BC, these methods are not free from drawbacks. In recent years, radiomics and artificial intelligence (AI) have played an important role in detecting features of medical images normally unseen by the human eye. The present review summarizes the current status of radiomics and AI in different clinical settings of BC. A systematic search of PubMed, Web of Science and Scopus was conducted, including all articles published in English that explored radiomics and AI analyses of PET/CT images in BC. Several studies have demonstrated the potential of such features for staging and prognosis, as well as for the assessment of biological characteristics. Radiomics and AI features appear promising in different clinical settings of BC, although larger prospective trials are needed to confirm and standardize this evidence.
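Radiomics pipelines like those reviewed start from quantitative descriptors of the ROI intensities. A minimal sketch of a few first-order features (the feature set, bin count, and function name are illustrative choices, not taken from any reviewed study):

```python
import numpy as np

def first_order_features(roi: np.ndarray, bins: int = 32) -> dict:
    """A few first-order radiomics features of the intensities inside an ROI."""
    x = roi.ravel().astype(np.float64)
    p, _ = np.histogram(x, bins=bins)
    p = p / p.sum()
    p = p[p > 0]                                   # drop empty bins for entropy
    return {
        "mean": x.mean(),
        "std": x.std(),
        "skewness": np.mean((x - x.mean()) ** 3) / x.std() ** 3,
        "entropy": -np.sum(p * np.log2(p)),        # bits
    }
```

In a full pipeline, hundreds of such features (first-order, shape, texture) feed a model that predicts staging, prognosis, or biological characteristics.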
Collapse
Affiliation(s)
- Luca Urso
- Department of Translational Medicine, University of Ferrara, Via Aldo Moro 8, 44124 Ferrara, Italy
- Nuclear Medicine Unit, Oncological Medical and Specialist Department, University Hospital of Ferrara, 44124 Cona, Italy
| | - Luigi Manco
- Medical Physics Unit, Azienda USL of Ferrara, 44124 Ferrara, Italy
- Medical Physics Unit, University Hospital of Ferrara, 44124 Cona, Italy
| | - Angelo Castello
- Nuclear Medicine Unit, Fondazione IRCCS Ca’ Granda, Ospedale Maggiore Policlinico, 20122 Milan, Italy
| | - Laura Evangelista
- Department of Medicine DIMED, University of Padua, 35128 Padua, Italy
| | - Gabriele Guidi
- Medical Physics Unit, University Hospital of Modena, 41125 Modena, Italy
| | - Massimo Castellani
- Nuclear Medicine Unit, Fondazione IRCCS Ca’ Granda, Ospedale Maggiore Policlinico, 20122 Milan, Italy
| | - Luigia Florimonte
- Nuclear Medicine Unit, Fondazione IRCCS Ca’ Granda, Ospedale Maggiore Policlinico, 20122 Milan, Italy
| | - Corrado Cittanti
- Department of Translational Medicine, University of Ferrara, Via Aldo Moro 8, 44124 Ferrara, Italy
- Nuclear Medicine Unit, Oncological Medical and Specialist Department, University Hospital of Ferrara, 44124 Cona, Italy
| | - Alessandro Turra
- Medical Physics Unit, University Hospital of Ferrara, 44124 Cona, Italy
| | - Stefano Panareo
- Nuclear Medicine Unit, Oncology and Haematology Department, University Hospital of Modena, 41125 Modena, Italy
| |
Collapse
|
19
|
Xu HL, Gong TT, Liu FH, Chen HY, Xiao Q, Hou Y, Huang Y, Sun HZ, Shi Y, Gao S, Lou Y, Chang Q, Zhao YH, Gao QL, Wu QJ. Artificial intelligence performance in image-based ovarian cancer identification: A systematic review and meta-analysis. EClinicalMedicine 2022; 53:101662. [PMID: 36147628 PMCID: PMC9486055 DOI: 10.1016/j.eclinm.2022.101662] [Citation(s) in RCA: 26] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Revised: 08/25/2022] [Accepted: 08/30/2022] [Indexed: 11/28/2022] Open
Abstract
BACKGROUND Accurate identification of ovarian cancer (OC) is of paramount importance for clinical treatment success. Artificial intelligence (AI) is a potentially reliable assistant for medical image recognition. For the first time, we systematically review articles on the diagnostic performance of AI in OC from medical imaging. METHODS The Medline, Embase, IEEE, PubMed, Web of Science, and Cochrane Library databases were searched for related studies published up to August 1, 2022. Inclusion criteria were studies that developed or used AI algorithms in the diagnosis of OC from medical images. Binary diagnostic accuracy data were extracted to derive the outcomes of interest: sensitivity (SE), specificity (SP), and Area Under the Curve (AUC). The study was registered with PROSPERO (CRD42022324611). FINDINGS Thirty-four eligible studies were identified, of which twenty-eight were included in the meta-analysis, with a pooled SE of 88% (95% CI: 85-90%), SP of 85% (82-88%), and AUC of 0.93 (0.91-0.95). Analysis by algorithm revealed a pooled SE of 89% (85-92%) and SP of 88% (82-92%) for machine learning, and a pooled SE of 88% (84-91%) and SP of 84% (80-87%) for deep learning. Acceptable diagnostic performance was demonstrated in subgroup analyses stratified by imaging modality (Ultrasound, Magnetic Resonance Imaging, or Computed Tomography), sample size (≤300 or >300), AI algorithms versus clinicians, year of publication (before or after 2020), geographical distribution (Asia or non-Asia), and risk-of-bias level (≥3 or <3 domains at low risk). INTERPRETATION AI algorithms exhibited favorable performance for the diagnosis of OC through medical imaging. More rigorous reporting standards that address the specific challenges of AI research could improve future studies. FUNDING This work was supported by the Natural Science Foundation of China (No. 82073647 to Q-JW and No. 82103914 to T-TG), the LiaoNing Revitalization Talents Program (No. XLYC1907102 to Q-JW), and the 345 Talent Project of Shengjing Hospital of China Medical University (No. M0268 to Q-JW and No. M0952 to T-TG).
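Pooled sensitivity and specificity like those reported above can be converted into likelihood ratios, which describe how much a positive or negative AI call shifts the pre-test odds of disease. A minimal sketch of that standard conversion (it is not part of the review itself):

```python
def likelihood_ratios(se: float, sp: float) -> tuple:
    """Positive and negative likelihood ratios from sensitivity/specificity.
    LR+ = SE / (1 - SP); LR- = (1 - SE) / SP."""
    lr_pos = se / (1.0 - sp)
    lr_neg = (1.0 - se) / sp
    return lr_pos, lr_neg
```

Applied to the pooled SE of 88% and SP of 85%, this gives an LR+ of about 5.9 and an LR- of about 0.14, i.e., a positive result raises the odds of OC roughly sixfold.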
Collapse
Key Words
- AI, Artificial intelligence
- AUC, Area Under the Curve
- Artificial intelligence
- CT, Computed Tomography
- DL, Deep learning
- ML, Machine learning
- MRI, Magnetic Resonance Imaging
- Medical imaging
- Meta-analysis
- OC, Ovarian cancer
- Ovarian cancer
- SE, Sensitivity
- SP, Specificity
- US, Ultrasound
- XAI, Explainable artificial intelligence
Collapse
Affiliation(s)
- He-Li Xu
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Clinical Research Center, Shengjing Hospital of China Medical University, Shenyang, China
- Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
| | - Ting-Ting Gong
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
| | - Fang-Hua Liu
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Clinical Research Center, Shengjing Hospital of China Medical University, Shenyang, China
- Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
| | - Hong-Yu Chen
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Clinical Research Center, Shengjing Hospital of China Medical University, Shenyang, China
- Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
| | - Qian Xiao
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
| | - Yang Hou
- Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China
| | - Ying Huang
- Department of Ultrasound, Shengjing Hospital of China Medical University, Shenyang, China
| | - Hong-Zan Sun
- Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China
| | - Yu Shi
- Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China
| | - Song Gao
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
| | - Yan Lou
- Department of Intelligent Medicine, China Medical University, China
| | - Qing Chang
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Clinical Research Center, Shengjing Hospital of China Medical University, Shenyang, China
- Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
| | - Yu-Hong Zhao
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Clinical Research Center, Shengjing Hospital of China Medical University, Shenyang, China
- Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
| | - Qing-Lei Gao
- National Clinical Research Center for Obstetrics and Gynecology, Cancer Biology Research Centre (Key Laboratory of the Ministry of Education) and Department of Gynecology and Obstetrics, Tongji Hospital, Wuhan, China
| | - Qi-Jun Wu
- Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Clinical Research Center, Shengjing Hospital of China Medical University, Shenyang, China
- Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
- Corresponding author at: Department of Clinical Epidemiology, Department of Obstetrics and Gynecology, Clinical Research Center, Shengjing Hospital of China Medical University, Address: No. 36, San Hao Street, Shenyang, Liaoning 110004, PR China.
| |
Collapse
|
20
|
Ho PS, Hwang YS, Tsai HY. Machine learning framework for automatic image quality evaluation involving a mammographic American College of Radiology phantom. Phys Med 2022; 102:1-8. [PMID: 36030664 DOI: 10.1016/j.ejmp.2022.08.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Revised: 07/20/2022] [Accepted: 08/03/2022] [Indexed: 10/15/2022] Open
Abstract
PURPOSE The image quality (IQ) of mammographic images is essential for diagnosis, but the quality assurance process for radiological equipment is subjective. We therefore aimed to design an automatic IQ evaluation architecture, based on a support vector machine (SVM), dedicated to evaluating images of a mammographic American College of Radiology (ACR) phantom. METHODS A total of 461 phantom images were acquired using mammographic equipment from 10 vendors. Two experienced medical physicists scored the images by consensus. The phantom datasets were randomly divided into training (80%) and testing (20%) sets. In each phantom image, the objects (6 fibers, 5 speck groups, and 5 masses) were detected using bounding boxes, then cropped and divided into 16 pattern images. We identified 159 features for each pattern image. Manual scores were used to assign one of 3 labels (visible, invisible, and semivisible) to each pattern image. Multiclass SVM models were trained with the 3 types of patterns. Sub-datasets were randomly selected at 10% increments of the total dataset to determine a minimal effective training subset size for the automatic framework. A feature combination test and an analysis of variance were performed to identify the most influential features. RESULTS The accuracy of the model in evaluating fiber, speck, and mass patterns was 90.2%, 98.2%, and 88.9%, respectively. Performance was equivalent when the sample size was at least 138 (30% of 461) phantom images. The most influential feature was the position feature. CONCLUSIONS The proposed SVM-based automatic IQ evaluation framework applied to a mammographic ACR phantom accurately matched manual evaluations.
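ACR phantom evaluation aggregates per-object visibility labels into a score per object type and compares it against fixed minima. A minimal sketch of that aggregation (the half-point convention for partially visible objects and the pass thresholds reflect common ACR mammography accreditation practice and are assumptions here, not the paper's labels):

```python
def phantom_score(visible: int, semivisible: int) -> float:
    """Object-group score: full point per visible object, half per
    partially visible object (a common counting convention, assumed)."""
    return visible + 0.5 * semivisible

def passes_acr(fiber_score: float, speck_score: float, mass_score: float) -> bool:
    """Typical ACR mammography accreditation minima: 4 fibers, 3 speck
    groups, 3 masses (thresholds assumed for illustration)."""
    return fiber_score >= 4.0 and speck_score >= 3.0 and mass_score >= 3.0
```

The SVM in the abstract replaces the subjective visible/semivisible/invisible judgment; the pass/fail aggregation on top of it stays the same.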
Collapse
Affiliation(s)
- Pei-Shan Ho
- Department of Engineering and System Science, National Tsing Hua University, No. 101, Section 2, Kuang-Fu Road, Hsinchu 30013, Taiwan
| | - Yi-Shuan Hwang
- Department of Medical Imaging and Intervention, New Taipei City Municipal TuCheng Hospital, New Taipei City 236, Taiwan; Department of Medical Imaging & Radiological Sciences, Chang Gung University, No. 259 Wen-Hwa 1st Road, Kwei-Shan, Taoyuan 333, Taiwan
| | - Hui-Yu Tsai
- Department of Engineering and System Science, National Tsing Hua University, No. 101, Section 2, Kuang-Fu Road, Hsinchu 30013, Taiwan; Institute of Nuclear Engineering and Science, National Tsing Hua University, No. 101, Section 2, Kuang-Fu Road, Hsinchu 30013, Taiwan.
| |
Collapse
|
21
|
Saw SN, Ng KH. Current challenges of implementing artificial intelligence in medical imaging. Phys Med 2022; 100:12-17. [PMID: 35714523 DOI: 10.1016/j.ejmp.2022.06.003] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/06/2021] [Revised: 04/26/2022] [Accepted: 06/11/2022] [Indexed: 12/31/2022] Open
Abstract
The idea of using artificial intelligence (AI) in medical practice has gained vast interest due to its potential to revolutionise healthcare systems. However, only a few AI algorithms are used in practice, owing to uncertainties in the systems and a long list of ethical and legal concerns. This paper provides an overview of current AI challenges in medical imaging, with the ultimate aim of fostering better and more effective communication among stakeholders to encourage AI technology development. We identify four main challenges in implementing AI in medical imaging, illustrated with the consequences and past events that follow when these problems are not mitigated. The first is the creation of robust AI algorithms that are fair, trustworthy and transparent. The second is data governance, in which best practices in data sharing must be established to promote trust and protect patients' privacy. The third and fourth are for stakeholders, such as governments, technology companies and hospital management, to reach consensus on trustworthy AI policies and on regulatory frameworks that support, encourage and spur innovation in digital AI healthcare technology. Lastly, we discuss the efforts of organizations such as the World Health Organisation (WHO), American College of Radiology (ACR), European Society of Radiology (ESR) and Radiological Society of North America (RSNA), which are already actively pursuing ethical developments in AI. These efforts will eventually overcome the hurdles, and the deployment of AI-driven healthcare applications in clinical practice will become a reality, leading to better healthcare services and outcomes.
Collapse
Affiliation(s)
- Shier Nee Saw
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia.
| | - Kwan Hoong Ng
- Department of Biomedical Imaging, Universiti Malaya, 50603 Kuala Lumpur, Malaysia; Department of Medical Imaging and Radiological Sciences, College of Health Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan
| |
Collapse
|
22
|
Intracerebral hemorrhage detection on computed tomography images using a residual neural network. Phys Med 2022; 99:113-119. [PMID: 35671679 DOI: 10.1016/j.ejmp.2022.05.015] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/08/2022] [Revised: 04/23/2022] [Accepted: 05/26/2022] [Indexed: 01/31/2023] Open
Abstract
Intracerebral hemorrhage (ICH) is a critical medical condition with a high mortality rate, produced by the rupture of a blood vessel inside the skull. ICH can lead to paralysis and even death, so it is considered a clinically dangerous disease that must be treated quickly. Thanks to advances in machine learning and in the computing power of today's microprocessors, deep learning has become an extremely valuable tool for detecting diseases, in particular from medical images. In this work, we are interested in differentiating computed tomography (CT) images of healthy brains from those with ICH using ResNet-18, a deep residual convolutional neural network. In addition, the gradient-weighted class activation mapping (Grad-CAM) technique was employed to visually explore and understand the network's decisions. The generalizability of the detector was assessed through 100-iteration Monte Carlo cross-validation (80% of the data for training and 20% for testing). On a database of 200 brain CT images (100 with ICH and 100 without), the detector yielded, on average, 95.93% accuracy, 96.20% specificity, 95.65% sensitivity, 96.40% precision, and 95.91% F1-score, with an average computing time of 165.90 s to train the network (on 160 images) and 1.17 s to test it on 40 CT images. These results are comparable with the state of the art while using a simpler approach with a lower computational load. Our detector could assist physicians in their medical decisions, in resource optimization, and in reducing the time and error in the diagnosis of ICH.
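The evaluation protocol described above (100-iteration Monte Carlo cross-validation with repeated random 80/20 splits, averaging sensitivity, specificity and precision) can be sketched as follows. This is a minimal illustration only: the paper trains a ResNet-18 on CT images, whereas the sketch uses synthetic feature vectors and a logistic-regression stand-in so that the split-and-average logic is runnable in isolation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import StratifiedShuffleSplit

rng = np.random.default_rng(0)
# Stand-in for image-derived features: 200 "scans" (100 healthy, 100 ICH).
X = np.vstack([rng.normal(0.0, 1.0, (100, 16)), rng.normal(1.0, 1.0, (100, 16))])
y = np.array([0] * 100 + [1] * 100)

# Monte Carlo cross-validation: 100 repeated stratified 80/20 train/test splits.
splitter = StratifiedShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
sens, spec, prec = [], [], []
for train_idx, test_idx in splitter.split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    y_pred = clf.predict(X[test_idx])
    sens.append(recall_score(y[test_idx], y_pred, pos_label=1))  # sensitivity
    spec.append(recall_score(y[test_idx], y_pred, pos_label=0))  # specificity
    prec.append(precision_score(y[test_idx], y_pred))

print(f"sensitivity {np.mean(sens):.3f}, specificity {np.mean(spec):.3f}, "
      f"precision {np.mean(prec):.3f}")
```

Averaging over many random splits, rather than a single hold-out, gives both a mean and a spread for each metric, which is what makes the generalizability claim above assessable.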
Collapse
|
23
|
Inkinen SI, Mäkelä T, Kaasalainen T, Peltonen J, Kangasniemi M, Kortesniemi M. Automatic head computed tomography image noise quantification with deep learning. Phys Med 2022; 99:102-112. [PMID: 35671678 DOI: 10.1016/j.ejmp.2022.05.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/20/2022] [Revised: 04/02/2022] [Accepted: 05/25/2022] [Indexed: 10/18/2022] Open
Abstract
PURPOSE Computed tomography (CT) image noise is usually determined by the standard deviation (SD) of pixel values in uniform image regions. This study investigates how deep learning (DL) can be applied to head CT image noise estimation. METHODS Two approaches were investigated for estimating the noise image of a single acquisition: direct noise image estimation using a supervised DnCNN convolutional neural network (CNN) architecture, and subtraction of a denoised image estimated with a denoising UNet-CNN, trained with both supervised and unsupervised noise2noise approaches. Noise was assessed with local SD maps using 3D- and 2D-CNN architectures. An anthropomorphic phantom CT image dataset (N = 9 scans, 3 repetitions) was used for DL-model comparisons. The mean square error (MSE) and mean absolute percentage error (MAPE) of SD values were determined using the SD values of subtraction images as ground truth. An open-source clinical low-dose head CT dataset (Ntrain = 37, Ntest = 10 subjects) was used to demonstrate DL applicability to noise estimation from manually labeled uniform regions and to automated noise and contrast assessment. RESULTS Direct SD estimation using the 3D-CNN was the most accurate method on the phantom dataset (MAPE = 15.5%, MSE = 6.3 HU). The unsupervised noise2noise approach provided only slightly inferior results (MAPE = 20.2%, MSE = 13.7 HU). The 2D-CNN and unsupervised UNet models provided the smallest MSE on clinically labeled uniform regions. CONCLUSIONS DL-based clinical image assessment is feasible and provides acceptable accuracy compared to true image noise. The noise2noise approach may be feasible in clinical use where no ground-truth data are available. Noise estimation combined with tissue segmentation may enable more comprehensive image quality characterization.
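Two ingredients of the method above (the subtraction-image ground truth from repeated scans, and the local SD map used to score noise) do not require a trained network and can be sketched directly. The phantom geometry, noise level and window size below are illustrative assumptions, not values from the paper; the CNN estimators themselves are omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def local_sd_map(noise_img, size=7):
    """Local standard deviation via E[x^2] - E[x]^2 in a sliding window."""
    mean = uniform_filter(noise_img, size)
    mean_sq = uniform_filter(noise_img ** 2, size)
    return np.sqrt(np.clip(mean_sq - mean ** 2, 0.0, None))


rng = np.random.default_rng(1)
# Two repeated "scans" of the same phantom slice, differing only in noise.
phantom = np.zeros((128, 128))
phantom[32:96, 32:96] = 100.0  # uniform insert, in HU
scan_a = phantom + rng.normal(0.0, 10.0, phantom.shape)
scan_b = phantom + rng.normal(0.0, 10.0, phantom.shape)

# Subtraction image: anatomy cancels; dividing by sqrt(2) recovers the
# per-scan noise level, giving a ground-truth noise image.
noise = (scan_a - scan_b) / np.sqrt(2.0)
sd_map = local_sd_map(noise)
print(f"median local SD: {np.median(sd_map):.1f} HU (true sigma = 10.0 HU)")
```

In the paper, a CNN replaces the second scan by predicting either the noise image or a denoised image directly, so that the same local-SD scoring can be applied to a single acquisition.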
Collapse
Affiliation(s)
- Satu I Inkinen
- HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland.
| | - Teemu Mäkelä
- HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland; Department of Physics, University of Helsinki, P.O. Box 64, FI-00014 Helsinki, Finland
| | - Touko Kaasalainen
- HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland
| | - Juha Peltonen
- HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland
| | - Marko Kangasniemi
- HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland
| | - Mika Kortesniemi
- HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Haartmaninkatu 4, 00290 Helsinki, Finland
| |
Collapse
|
24
|
Validation of deep learning-based nonspecific estimates for amyloid burden quantification with longitudinal data. Phys Med 2022; 99:85-93. [PMID: 35665624 DOI: 10.1016/j.ejmp.2022.05.016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Revised: 05/26/2022] [Accepted: 05/27/2022] [Indexed: 11/20/2022] Open
Abstract
PURPOSE To validate our previously proposed method of quantifying amyloid-beta (Aβ) load using nonspecific (NS) estimates generated with convolutional neural networks (CNNs), using [18F]Florbetapir scans from longitudinal and multicenter ADNI data. METHODS 188 paired MR (T1-weighted and T2-weighted) and PET images were downloaded from the ADNI3 dataset, of which 49 subjects had scans at two time points. 40 Aβ-negative subjects with low specific uptake were selected for training. A multimodal ScaleNet (SN) and a monomodal HighRes3DNet (HRN, using either T1-weighted or T2-weighted MR images as input) were trained to map structural MR to NS-PET images. The optimized SN and HRN networks were used to estimate the NS component for all scans, which was then subtracted from SUVr images to determine specific amyloid load (SAβL) images. The association of SAβL with various cognitive and functional test scores was evaluated using Spearman analysis, along with the differences in SAβL against cognitive test scores for the 49 subjects with two time-point scans, and a sensitivity analysis. RESULTS SAβL derived from both SN and HRN showed a higher association with memory-related cognitive test scores than SUVr. For longitudinal scans, however, only SAβL estimated from the multimodal SN consistently outperformed SUVr across all memory-related cognitive test scores. CONCLUSIONS Our proposed method of quantifying Aβ load using the NS component estimated with a CNN correlated better than SUVr with cognitive decline for both cross-sectional and longitudinal data, and was able to estimate the NS uptake of [18F]Florbetapir. We suggest employing multimodal networks with both T1-weighted and T2-weighted MR images for better NS estimation.
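The core arithmetic of the validation above (subtract an NS estimate from SUVr to obtain SAβL, then compare Spearman correlations with cognitive scores) can be sketched with synthetic numbers. Everything below is simulated: the CNN is replaced by a noisy stand-in NS estimate, and the signal magnitudes are illustrative assumptions, chosen only to show why removing the NS component can tighten the correlation.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n = 188  # number of scans, as in the ADNI3 sample above

true_specific = rng.gamma(2.0, 0.1, n)          # hypothetical specific Abeta signal
nonspecific = rng.normal(1.0, 0.05, n)          # hypothetical per-subject NS uptake
suvr = true_specific + nonspecific              # measured SUVr mixes both components
ns_hat = nonspecific + rng.normal(0, 0.01, n)   # stand-in for the CNN's NS estimate

sabl = suvr - ns_hat                            # specific amyloid load (SAbL)
cognition = -true_specific + rng.normal(0, 0.05, n)  # score declines with burden

rho_suvr, _ = spearmanr(suvr, cognition)
rho_sabl, _ = spearmanr(sabl, cognition)
print(f"Spearman rho vs cognition: SUVr {rho_suvr:.2f}, SAbL {rho_sabl:.2f}")
```

Because the NS component adds variance unrelated to cognition, subtracting even an imperfect estimate of it strengthens the (negative) rank correlation, which is the effect the study reports for SAβL versus SUVr.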
Collapse
|
25
|
Outcome Prediction for SARS-CoV-2 Patients Using Machine Learning Modeling of Clinical, Radiological, and Radiomic Features Derived from Chest CT Images. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12094493] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
(1) Background: Chest Computed Tomography (CT) has been proposed as a non-invasive method for confirming the diagnosis of SARS-CoV-2 patients using radiomic features (RFs) and baseline clinical data. The performance of Machine Learning (ML) methods using RFs derived from semi-automatically segmented lungs in chest CT images was investigated with regard to predicting the mortality of SARS-CoV-2 patients. (2) Methods: A total of 179 RFs extracted from 436 chest CT images of SARS-CoV-2 patients, together with 8 clinical and 6 radiological variables, were used to train and evaluate three ML methods (Least Absolute Shrinkage and Selection Operator [LASSO] regularized regression, a Random Forest Classifier [RFC], and a Fully connected Neural Network [FcNN]) on their ability to predict mortality, measured by the Area Under the Curve (AUC) of Receiver Operating Characteristic (ROC) curves. These three groups of variables were used separately and together as input for constructing and comparing the final performance of the ML models. (3) Results: All the ML models using only RFs achieved informative predictive ability, outperforming radiological assessment, without however reaching the performance obtained with ML based on clinical variables. LASSO regularized regression and the FcNN performed equally well, both being superior to the RFC. (4) Conclusions: Radiomic features based on semi-automatically segmented CT images combined with ML approaches can aid in identifying patients at high risk of mortality, providing a fast, objective, and generalizable method for improving prognostic assessment with a second expert opinion that outperforms human evaluation.
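The model comparison described above (LASSO-regularized regression versus a Random Forest, scored by ROC AUC on a held-out set) can be sketched as follows. The feature and patient counts are taken from the abstract, but the data are synthetic and the hyperparameters (regularization strength, tree count, number of informative features) are illustrative assumptions, not the study's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_patients, n_rf = 436, 179             # counts from the abstract
X = rng.normal(size=(n_patients, n_rf))
w = np.zeros(n_rf)
w[:10] = 1.0                            # assume only a few features carry signal
p = 1.0 / (1.0 + np.exp(-(X @ w) * 0.5))
y = rng.binomial(1, p)                  # synthetic mortality outcome

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=3, stratify=y)

lasso = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l1", solver="liblinear", C=0.1))
rfc = RandomForestClassifier(n_estimators=300, random_state=3)

results = {}
for name, model in [("LASSO", lasso), ("RFC", rfc)]:
    model.fit(X_tr, y_tr)
    results[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {results[name]:.2f}")
```

With many correlated or uninformative radiomic features and a modest sample, the L1 penalty's feature selection is often the deciding factor, which is consistent with LASSO matching or beating the tree ensemble in the study.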
Collapse
|
26
|
Recent Applications of Artificial Intelligence in Radiotherapy: Where We Are and Beyond. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12073223] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
In recent decades, artificial intelligence (AI) tools have been applied in many medical fields, opening the possibility of finding novel solutions for managing very complex and multifactorial problems, such as those commonly encountered in radiotherapy (RT). We conducted a PubMed and Scopus search to identify AI application fields in RT, limited to the last four years. In total, 1824 original papers were identified, and 921 were analyzed by considering the phase of the RT workflow to which the AI approaches were applied. AI permits the processing of the large quantities of information, data, and images stored in RT oncology information systems, a task that is not manageable for individuals or groups. AI allows the iterative application of complex tasks to large datasets (e.g., delineating normal tissues or finding optimal planning solutions) and might support the entire community working in the various sectors of RT, as summarized in this overview. AI-based tools are now on the roadmap for RT and have been applied to the entire workflow, mainly for segmentation, the generation of synthetic images, and outcome prediction. Several concerns were raised, including the need for harmonization and for overcoming ethical, legal, and skill barriers.
Collapse
|
27
|
Wu Y, Zhang L, Guo S, Zhang L, Gao F, Jia M, Zhou Z. Enhanced phase retrieval via deep concatenation networks for in-line X-ray phase contrast imaging. Phys Med 2022; 95:41-49. [DOI: 10.1016/j.ejmp.2021.12.017] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Revised: 11/19/2021] [Accepted: 12/28/2021] [Indexed: 12/18/2022] Open
|
28
|
Retico A, Avanzo M, Boccali T, Bonacorsi D, Botta F, Cuttone G, Martelli B, Salomoni D, Spiga D, Trianni A, Stasi M, Iori M, Talamonti C. Enhancing the impact of Artificial Intelligence in Medicine: A joint AIFM-INFN Italian initiative for a dedicated cloud-based computing infrastructure. Phys Med 2021; 91:140-150. [PMID: 34801873 DOI: 10.1016/j.ejmp.2021.10.005] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Revised: 10/04/2021] [Accepted: 10/05/2021] [Indexed: 12/23/2022] Open
Abstract
Artificial Intelligence (AI) techniques have been used in the field of Medical Imaging for more than forty years. Medical physicists, clinicians and computer scientists have collaborated since the beginning to develop software solutions that enhance the informative content of medical images, including AI-based support systems for image interpretation. Despite the recent massive progress in this field, driven by the current emphasis on Radiomics, Machine Learning and Deep Learning, some barriers must still be overcome before these tools are fully integrated into clinical workflows to finally enable a precision-medicine approach to patient care. Now that Medical Imaging has entered the Big Data era, innovative solutions to deal efficiently with huge amounts of data and to exploit large, distributed computing resources are urgently needed. In the framework of a collaboration agreement between the Italian Association of Medical Physicists (AIFM) and the National Institute for Nuclear Physics (INFN), we propose a model of an intensive computing infrastructure, especially suited for training AI models, equipped with secure storage systems compliant with data protection regulations, which will accelerate the development and extensive validation of AI-based solutions in Medical Imaging research. This solution can be developed and made operational by physicists and computer scientists working in complementary fields of research in Physics, such as High Energy Physics and Medical Physics, who have all the necessary skills to tailor AI technology to the needs of the Medical Imaging community and to shorten the pathway towards the clinical applicability of AI-based decision support systems.
Collapse
Affiliation(s)
- Alessandra Retico
- National Institute for Nuclear Physics (INFN), Pisa Division, 56127 Pisa, Italy
| | - Michele Avanzo
- Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy
| | - Tommaso Boccali
- National Institute for Nuclear Physics (INFN), Pisa Division, 56127 Pisa, Italy
| | - Daniele Bonacorsi
- University of Bologna, 40126 Bologna, Italy; INFN, Bologna Division, 40126 Bologna, Italy
| | - Francesca Botta
- Medical Physics Unit, Istituto Europeo di oncologia IRCCS, 20141 Milan, Italy
| | - Giacomo Cuttone
- INFN, Southern National Laboratory (LNS), 95123 Catania, Italy
| | | | | | | | - Annalisa Trianni
- Medical Physics Unit, Ospedale Santa Chiara APSS, 38122 Trento, Italy
| | - Michele Stasi
- Medical Physics Unit, A.O. Ordine Mauriziano di Torino, 10128 Torino, Italy
| | - Mauro Iori
- Medical Physics Unit, Azienda USL-IRCCS di Reggio Emilia, 42122 Reggio Emilia, Italy.
| | - Cinzia Talamonti
- Department Biomedical Experimental and Clinical Science "Mario Serio", University of Florence, 50134 Florence, Italy; INFN, Florence Division, 50134 Florence, Italy
| |
Collapse
|
29
|
Ubaldi L, Valenti V, Borgese RF, Collura G, Fantacci ME, Ferrera G, Iacoviello G, Abbate BF, Laruina F, Tripoli A, Retico A, Marrale M. Strategies to develop radiomics and machine learning models for lung cancer stage and histology prediction using small data samples. Phys Med 2021; 90:13-22. [PMID: 34521016 DOI: 10.1016/j.ejmp.2021.08.015] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/03/2021] [Revised: 08/21/2021] [Accepted: 08/28/2021] [Indexed: 02/09/2023] Open
Abstract
Predictive models based on radiomics and machine learning (ML) need large, annotated datasets for training, which are often difficult to collect. We designed an operative pipeline for model training that exploits data already available to the scientific community. The aim of this work was to explore the capability of radiomic features to predict tumor histology and stage in patients with non-small cell lung cancer (NSCLC). We analyzed the radiotherapy planning thoracic CT scans of a proprietary sample of 47 subjects (L-RT) and integrated this dataset with a publicly available set of 130 patients from the MAASTRO NSCLC collection (Lung1). We implemented intra- and inter-sample cross-validation (CV) strategies for evaluating the predictive performance of ML models on datasets of limited size. We carried out two classification tasks: histology classification (three classes) and overall stage classification (two classes: stage I and stage II). In the first task, the best performance was obtained by a Random Forest classifier once the analysis was restricted to stage I and II tumors of the merged Lung1 and L-RT dataset (AUC = 0.72 ± 0.11). For overall stage classification, the best results were obtained when training on Lung1 and testing on the L-RT dataset (AUC = 0.72 ± 0.04 for Random Forest and AUC = 0.84 ± 0.03 for a linear-kernel Support Vector Machine). According to the classification task to be accomplished and the heterogeneity of the available dataset(s), different CV strategies have to be explored and compared to make a robust assessment of the potential of a predictive model based on radiomics and ML.
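The inter-sample strategy above (train entirely on the public Lung1 cohort, test entirely on the local L-RT cohort) can be sketched as follows. The cohort sizes match the abstract, but the features, the inter-centre shift, and the models' hyperparameters are synthetic assumptions made so the cross-cohort evaluation is runnable on its own.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)


def make_cohort(n, shift):
    """Synthetic radiomic features; `shift` mimics inter-centre differences."""
    X = rng.normal(loc=shift, size=(n, 20))
    logit = X[:, :5].sum(axis=1) - 5 * shift  # same underlying signal in both cohorts
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
    return X, y


X_lung1, y_lung1 = make_cohort(130, shift=0.0)  # public cohort: training only
X_lrt, y_lrt = make_cohort(47, shift=0.2)       # proprietary cohort: testing only

aucs = {}
for name, model in [
    ("Random Forest", RandomForestClassifier(n_estimators=300, random_state=4)),
    ("linear SVM", SVC(kernel="linear", probability=True, random_state=4)),
]:
    model.fit(X_lung1, y_lung1)
    aucs[name] = roc_auc_score(y_lrt, model.predict_proba(X_lrt)[:, 1])
    print(f"{name}: inter-sample AUC = {aucs[name]:.2f}")
```

Keeping the two cohorts strictly separate is what makes this an inter-sample test: any systematic difference between centres (here, the feature `shift`) degrades the score honestly instead of leaking into training, which is the robustness argument the study makes for comparing CV strategies.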
Collapse
Affiliation(s)
- L Ubaldi
- Physics Department, University of Pisa, Pisa, Italy; National Institute for Nuclear Physics (INFN), Pisa Division, Pisa, Italy
| | - V Valenti
- REM Radiation Therapy Center, Viagrande (CT), I-95029 Catania, Italy
| | - R F Borgese
- Physics and Chemistry Department "Emilio Segrè", University of Palermo, Palermo, Italy; National Institute for Nuclear Physics (INFN), Catania Division, Catania, Italy
| | - G Collura
- Physics and Chemistry Department "Emilio Segrè", University of Palermo, Palermo, Italy; National Institute for Nuclear Physics (INFN), Catania Division, Catania, Italy
| | - M E Fantacci
- Physics Department, University of Pisa, Pisa, Italy; National Institute for Nuclear Physics (INFN), Pisa Division, Pisa, Italy
| | - G Ferrera
- Radiation Oncology, ARNAS-Civico Hospital, Palermo, Italy
| | - G Iacoviello
- Medical Physics Department, ARNAS-Civico Hospital, Palermo, Italy
| | - B F Abbate
- Medical Physics Department, ARNAS-Civico Hospital, Palermo, Italy
| | - F Laruina
- Physics Department, University of Pisa, Pisa, Italy; National Institute for Nuclear Physics (INFN), Pisa Division, Pisa, Italy
| | - A Tripoli
- REM Radiation Therapy Center, Viagrande (CT), I-95029 Catania, Italy
| | - A Retico
- National Institute for Nuclear Physics (INFN), Pisa Division, Pisa, Italy
| | - M Marrale
- Physics and Chemistry Department "Emilio Segrè", University of Palermo, Palermo, Italy; National Institute for Nuclear Physics (INFN), Catania Division, Catania, Italy
| |
Collapse
|
30
|
Zanca F, Avanzo M, Colgan N, Crijns W, Guidi G, Hernandez-Giron I, Kagadis GC, Diaz O, Zaidi H, Russo P, Toma-Dasu I, Kortesniemi M. Focus issue: Artificial intelligence in medical physics. Phys Med 2021; 83:287-291. [PMID: 34004585 DOI: 10.1016/j.ejmp.2021.05.008] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/27/2022] Open
Affiliation(s)
- F Zanca
- Palindromo Consulting, Leuven, Belgium
| | - M Avanzo
- Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, Department of Medical Physics, 33081 Aviano, PN, Italy
| | - N Colgan
- School of Physics, National University of Ireland Galway, Galway, Ireland
| | - W Crijns
- Department Oncology, Laboratory of Experimental Radiotherapy, KU Leuven and Department of Radiation Oncology, UZ Leuven, Belgium
| | - G Guidi
- Medical Physics, Az. Ospedaliero-Universitaria di Modena, Modena, Italy
| | - I Hernandez-Giron
- Leiden University Medical Center (LUMC), Radiology Department, Division of Image Processing, Albinusdreef 2, 2333ZA Leiden, The Netherlands
| | - G C Kagadis
- 3DMI Research Group, Department of Medical Physics, School of Medicine, University of Patras, GR 265 04, Greece
| | - O Diaz
- Faculty of Mathematics and Computer Science, University of Barcelona, Barcelona, Spain
| | - H Zaidi
- Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, CH-1211 Geneva, Switzerland
| | - P Russo
- Università di Napoli Federico II, Dipartimento di Fisica "Ettore Pancini", I-80126 Naples, Italy
| | - I Toma-Dasu
- Department of Physics, Medical Radiation Physics, Stockholm University, Stockholm, Sweden; Department of Oncology and Pathology, Medical Radiation Physics, Karolinska Institutet, Stockholm, Sweden
| | - M Kortesniemi
- HUS Medical Imaging Center, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
| |
Collapse
|
31
|
Maffei N, Manco L, Aluisio G, D'Angelo E, Ferrazza P, Vanoni V, Meduri B, Lohr F, Guidi G. Radiomics classifier to quantify automatic segmentation quality of cardiac sub-structures for radiotherapy treatment planning. Phys Med 2021; 83:278-286. [DOI: 10.1016/j.ejmp.2021.05.009] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/15/2020] [Revised: 04/30/2021] [Accepted: 05/03/2021] [Indexed: 12/24/2022] Open
|