1
Baker CR, Pease M, Sexton DP, Abumoussa A, Chambless LB. Artificial intelligence innovations in neurosurgical oncology: a narrative review. J Neurooncol 2024. PMID: 38958849. DOI: 10.1007/s11060-024-04757-5. Received 01/12/2024; accepted 06/24/2024.
Abstract
PURPOSE Artificial intelligence (AI) has become increasingly integrated into clinical practice within neurosurgical oncology. This report reviews the cutting-edge technologies impacting tumor treatment and outcomes. METHODS A rigorous literature search was performed with the aid of a research librarian to identify key articles referencing AI and related topics (machine learning (ML), computer vision (CV), augmented reality (AR), virtual reality (VR), etc.) for the neurosurgical care of brain or spinal tumors. RESULTS Treatment of central nervous system (CNS) tumors is being improved through advances across AI, including ML, CV, and AR/VR. AI-aided diagnostic and prognostication tools can influence the pre-operative patient experience, while automated tumor segmentation and predictions of total resection aid surgical planning. Novel intra-operative tools can rapidly provide histopathologic tumor classification to streamline treatment strategies. Post-operative video analysis, paired with rich surgical simulations, can enhance training feedback and regimens. CONCLUSION While limited generalizability, bias, and patient data security are current concerns, the advent of federated learning, along with growing data consortiums, provides an avenue for increasingly safe, powerful, and effective AI platforms in the future.
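The federated learning mentioned in the conclusion keeps patient data on-site and shares only model weights. A minimal FedAvg-style sketch of one aggregation round is below; the function name and the list-of-arrays weight representation are illustrative assumptions, not any cited system's implementation:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """One FedAvg round: average client model weights, weighted by each
    site's local sample count, so raw patient data never leaves a site.

    client_weights: one list of layer arrays per participating site.
    client_sizes:   number of local training samples at each site.
    """
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]
```

In practice each site would train locally between rounds; only the averaged weights circulate.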
Affiliation(s)
- Clayton R Baker
- Vanderbilt University School of Medicine, Nashville, TN, USA
- Matthew Pease
- Department of Neurosurgery, Indiana University, Indianapolis, IN, USA
- Daniel P Sexton
- Department of Neurosurgery, Duke University, Durham, NC, USA
- Andrew Abumoussa
- Department of Neurosurgery, University of North Carolina at Chapel Hill Hospitals, Chapel Hill, NC, USA
- Lola B Chambless
- Department of Neurosurgery, Vanderbilt University Medical Center, Nashville, TN, USA
2
Moawad AW, Janas A, Baid U, Ramakrishnan D, Saluja R, Ashraf N, Jekel L, Amiruddin R, Adewole M, Albrecht J, Anazodo U, Aneja S, Anwar SM, Bergquist T, Calabrese E, Chiang V, Chung V, Conte GMM, Dako F, Eddy J, Ezhov I, Familiar A, Farahani K, Iglesias JE, Jiang Z, Johanson E, Kazerooni AF, Kofler F, Krantchev K, LaBella D, Van Leemput K, Li HB, Linguraru MG, Link KE, Liu X, Maleki N, Meier Z, Menze BH, Moy H, Osenberg K, Piraud M, Reitman Z, Shinohara RT, Tahon NH, Nada A, Velichko YS, Wang C, Wiestler B, Wiggins W, Shafique U, Willms K, Avesta A, Bousabarah K, Chakrabarty S, Gennaro N, Holler W, Kaur M, LaMontagne P, Lin M, Lost J, Marcus DS, Maresca R, Merkaj S, Nada A, Pedersen GC, von Reppert M, Sotiras A, Teytelboym O, Tillmans N, Westerhoff M, Youssef A, Godfrey D, Floyd S, Rauschecker A, Villanueva-Meyer J, Pflüger I, Cho J, Bendszus M, Brugnara G, Cramer J, Perez-Carillo GJG, Johnson DR, Kam A, Kwan BYM, Lai L, Lall NU, Memon F, Patro SN, Petrovic B, So TY, Thompson G, Wu L, Schrickel EB, Bansal A, Barkhof F, Besada C, Chu S, Druzgal J, Dusoi A, Farage L, Feltrin F, Fong A, Fung SH, Gray RI, Ikuta I, Iv M, Postma AA, Mahajan A, Joyner D, Krumpelman C, Letourneau-Guillon L, Lincoln CM, Maros ME, Miller E, Morón F, Nimchinsky EA, Ozsarlak O, Patel U, Rohatgi S, Saha A, Sayah A, Schwartz ED, Shih R, Shiroishi MS, Small JE, Tanwar M, Valerie J, Weinberg BD, White ML, Young R, Zohrabian VM, Azizova A, Brüßeler MMT, Fehringer P, Ghonim M, Ghonim M, Gkampenis A, Okar A, Pasquini L, Sharifi Y, Singh G, Sollmann N, Soumala T, Taherzadeh M, Yordanov N, Vollmuth P, Foltyn-Dumitru M, Malhotra A, Abayazeed AH, Dellepiane F, Lohmann P, Pérez-García VM, Elhalawani H, Al-Rubaiey S, Armindo RD, Ashraf K, Asla MM, Badawy M, Bisschop J, Lomer NB, Bukatz J, Chen J, Cimflova P, Corr F, Crawley A, Deptula L, Elakhdar T, Shawali IH, Faghani S, Frick A, Gulati V, Haider MA, Hierro F, Dahl RH, Jacobs SM, Hsieh KCJ, Kandemirli SG, Kersting K, Kida L, Kollia S, Koukoulithras I, Li 
X, Abouelatta A, Mansour A, Maria-Zamfirescu RC, Marsiglia M, Mateo-Camacho YS, McArthur M, McDonnell O, McHugh M, Moassefi M, Morsi SM, Muntenu A, Nandolia KK, Naqvi SR, Nikanpour Y, Alnoury M, Nouh AMA, Pappafava F, Patel MD, Petrucci S, Rawie E, Raymond S, Roohani B, Sabouhi S, Sanchez-Garcia LM, Shaked Z, Suthar PP, Altes T, Isufi E, Dhermesh Y, Gass J, Thacker J, Tarabishy AR, Turner B, Vacca S, Vilanilam GK, Warren D, Weiss D, Willms K, Worede F, Yousry S, Lerebo W, Aristizabal A, Karargyris A, Kassem H, Pati S, Sheller M, Bakas S, Rudie JD, Aboian M. The Brain Tumor Segmentation - Metastases (BraTS-METS) Challenge 2023: Brain Metastasis Segmentation on Pre-treatment MRI. arXiv 2024; arXiv:2306.00838v2. PMID: 37396600. PMCID: PMC10312806.
Abstract
The translation of AI-generated brain metastases (BM) segmentation into clinical practice relies heavily on diverse, high-quality annotated medical imaging datasets. The BraTS-METS 2023 challenge has gained momentum for testing and benchmarking algorithms using rigorously annotated, internationally compiled real-world datasets. This study presents the results of the segmentation challenge and characterizes the challenging cases that impacted the performance of the winning algorithms. Untreated brain metastases on standard anatomic MRI sequences (T1, T2, FLAIR, T1 post-gadolinium) from eight contributed international datasets were annotated in a stepwise method: pre-segmentation with published U-Net algorithms, followed by review by a student, a neuroradiologist, and a final approving neuroradiologist. Segmentations were ranked based on lesion-wise Dice and 95th-percentile Hausdorff distance (HD95) scores. False positives (FP) and false negatives (FN) were rigorously penalized, receiving a score of 0 for Dice and a fixed penalty of 374 for HD95. The mean scores for the teams were calculated. Eight datasets comprising 1303 studies were annotated, with 402 studies (3076 lesions) released on Synapse as publicly available datasets to challenge competitors. Additionally, 31 studies (139 lesions) were held out for validation, and 59 studies (218 lesions) were used for testing. Segmentation accuracy was measured as rank across subjects, with the winning team achieving a lesion-wise mean score of 7.9. The Dice score for the winning team was 0.65 ± 0.25. Common errors among the leading teams included false negatives for small lesions and misregistration of masks in space. The Dice scores and lesion detection rates of all algorithms diminished with decreasing tumor size, particularly for tumors smaller than 100 mm³. In conclusion, algorithms for BM segmentation require further refinement to balance high sensitivity in lesion detection with the minimization of false positives and negatives.
The BraTS-METS 2023 challenge successfully curated well-annotated, diverse datasets and identified common errors, facilitating the translation of BM segmentation across varied clinical environments and providing personalized volumetric reports to patients undergoing BM treatment.
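The lesion-wise scoring scheme described in the abstract (Dice of 0 for a missed lesion, a fixed HD95 penalty of 374) can be sketched as follows. This is a simplified illustration, not the challenge's reference implementation: ground-truth lesions are assumed to arrive as a list of binary masks (e.g. connected components of the annotation), and the actual HD95 surface-distance computation is omitted.

```python
import numpy as np

FN_DICE = 0.0         # Dice score assigned to a missed lesion
HD95_PENALTY = 374.0  # fixed HD95 penalty for a false negative (or positive)

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Volumetric Dice coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

def lesion_wise_scores(gt_lesions, pred_mask):
    """Score each ground-truth lesion against the overlapping prediction.

    gt_lesions: list of binary masks, one per ground-truth lesion.
    pred_mask:  combined binary prediction mask.
    Returns per-lesion (dice, hd95) pairs; a lesion with no overlapping
    prediction receives Dice 0 and the fixed HD95 penalty.  For detected
    lesions the HD95 entry is left as None here, since computing it
    requires a surface-distance routine not shown in this sketch.
    """
    scores = []
    for lesion in gt_lesions:
        overlap = np.logical_and(lesion, pred_mask)
        if overlap.sum() == 0:                     # false negative
            scores.append((FN_DICE, HD95_PENALTY))
        else:                                      # Dice restricted to this lesion
            scores.append((dice(lesion, overlap), None))
    return scores
```

Unmatched predicted lesions (false positives) would be penalized symmetrically in a full implementation.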
Affiliation(s)
- Anastasia Janas
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Ujjwal Baid
- Division of Computational Pathology, Department of Pathology and Laboratory Medicine, School of Medicine, Indiana University, Indianapolis, IN, USA
- Divya Ramakrishnan
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Rachit Saluja
- Department of Electrical and Computer Engineering, Cornell University and Cornell Tech, New York, NY, USA
- Department of Radiology, Weill Cornell Medicine, New York, NY, USA
- Nader Ashraf
- ImagineQuant, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- College of Medicine, Alfaisal University, Riyadh, Saudi Arabia
- Leon Jekel
- DKFZ Division of Translational Neurooncology at the WTZ, German Cancer Consortium, DKTK Partner Site, University Hospital Essen, Essen, Germany
- Raisa Amiruddin
- ImagineQuant, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Maruf Adewole
- Medical Artificial Intelligence Lab, Crestview Radiology, Lagos, Nigeria
- Udunna Anazodo
- Montreal Neurological Institute, McGill University, Montreal, Canada
- Medical Artificial Intelligence (MAI) Lab, Crestview Radiology, Lagos, Nigeria
- Sanjay Aneja
- Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT, USA
- Syed Muhammad Anwar
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Hospital, Washington, D.C., USA
- Evan Calabrese
- Department of Radiology, Duke University Medical Center, Durham, NC, USA
- Veronica Chiang
- Department of Neurosurgery, Yale School of Medicine, New Haven, CT, USA
- Farouk Dako
- Center for Global Health, Perelman School of Medicine, University of Pennsylvania, PA, USA
- Ivan Ezhov
- Department of Informatics, Technical University Munich, Germany
- Ariana Familiar
- Children’s Hospital of Philadelphia, University of Pennsylvania, Philadelphia, PA, USA
- Keyvan Farahani
- Cancer Imaging Program, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Juan Eugenio Iglesias
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA
- Zhifan Jiang
- Children’s National Hospital, Washington, D.C., USA
- Elaine Johanson
- PrecisionFDA, U.S. Food and Drug Administration, Silver Spring, MD, USA
- Anahita Fathi Kazerooni
- Department of Neurosurgery, University of Pennsylvania, Philadelphia, PA, USA
- Division of Neurosurgery, The Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Center for Data-Driven Discovery in Biomedicine, The Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Kiril Krantchev
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Dominic LaBella
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Koen Van Leemput
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, Denmark
- Hongwei Bran Li
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA
- Marius George Linguraru
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Hospital, Washington, D.C., USA
- Departments of Radiology and Pediatrics, George Washington University School of Medicine and Health Sciences, Washington, D.C., USA
- Xinyang Liu
- Children’s National Hospital, Washington, D.C., USA
- Nazanin Maleki
- ImagineQuant, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Bjoern H Menze
- Biomedical Image Analysis & Machine Learning, Department of Quantitative Biomedicine, University of Zurich, Switzerland
- Harrison Moy
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Klara Osenberg
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Russel Takeshi Shinohara
- Center for Clinical Epidemiology and Biostatistics, University of Pennsylvania, Philadelphia, PA, USA
- Yuri S. Velichko
- Northwestern University, Department of Radiology, Feinberg School of Medicine, Chicago, IL, USA
- Chunhao Wang
- Duke University School of Medicine, Durham, NC, USA
- Benedikt Wiestler
- Department of Neuroradiology, Technical University of Munich, Munich, Germany
- Umber Shafique
- Department of Radiology and Imaging Sciences, Indiana University, Indianapolis, IN, USA
- Klara Willms
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Arman Avesta
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Satrajit Chakrabarty
- Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, MO, USA
- GE HealthCare, San Ramon, CA, USA
- Nicolo Gennaro
- Northwestern University, Department of Radiology, Feinberg School of Medicine, Chicago, IL, USA
- Manpreet Kaur
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Pamela LaMontagne
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Jan Lost
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Daniel S. Marcus
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Ryan Maresca
- Department of Therapeutic Radiology, Yale School of Medicine, New Haven, CT, USA
- Sarah Merkaj
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Marc von Reppert
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Aristeidis Sotiras
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Institute for Informatics, Data Science & Biostatistics, Washington University School of Medicine, St. Louis, MO, USA
- Niklas Tillmans
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Scott Floyd
- Duke University Medical Center, Durham, NC, USA
- Andreas Rauschecker
- Department of Radiology and Biomedical Imaging, University of California San Francisco, CA, USA
- Javier Villanueva-Meyer
- Department of Radiology and Biomedical Imaging, University of California San Francisco, CA, USA
- Irada Pflüger
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Jaeyoung Cho
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Martin Bendszus
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Gianluca Brugnara
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Justin Cramer
- Department of Radiology, Mayo Clinic, Phoenix, AZ, USA
- Anthony Kam
- Loyola University Medical Center, Hines, IL, USA
- Lillian Lai
- Department of Radiology, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Fatima Memon
- Carolina Radiology Associates, Myrtle Beach, SC, USA
- McLeod Regional Medical Center, Florence, SC, USA
- Medical University of South Carolina, Charleston, SC, USA
- Tiffany Y. So
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong SAR
- Gerard Thompson
- Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Department of Clinical Neurosciences, NHS Lothian, Edinburgh, United Kingdom
- Lei Wu
- Department of Radiology, University of Washington, Seattle, WA, USA
- E. Brooke Schrickel
- Department of Radiology, Ohio State University College of Medicine, Columbus, OH, USA
- Anu Bansal
- Albert Einstein Medical Center, Hartford, CT, USA
- Frederik Barkhof
- Amsterdam UMC, location Vrije Universiteit, the Netherlands
- University College London, United Kingdom
- Sammy Chu
- Department of Radiology, University of Washington, Seattle, WA, USA
- Jason Druzgal
- Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, Virginia, USA
- Luciano Farage
- Centro Universitario Euro-Americana (UNIEURO), Brasília, DF, Brazil
- Fabricio Feltrin
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Amy Fong
- Southern District Health Board, Dunedin, New Zealand
- Steve H. Fung
- Department of Radiology, Houston Methodist, Houston, TX, USA
- R. Ian Gray
- University of Tennessee Medical Center, Knoxville, TN, USA
- Ichiro Ikuta
- Mayo Clinic, Department of Radiology, Section of Neuroradiology, Phoenix, AZ, USA
- Michael Iv
- Department of Radiology, Stanford University, Stanford, CA, USA
- Alida A. Postma
- Department of Radiology and Nuclear Medicine, Maastricht University Medical Center, Maastricht, the Netherlands
- Mental Health and Neuroscience Research Institute, Maastricht University, Maastricht, the Netherlands
- Amit Mahajan
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- David Joyner
- Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, VA, USA
- Chase Krumpelman
- Department of Radiology, Northwestern University, Chicago, IL, USA
- Mate E. Maros
- Departments of Neuroradiology & Biomedical Informatics, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Elka Miller
- Department of Diagnostic and Interventional Radiology, SickKids Hospital, University of Toronto, Canada
- Fanny Morón
- Department of Radiology, Baylor College of Medicine, Houston, TX, USA
- Ozkan Ozsarlak
- Department of Radiology, AZ Monica, Antwerp Area, Belgium
- Uresh Patel
- Medicolegal Imaging Experts LLC, Mercer Island, WA, USA
- Saurabh Rohatgi
- Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Atin Saha
- Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Weill Cornell Medical College, New York, NY, USA
- Anousheh Sayah
- MedStar Georgetown University Hospital, Washington, D.C., USA
- Eric D. Schwartz
- Department of Radiology, St. Elizabeth’s Medical Center, Boston, MA, USA
- Department of Radiology, Tufts University School of Medicine, Boston, MA, USA
- Robert Shih
- Walter Reed National Military Medical Center, Bethesda, MD, USA
- Jewels Valerie
- Department of Radiology, University of North Carolina School of Medicine, Chapel Hill, NC, USA
- Brent D. Weinberg
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
- Robert Young
- George Washington University, Washington, D.C., USA
- Vahe M. Zohrabian
- Northwell Health, Zucker Hofstra School of Medicine at Northwell, North Shore University Hospital, Hempstead, New York, NY, USA
- Aynur Azizova
- Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
- Pascal Fehringer
- Faculty of Medicine, Jena University Hospital, Friedrich Schiller University Jena, Jena, Germany
- Mohanad Ghonim
- Department of Radiology, Ain Shams University, Cairo, Egypt
- Mohamed Ghonim
- Department of Radiology, Ain Shams University, Cairo, Egypt
- Luca Pasquini
- Radiology Department, Memorial Sloan Kettering Cancer Center, New York City, NY, USA
- Gagandeep Singh
- Columbia University Irving Medical Center, New York, NY, USA
- Nico Sollmann
- Department of Diagnostic and Interventional Radiology, University Hospital Ulm, Ulm, Germany
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- TUM-Neuroimaging Center, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Nikolay Yordanov
- Faculty of Medicine, Medical University - Sofia, Sofia, Bulgaria
- Philipp Vollmuth
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Department of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Ajay Malhotra
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Francesco Dellepiane
- Functional and Interventional Neuroradiology Unit, Bambino Gesù Children’s Hospital, Rome, Italy
- Philipp Lohmann
- Institute of Neuroscience and Medicine (INM-4), Research Center Juelich, Juelich, Germany
- Department of Nuclear Medicine, University Hospital RWTH Aachen, Aachen, Germany
- Víctor M. Pérez-García
- Mathematical Oncology Laboratory & Department of Mathematics, University of Castilla-La Mancha, Spain
- Hesham Elhalawani
- Department of Radiation Oncology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
- Sanaria Al-Rubaiey
- Charité-Universitätsmedizin Berlin (Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Berlin, Germany
- Rui Duarte Armindo
- Department of Neuroradiology, Western Lisbon Hospital Centre (CHLO), Portugal
- Mohamed Badawy
- Diagnostic Radiology Department, Wayne State University, Detroit, MI
- Jeroen Bisschop
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Jan Bukatz
- Charité-Universitätsmedizin Berlin (Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Berlin, Germany
- Jim Chen
- Department of Radiology/Division of Neuroradiology, San Diego Veterans Administration Medical Center/UC San Diego Health System, San Diego, CA, USA
- Petra Cimflova
- Department of Radiology, University of Calgary, Calgary, Canada
- Felix Corr
- EDU Institute of Higher Education, Villa Bighi, Chaplain’s House, Kalkara, Malta
- Lisa Deptula
- Ross University School of Medicine, Bridgetown, Barbados
- Alexandra Frick
- Department of Neurosurgery, Vivantes Klinikum Neukölln, Berlin, Germany
- Fátima Hierro
- Neuroradiology Department, Pedro Hispano Hospital, Matosinhos, Portugal
- Rasmus Holmboe Dahl
- Department of Radiology, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark
- Sarah Maria Jacobs
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Sedat G. Kandemirli
- Department of Radiology, University of Iowa Hospital and Clinics, Iowa City, IA, USA
- Katharina Kersting
- Charité-Universitätsmedizin Berlin (Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Berlin, Germany
- Laura Kida
- Charité-Universitätsmedizin Berlin (Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Berlin, Germany
- Sofia Kollia
- National and Kapodistrian University of Athens, School of Medicine, Athens, Greece
- Xiao Li
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
- Ahmed Abouelatta
- Department of Diagnostic and Interventional Radiology, Cairo University, Cairo, Egypt
- Ruxandra-Catrinel Maria-Zamfirescu
- Charité-Universitätsmedizin Berlin (Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Berlin, Germany
- Marcela Marsiglia
- Department of Radiology, Brigham and Women’s Hospital, Massachusetts General Hospital, Boston, MA, USA
- Mark McArthur
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, USA
- Maire McHugh
- Department of Radiology, Manchester NHS Foundation Trust, North West School of Radiology, Manchester, United Kingdom
- Mana Moassefi
- Artificial Intelligence Lab, Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Khanak K. Nandolia
- Department of Radiodiagnosis, All India Institute of Medical Sciences Rishikesh, India
- Syed Raza Naqvi
- Windsor Regional Hospital, Western University, Ontario, Canada
- Yalda Nikanpour
- Artificial Intelligence & Informatics, Mayo Clinic, Rochester, MN, USA
- Mostafa Alnoury
- Department of Radiology, University of Pennsylvania, PA, USA
- Francesca Pappafava
- Department of Medicine and Surgery, Università degli Studi di Perugia, Italy
- Markand D. Patel
- Department of Neuroradiology, Imperial College Healthcare NHS Trust, London, United Kingdom
- Samantha Petrucci
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA, USA
- Eric Rawie
- Department of Radiology, Michigan Medicine, Ann Arbor, MI, USA
- Scott Raymond
- Department of Radiology, University of Vermont Medical Center, Burlington, VT, USA
- Borna Roohani
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Sadeq Sabouhi
- Isfahan University of Medical Sciences, Isfahan, Iran
- Zoe Shaked
- Charité-Universitätsmedizin Berlin (Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health), Berlin, Germany
- Talissa Altes
- Radiology Department, University of Missouri, Columbia, MO, USA
- Abdul Rahman Tarabishy
- Department of NeuroRadiology, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV, USA
- Sebastiano Vacca
- University of Cagliari, School of Medicine and Surgery, Cagliari, Italy
- George K. Vilanilam
- Department of Radiology, University of Arkansas for Medical Sciences, Little Rock, AR, USA
- Daniel Warren
- Washington University School of Medicine in St. Louis, St. Louis, MO, USA
- David Weiss
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Klara Willms
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Fikadu Worede
- Department of Radiology, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Wondwossen Lerebo
- Department of Radiology, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Sarthak Pati
- Division of Computational Pathology, Department of Pathology and Laboratory Medicine, School of Medicine, Indiana University, Indianapolis, IN, USA
- Center For Federated Learning in Medicine, Indiana University, Indianapolis, IN, USA
- Medical Working Group, MLCommons, San Francisco, CA, USA
- Spyridon Bakas
- Division of Computational Pathology, Department of Pathology and Laboratory Medicine, School of Medicine, Indiana University, Indianapolis, IN, USA
- Department of Radiology and Imaging Sciences, School of Medicine, Indiana University, Indianapolis, IN, USA
- Department of Neurological Surgery, School of Medicine, Indiana University, Indianapolis, IN, USA
- Jeffrey D. Rudie
- Department of Radiology, University of California San Diego, CA, USA
- Department of Radiology, Scripps Clinic Medical Group, CA, USA
- Mariam Aboian
- Department of Radiology, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
3
Liang YW, Fang YT, Lin TC, Yang CR, Chang CC, Chang HK, Ko CC, Tu TH, Fay LY, Wu JC, Huang WC, Hu HW, Chen YY, Kuo CH. The Quantitative Evaluation of Automatic Segmentation in Lumbar Magnetic Resonance Images. Neurospine 2024;21:665-675. PMID: 38955536. PMCID: PMC11224749. DOI: 10.14245/ns.2448060.030. Received 01/06/2024; revised 02/23/2024; accepted 02/25/2024. Open access.
Abstract
OBJECTIVE This study aims to overcome challenges in lumbar spine imaging, particularly lumbar spinal stenosis, by developing an automated segmentation model using advanced techniques. Traditional manual measurement and lesion detection methods are limited by subjectivity and inefficiency. The objective is to create an accurate, automated segmentation model that identifies anatomical structures in lumbar spine magnetic resonance imaging scans. METHODS Leveraging a dataset of 539 lumbar spinal stenosis patients, the study utilizes a residual U-Net for semantic segmentation of sagittal and axial lumbar spine magnetic resonance images. The model, trained to recognize specific tissue categories, employs a geometry algorithm for anatomical structure quantification. Metrics such as Intersection over Union (IoU) and the Dice coefficient validate the residual U-Net's segmentation accuracy. A novel rotation matrix approach is introduced for detecting bulging discs, assessing dural sac compression, and measuring yellow ligament thickness. RESULTS The residual U-Net achieves high precision in segmenting lumbar spine structures, with mean IoU values ranging from 0.82 to 0.93 across tissue categories and views. The automated quantification system provides measurements for intervertebral disc dimensions, dural sac diameter, yellow ligament thickness, and disc hydration. Consistency between training and testing datasets assures the robustness of automated measurements. CONCLUSION Automated lumbar spine segmentation with a residual U-Net and deep learning exhibits high precision in identifying anatomical structures, facilitating efficient quantification in lumbar spinal stenosis cases. The introduction of a rotation matrix enhances lesion detection, promising improved diagnostic accuracy and supporting treatment decisions for lumbar spinal stenosis patients.
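The IoU and Dice validation metrics cited in the abstract have standard definitions for binary masks, sketched below (function names are illustrative, not from the paper):

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union of two binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient of two binary masks."""
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0
```

For binary masks the two are related by Dice = 2·IoU / (1 + IoU), so the reported mean IoU range of 0.82-0.93 corresponds to Dice of roughly 0.90-0.96.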
Affiliation(s)
- Yao-Wen Liang
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Yu-Ting Fang
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Department of Biomedical Engineering, National Taiwan University Hospital Hsin-Chu Branch, Hsinchu County, Taiwan
- Ting-Chun Lin
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- RadiRad Co., Ltd., New Taipei City, Taiwan
- Cheng-Ru Yang
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Department of Artificial Intelligence in Healthcare, International Academia of Biomedical Innovation Technology, Reno, NV, USA
- Chih-Chang Chang
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Hsuan-Kan Chang
- Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Chin-Chu Ko
- Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Tsung-Hsi Tu
- Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Li-Yu Fay
- Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Jau-Ching Wu
- Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Wen-Cheng Huang
- Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Hsiang-Wei Hu
- Biomedical Technology and Device Research Laboratories, Industrial Technology Research Institute, Hsinchu County, Taiwan
- You-Yin Chen
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ph.D. Program in Medical Neuroscience, College of Medical Science and Technology, Taipei Medical University, New Taipei City, Taiwan
- Chao-Hung Kuo
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
4
Machura B, Kucharski D, Bozek O, Eksner B, Kokoszka B, Pekala T, Radom M, Strzelczak M, Zarudzki L, Gutiérrez-Becker B, Krason A, Tessier J, Nalepa J. Deep learning ensembles for detecting brain metastases in longitudinal multi-modal MRI studies. Comput Med Imaging Graph 2024; 116:102401. [PMID: 38795690] [DOI: 10.1016/j.compmedimag.2024.102401]
Abstract
Metastatic brain cancer is a condition in which cancer cells migrate to the brain from extracranial sites. Notably, metastatic brain tumors surpass primary brain tumors in prevalence by a significant factor; they exhibit aggressive growth potential and can spread across diverse cerebral locations simultaneously. Magnetic resonance imaging (MRI) scans of individuals afflicted with metastatic brain tumors unveil a wide spectrum of characteristics: lesions vary in size and number, spanning from tiny nodules to substantial masses, and patients may present with a limited number of lesions or an extensive burden of hundreds of them. Moreover, longitudinal studies may depict surgical resection cavities, as well as areas of necrosis or edema. The manual analysis of such MRI scans is therefore difficult, user-dependent, and cost-inefficient, and, importantly, it lacks reproducibility. We address these challenges and propose a pipeline for detecting and analyzing brain metastases in longitudinal studies, which benefits from an ensemble of deep learning architectures originally designed for different downstream tasks (detection and segmentation). The experiments, performed over 275 multi-modal MRI scans of 87 patients acquired at 53 sites, coupled with rigorously validated manual annotations, revealed that our pipeline, built upon open-source tools to ensure its reproducibility, offers high-quality detection and allows for precise tracking of disease progression. To objectively quantify the generalizability of models, we introduce a new data stratification approach that accommodates the heterogeneity of the dataset and is used to construct training-test splits in a data-robust manner, alongside a new set of quality metrics to objectively assess algorithms. Our system provides a fully automatic and quantitative approach that may support physicians in the laborious process of tracking disease progression and evaluating treatment efficacy.
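The abstract does not specify how the ensemble's outputs are combined; one common fusion scheme for detection ensembles (a hypothetical sketch, not the authors' pipeline) is voxel-wise averaging of per-model lesion-probability maps followed by thresholding:

```python
import numpy as np

def ensemble_probability(prob_maps, threshold=0.5):
    """Fuse per-model lesion-probability maps by voxel-wise averaging,
    then binarize the mean map into a single detection mask."""
    stacked = np.stack(prob_maps, axis=0)   # shape: (n_models, *volume_shape)
    mean_prob = stacked.mean(axis=0)        # average over models
    return mean_prob >= threshold

# Toy example: three 1-D "probability maps" from three hypothetical models
maps = [np.array([0.9, 0.2, 0.6]),
        np.array([0.8, 0.1, 0.4]),
        np.array([0.7, 0.3, 0.2])]
print(ensemble_probability(maps).tolist())  # [True, False, False]
```

Only the first voxel's mean probability (0.8) clears the 0.5 threshold; averaging suppresses detections that only a minority of models propose, which is one way such ensembles reduce false positives.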
Affiliation(s)
- Damian Kucharski: Graylight Imaging, Gliwice, Poland; Silesian University of Technology, Gliwice, Poland
- Oskar Bozek: Department of Radiodiagnostics and Invasive Radiology, School of Medicine in Katowice, Medical University of Silesia in Katowice, Katowice, Poland
- Bartosz Eksner: Department of Radiology and Nuclear Medicine, ZSM Chorzów, Chorzów, Poland
- Bartosz Kokoszka: Department of Radiodiagnostics and Invasive Radiology, School of Medicine in Katowice, Medical University of Silesia in Katowice, Katowice, Poland
- Tomasz Pekala: Department of Radiodiagnostics, Interventional Radiology and Nuclear Medicine, University Clinical Centre, Katowice, Poland
- Mateusz Radom: Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland
- Marek Strzelczak: Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland
- Lukasz Zarudzki: Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland
- Benjamín Gutiérrez-Becker: Roche Pharma Research and Early Development, Informatics, Roche Innovation Center Basel, Basel, Switzerland
- Agata Krason: Roche Pharma Research and Early Development, Early Clinical Development Oncology, Roche Innovation Center Basel, Basel, Switzerland
- Jean Tessier: Roche Pharma Research and Early Development, Early Clinical Development Oncology, Roche Innovation Center Basel, Basel, Switzerland
- Jakub Nalepa: Graylight Imaging, Gliwice, Poland; Silesian University of Technology, Gliwice, Poland
5
Kim M, Wang JY, Lu W, Jiang H, Stojadinovic S, Wardak Z, Dan T, Timmerman R, Wang L, Chuang C, Szalkowski G, Liu L, Pollom E, Rahimy E, Soltys S, Chen M, Gu X. Where Does Auto-Segmentation for Brain Metastases Radiosurgery Stand Today? Bioengineering (Basel) 2024; 11:454. [PMID: 38790322] [PMCID: PMC11117895] [DOI: 10.3390/bioengineering11050454]
Abstract
Detection and segmentation of brain metastases (BMs) play a pivotal role in diagnosis, treatment planning, and follow-up evaluation for effective BM management. Given the rising prevalence of BMs and their tendency to present as multiple lesions, automated segmentation is becoming necessary in stereotactic radiosurgery. It not only alleviates the clinician's manual workload and improves clinical workflow efficiency but also ensures treatment safety, ultimately improving patient care. Recent strides in machine learning, particularly in deep learning (DL), have revolutionized medical image segmentation, achieving state-of-the-art results. This review analyzes auto-segmentation strategies, characterizes the data used, and assesses the performance of cutting-edge BM segmentation methodologies. Additionally, we delve into the challenges confronting BM segmentation and share insights gleaned from our algorithmic and clinical implementation experiences.
Affiliation(s)
- Matthew Kim: Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Jen-Yeu Wang: Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Weiguo Lu: Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Hao Jiang: NeuralRad LLC, Madison, WI 53717, USA
- Zabi Wardak: Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Tu Dan: Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Robert Timmerman: Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Lei Wang: Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Cynthia Chuang: Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Gregory Szalkowski: Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Lianli Liu: Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Erqi Pollom: Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Elham Rahimy: Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Scott Soltys: Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Mingli Chen: Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Xuejun Gu: Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA; Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
6
Yun S, Park JE, Kim N, Park SY, Kim HS. Reducing false positives in deep learning-based brain metastasis detection by using both gradient-echo and spin-echo contrast-enhanced MRI: validation in a multi-center diagnostic cohort. Eur Radiol 2024; 34:2873-2884. [PMID: 37891415] [DOI: 10.1007/s00330-023-10318-7]
Abstract
OBJECTIVES To develop a deep learning (DL) model for the detection of brain metastasis (BM) that incorporates both gradient-echo and turbo spin-echo contrast-enhanced MRI (dual-enhanced DL), and to evaluate it in a clinical cohort against human readers and DL using gradient-echo-based imaging only (GRE DL). MATERIALS AND METHODS DL detection was developed using data from 200 patients with BM (training set) and tested in 62 (internal) and 48 (external) consecutive patients who underwent stereotactic radiosurgery with diagnostic dual-enhanced imaging (dual-enhanced DL) and GRE imaging (GRE DL). Detection sensitivity and positive predictive value (PPV) were compared between the two DL models. Two neuroradiologists independently analyzed BMs, and reference standards for BM were drawn separately by another neuroradiologist. The relative differences (RDs) from the reference-standard BM numbers were compared between the DL models and the neuroradiologists. RESULTS Sensitivity was similar between GRE DL (93%, 95% confidence interval [CI]: 90-96%) and dual-enhanced DL (92% [89-94%]). The PPV of dual-enhanced DL was higher (89% [86-92%], p < .001) than that of GRE DL (76% [72-80%]). GRE DL significantly overestimated the number of metastases (false positives; RD: 0.05, 95% CI: 0.00-0.58) compared with the neuroradiologists (RD: 0.00, 95% CI: −0.28 to 0.15; p < .001), whereas dual-enhanced DL (RD: 0.00, 95% CI: 0.00-0.15) did not differ significantly from the neuroradiologists (RD: 0.00, 95% CI: −0.20 to 0.10; p = .913). CONCLUSION Dual-enhanced DL showed improved detection of BM and reduced overestimation compared with GRE DL, achieving performance similar to that of neuroradiologists. CLINICAL RELEVANCE STATEMENT Deep learning-based brain metastasis detection that also uses turbo spin-echo imaging reduces false-positive detections compared with gradient-echo imaging alone, aiding in the guidance of stereotactic radiosurgery.
KEY POINTS
• Deep learning for brain metastasis detection improved by using both gradient- and turbo spin-echo contrast-enhanced MRI (dual-enhanced deep learning).
• Dual-enhanced deep learning increased true positive detections and reduced overestimation.
• Dual-enhanced deep learning achieved similar performance to neuroradiologists for brain metastasis counts.
Affiliation(s)
- Suyoung Yun: Department of Radiology, Busan Paik Hospital, Inje University College of Medicine, Busan, Republic of Korea
- Ji Eun Park: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 43 Olympic-Ro 88, Songpa-Gu, Seoul, 05505, Republic of Korea
- Seo Young Park: Department of Statistics and Data Science, Korea National Open University, Seoul, Republic of Korea
- Ho Sung Kim: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 43 Olympic-Ro 88, Songpa-Gu, Seoul, 05505, Republic of Korea
7
Cakir M, Tulum G, Cuce F, Yilmaz KB, Aralasmak A, Isik Mİ, Canbolat H. Differential Diagnosis of Diabetic Foot Osteomyelitis and Charcot Neuropathic Osteoarthropathy with Deep Learning Methods. J Imaging Inform Med 2024:10.1007/s10278-024-01067-0. [PMID: 38491234] [DOI: 10.1007/s10278-024-01067-0]
Abstract
Our study aims to evaluate the potential of a deep learning (DL) algorithm for differentiating the bone marrow signal intensity of osteomyelitis (OM), Charcot neuropathic osteoarthropathy (CNO), and trauma (TR). The local ethics committee approved this retrospective study. From 148 patients, segmentation yielded 679 labeled regions for T1-weighted images (151 CNO, 257 OM, and 271 TR) and 714 labeled regions for T2-weighted images (160 CNO, 272 OM, and 282 TR). We employed both multi-class classification (MCC) and binary-class classification (BCC) approaches to compare the classification outcomes for CNO, TR, and OM. For T1-weighted images, ResNet-50 and EfficientNet-b0 achieved accuracies of 96.2% and 97.1%, respectively; for T2-weighted images, the corresponding accuracies were 95.6% and 96.8%. In BCC for CNO, OM, and TR on T1-weighted images, the sensitivity of ResNet-50 was 91.1%, 92.4%, and 96.6%, and that of EfficientNet-b0 was 93.2%, 97.6%, and 98.1%, respectively; on T2-weighted images, the sensitivity of ResNet-50 was 94.9%, 83.6%, and 97.9%, and that of EfficientNet-b0 was 95.6%, 85.2%, and 98.6%, respectively. The specificity values of ResNet-50 for CNO, OM, and TR were 98.1%, 97.9%, and 94.7% in T1-weighted images and 98.6%, 97.5%, and 96.7% in T2-weighted images, respectively. Similarly, for EfficientNet-b0, the specificity values were 98.9%, 98.7%, and 98.4% in T1-weighted images and 99.1%, 98.5%, and 98.7% in T2-weighted images, respectively. In the diabetic foot, deep learning methods serve as a non-invasive tool to differentiate CNO, OM, and TR with high accuracy.
Affiliation(s)
- Maide Cakir: Department of Electrical Engineering, Faculty of Engineering and Natural Sciences, Bandirma Onyedi Eylul University, Balikesir, Turkey
- Gökalp Tulum: Department of Electrical and Electronics Engineering, Engineering Faculty, Istanbul Topkapi University, Istanbul, Turkey
- Ferhat Cuce: Department of Radiology, Health Sciences University, Gulhane Training and Research Hospital, Ankara, Turkey
- Kerim Bora Yilmaz: Department of General Surgery, Health Sciences University, Gulhane Training and Research Hospital, Ankara, Turkey
- Ayse Aralasmak: Department of Radiology, Liv Hospital Vadi, Istanbul, Turkey
- Muhammet İkbal Isik: Department of Radiology, Health Sciences University, Gulhane Training and Research Hospital, Ankara, Turkey
- Hüseyin Canbolat: Department of Electrical and Electronics Engineering, Faculty of Engineering and Natural Sciences, Ankara Yildirim Beyazit University, Ankara, Turkey
8
Kikuchi K, Togao O, Yamashita K, Momosaka D, Kikuchi Y, Kuga D, Yuhei S, Fujioka Y, Narutomi F, Obara M, Yoshimoto K, Ishigami K. Comparison of diagnostic performance of radiologist- and AI-based assessments of T2-FLAIR mismatch sign and quantitative assessment using synthetic MRI in the differential diagnosis between astrocytoma, IDH-mutant and oligodendroglioma, IDH-mutant and 1p/19q-codeleted. Neuroradiology 2024; 66:333-341. [PMID: 38224343] [PMCID: PMC10859342] [DOI: 10.1007/s00234-024-03288-0]
Abstract
PURPOSE This study aimed to compare assessments by radiologists, artificial intelligence (AI), and quantitative measurement using synthetic MRI (SyMRI) for the differential diagnosis between astrocytoma, IDH-mutant, and oligodendroglioma, IDH-mutant and 1p/19q-codeleted, and to identify the superior method. METHODS Thirty-three cases (14 men, 19 women) comprising 19 astrocytomas and 14 oligodendrogliomas were evaluated. Four radiologists independently evaluated the presence of the T2-FLAIR mismatch sign. A 3D convolutional neural network (CNN) model was trained on 50 patients outside the test group (28 astrocytomas and 22 oligodendrogliomas) and transferred to evaluate T2-FLAIR mismatch lesions in the test group. If the CNN labeled more than 50% of the T2-prolonged lesion area, the result was considered positive. The T1/T2 relaxation times and proton density (PD) derived from SyMRI were measured in both gliomas. Each quantitative parameter (T1, T2, and PD) was compared between gliomas using the Mann-Whitney U-test. Receiver operating characteristic analysis was used to evaluate diagnostic performance. RESULTS The mean sensitivity, specificity, and area under the curve (AUC) for radiologists vs. AI were 76.3% vs. 94.7%, 100% vs. 92.9%, and 0.880 vs. 0.938, respectively. The two types of diffuse glioma could be differentiated using a cutoff of 2290/128 ms for the combined 90th percentile of T1 and 10th percentile of T2 relaxation times, with 94.4%/100% sensitivity/specificity and an AUC of 0.981. CONCLUSION Compared with the radiologists' assessment using the T2-FLAIR mismatch sign, the AI and SyMRI assessments increased both sensitivity and objectivity, resulting in improved diagnostic performance in differentiating these gliomas.
Affiliation(s)
- Kazufumi Kikuchi: Department of Molecular Imaging and Diagnosis, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Osamu Togao: Department of Molecular Imaging and Diagnosis, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Koji Yamashita: Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Daichi Momosaka: Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Yoshitomo Kikuchi: Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Daisuke Kuga: Department of Neurosurgery, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Sangatsuda Yuhei: Department of Neurosurgery, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Yutaka Fujioka: Department of Neurosurgery, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Fumiya Narutomi: Department of Anatomic Pathology, Pathological Sciences, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Makoto Obara: Philips Japan Ltd., 2-13-37, Konan, Minato-Ku, Tokyo, 108-8507, Japan
- Koji Yoshimoto: Department of Neurosurgery, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
- Kousei Ishigami: Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-Ku, Fukuoka, 812-8582, Japan
9
Rudie JD, Saluja R, Weiss DA, Nedelec P, Calabrese E, Colby JB, Laguna B, Mongan J, Braunstein S, Hess CP, Rauschecker AM, Sugrue LP, Villanueva-Meyer JE. The University of California San Francisco Brain Metastases Stereotactic Radiosurgery (UCSF-BMSR) MRI Dataset. Radiol Artif Intell 2024; 6:e230126. [PMID: 38381038] [PMCID: PMC10982817] [DOI: 10.1148/ryai.230126]
Abstract
Supplemental material is available for this article.
Affiliation(s)
- Jeffrey D. Rudie, David A. Weiss, Pierre Nedelec, Evan Calabrese, John B. Colby, Benjamin Laguna, John Mongan, Steve Braunstein, Christopher P. Hess, Andreas M. Rauschecker, Leo P. Sugrue, Javier E. Villanueva-Meyer: From the Center for Intelligent Imaging, Department of Radiology and Biomedical Imaging (J.D.R., D.A.W., P.N., E.C., J.B.C., B.L., J.M., C.P.H., A.M.R., L.P.S., J.E.V.M.) and Department of Radiation Oncology (S.B.), University of California San Francisco, 513 Parnassus Ave, Rm S-261, Box 0628, San Francisco, CA 94143-0628; Department of Radiology, University of California San Diego, San Diego, Calif (J.D.R.); Department of Radiology, University of Pennsylvania, Philadelphia, Pa (R.S.); and Department of Radiology, Duke University School of Medicine, Durham, NC (E.C.)
10
Jeong H, Park JE, Kim N, Yoon SK, Kim HS. Deep learning-based detection and quantification of brain metastases on black-blood imaging can provide treatment suggestions: a clinical cohort study. Eur Radiol 2024; 34:2062-2071. [PMID: 37658885] [PMCID: PMC10873231] [DOI: 10.1007/s00330-023-10120-5]
Abstract
OBJECTIVES We aimed to evaluate whether deep learning-based detection and quantification of brain metastasis (BM) may suggest treatment options for patients with BMs. METHODS The deep learning system (DLS) for detection and quantification of BM was developed in 193 patients and applied to 112 patients whose BMs were newly detected on black-blood contrast-enhanced T1-weighted imaging. Patients were assigned to one of three treatment suggestion groups according to the European Association of Neuro-Oncology (EANO)-European Society for Medical Oncology (ESMO) recommendations, using the number and volume of BMs detected by the DLS: short-term imaging follow-up without treatment (group A), surgery or stereotactic radiosurgery (limited BM, group B), or whole-brain radiotherapy or systemic chemotherapy (extensive BM, group C). The concordance between the DLS-based groups and clinical decisions was analyzed with and without consideration of targeted agents. The performance in distinguishing the high-risk group (B + C) was calculated. RESULTS Among 112 patients (mean age 64.3 years, 63 men), group C had the largest number and volume of BMs, followed by group B (4.4 and 851.6 mm³) and group A (1.5 and 15.5 mm³). The DLS-based groups were concordant with the actual clinical decisions, with an accuracy of 76.8% (86 of 112); modified accuracy considering targeted agents was 81.3% (91 of 112). The DLS showed 95% (82/86) sensitivity and 81% (21/26) specificity for distinguishing the high-risk group. CONCLUSION DLS-based detection and quantification of BM have the potential to help determine treatment options for both low- and high-risk groups with limited and extensive BMs. CLINICAL RELEVANCE STATEMENT For patients with newly diagnosed brain metastasis, deep learning-based detection and quantification may be used in clinical settings where prompt and accurate treatment decisions are required, which can lead to better patient outcomes.
KEY POINTS
• Deep learning-based brain metastasis detection and quantification showed excellent agreement with ground-truth classifications.
• By setting an algorithm to suggest treatment based on the number and volume of brain metastases detected by the deep learning system, the concordance was 81.3%.
• When dividing patients into low- and high-risk groups, the sensitivity for detecting the latter was 95%.
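The sensitivity and specificity figures quoted in this abstract follow directly from the stated counts; as a quick sanity check (a generic Python sketch, not the authors' code):

```python
def sensitivity(tp, fn):
    """True positive rate: correctly flagged positives over all positives."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: correctly excluded negatives over all negatives."""
    return tn / (tn + fp)

# Counts quoted in the abstract: 82 of 86 high-risk (B + C) patients flagged,
# 21 of 26 low-risk (group A) patients correctly not flagged
sens = sensitivity(tp=82, fn=4)
spec = specificity(tn=21, fp=5)
print(f"{sens:.0%} {spec:.0%}")  # 95% 81%
```

Rounded to whole percentages, these reproduce the 95% sensitivity and 81% specificity reported above.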
Affiliation(s)
- Hana Jeong: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 43 Olympic-ro 88, Songpa-Gu, 05505, Seoul, Korea
- Ji Eun Park: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 43 Olympic-ro 88, Songpa-Gu, 05505, Seoul, Korea
- Shin-Kyo Yoon: Department of Oncology, Asan Medical Center, Seoul, South Korea
- Ho Sung Kim: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 43 Olympic-ro 88, Songpa-Gu, 05505, Seoul, Korea
11
Hammer Y, Najjar W, Kahanov L, Joskowicz L, Shoshan Y. Two is better than one: longitudinal detection and volumetric evaluation of brain metastases after Stereotactic Radiosurgery with a deep learning pipeline. J Neurooncol 2024; 166:547-555. [PMID: 38300389] [PMCID: PMC10876809] [DOI: 10.1007/s11060-024-04580-y]
Abstract
PURPOSE Close MRI surveillance of patients with brain metastases following stereotactic radiosurgery (SRS) treatment is essential for assessing treatment response and the current disease status in the brain. This follow-up requires comparing target lesion sizes in pre-treatment (prior) and post-treatment (current) T1W-Gad MRI scans. Our aim was to evaluate SimU-Net, a novel deep learning model for the detection and volumetric analysis of brain metastases and their temporal changes in paired prior and current scans. METHODS SimU-Net is a simultaneous multi-channel 3D U-Net model trained on pairs of registered prior and current scans of a patient. We evaluated its performance on 271 pairs of T1W-Gad MRI scans from 226 patients who underwent SRS. An expert oncological neurosurgeon manually delineated 1,889 brain metastases in all the MRI scans (1,368 with diameters > 5 mm, 834 > 10 mm). The SimU-Net model was trained and validated on 205 pairs from 169 patients (1,360 metastases) and tested on 66 pairs from 57 patients (529 metastases). The results were then compared to the ground-truth delineations. RESULTS SimU-Net yielded a mean (std) detection precision and recall of 1.00±0.00 and 0.99±0.06 for metastases > 10 mm, 0.90±0.22 and 0.97±0.12 for metastases > 5 mm, and 0.76±0.27 and 0.94±0.16 for metastases of all sizes. It improves lesion detection precision by 8% for all metastasis sizes and by 12.5% for metastases < 10 mm with respect to a standalone 3D U-Net. The segmentation Dice scores were 0.90±0.10, 0.89±0.10, and 0.89±0.10 for the above size categories, all above the observer variability of 0.80±0.13. CONCLUSION Automated detection and volumetric quantification of brain metastases following SRS have the potential to enhance the assessment of treatment response and alleviate the clinician workload.
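For readers unfamiliar with the evaluation metrics quoted in this abstract, lesion-level detection precision/recall and the voxel-wise Dice score can be computed as follows (a generic illustration, not the authors' SimU-Net code):

```python
import numpy as np

def detection_precision_recall(true_positives, false_positives, false_negatives):
    """Lesion-level detection metrics from matched-lesion counts."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

def dice_coefficient(pred_mask, gt_mask):
    """Voxel-wise Dice overlap between two binary segmentation masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy example: 9 of 10 ground-truth lesions found, with 1 spurious detection
p, r = detection_precision_recall(true_positives=9, false_positives=1, false_negatives=1)
print(p, r)  # 0.9 0.9

# Toy masks: identical masks give a perfect Dice score of 1.0
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
print(dice_coefficient(mask, mask))  # 1.0
```

Precision penalizes false detections while recall penalizes missed lesions, which is why the abstract reports both alongside Dice, a pure overlap measure.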
Affiliation(s)
- Yonny Hammer: School of Computer Science and Engineering, The Hebrew University of Jerusalem, Edmond J. Safra Campus, Givat Ram, 9190401, Jerusalem, Israel
- Wenad Najjar: Department of Neurosurgery, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Lea Kahanov: Department of Neurosurgery, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Leo Joskowicz: School of Computer Science and Engineering, The Hebrew University of Jerusalem, Edmond J. Safra Campus, Givat Ram, 9190401, Jerusalem, Israel; Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Yigal Shoshan: Department of Neurosurgery, Hadassah Hebrew University Medical Center, Jerusalem, Israel
12
Eidex Z, Ding Y, Wang J, Abouei E, Qiu RLJ, Liu T, Wang T, Yang X. Deep learning in MRI-guided radiation therapy: A systematic review. J Appl Clin Med Phys 2024; 25:e14155. [PMID: 37712893] [PMCID: PMC10860468] [DOI: 10.1002/acm2.14155]
Abstract
Recent advances in MRI-guided radiation therapy (MRgRT) and deep learning techniques encourage fully adaptive radiation therapy (ART), real-time MRI monitoring, and the MRI-only treatment planning workflow. Given the rapid growth and emergence of new state-of-the-art methods in these fields, we systematically review 197 studies published on or before December 31, 2022, and categorize them into the areas of image segmentation, image synthesis, radiomics, and real-time MRI. Building from the underlying deep learning methods, we discuss their clinical importance and the current challenges in facilitating small tumor segmentation, deriving accurate x-ray attenuation information from MRI, tumor characterization and prognosis, and tumor motion tracking. In particular, we highlight recent trends in deep learning, such as the emergence of multi-modal models, vision transformers, and diffusion models.
Affiliation(s)
- Zach Eidex: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA; School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Yifu Ding: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jing Wang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Elham Abouei: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Richard L. J. Qiu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu: Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Tonghe Wang: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA; School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
13
Wang TW, Hsu MS, Lee WK, Pan HC, Yang HC, Lee CC, Wu YT. Brain metastasis tumor segmentation and detection using deep learning algorithms: A systematic review and meta-analysis. Radiother Oncol 2024; 190:110007. [PMID: 37967585] [DOI: 10.1016/j.radonc.2023.110007]
Abstract
BACKGROUND Manual detection of brain metastases is both laborious and inconsistent, driving the need for more efficient solutions. Accordingly, our systematic review and meta-analysis assessed the efficacy of deep learning algorithms in detecting and segmenting brain metastases from various primary origins in MRI images. METHODS We conducted a comprehensive search of PubMed, Embase, and Web of Science up to May 24, 2023, which yielded 42 relevant studies for our analysis. We assessed the quality of these studies using the QUADAS-2 and CLAIM tools. Using a random-effects model, we calculated the pooled lesion-wise Dice score as well as patient-wise and lesion-wise sensitivity. We performed subgroup analyses to investigate the influence of factors such as publication year, study design, training center of the model, validation methods, slice thickness, model input dimensions, MRI sequences fed to the model, and the specific deep learning algorithms employed. Additionally, meta-regression analyses were carried out considering the number of patients in the studies, the number of MRI manufacturers, the number of MRI models, training sample size, and lesion count. RESULTS Our analysis highlighted that deep learning models, particularly the U-Net and its variants, demonstrated superior segmentation accuracy. Enhanced detection sensitivity was observed with an increased diversity in MRI hardware, both in terms of manufacturer and model variety. Furthermore, slice thickness was identified as a significant factor influencing lesion-wise detection sensitivity. Overall, the pooled results indicated a lesion-wise Dice score of 79%, with patient-wise and lesion-wise sensitivities of 86% and 87%, respectively. CONCLUSIONS The study underscores the potential of deep learning in improving brain metastasis diagnostics and treatment planning. Still, more extensive cohorts and larger meta-analyses are needed for more practical and generalizable algorithms. Future research should prioritize these areas to advance the field. This study was funded by the Gen. & Mrs. M.C. Peng Fellowship and registered under PROSPERO (CRD42023427776).
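The pooled estimates in this abstract come from a random-effects model; a common choice for such pooling is the DerSimonian-Laird estimator, sketched below (illustrative only, not necessarily the exact estimator these authors used):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool study-level effect sizes (e.g., Dice scores or sensitivities)
    under a DerSimonian-Laird random-effects model.
    Returns pooled effect, between-study variance tau^2, and standard error."""
    effects = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                  # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)       # Cochran's Q heterogeneity statistic
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # between-study variance estimate
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, tau2, se
```

When the studies agree exactly, tau^2 collapses to zero and the pooled estimate equals the fixed-effect mean; heterogeneous studies inflate tau^2 and widen the standard error.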
Affiliation(s)
- Ting-Wei Wang: Institute of Biophotonics, National Yang Ming Chiao Tung University, 155, Sec. 2, Li-Nong St., Beitou Dist., Taipei 112304, Taiwan; School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ming-Sheng Hsu: School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Wei-Kai Lee: Institute of Biophotonics, National Yang Ming Chiao Tung University, 155, Sec. 2, Li-Nong St., Beitou Dist., Taipei 112304, Taiwan
- Hung-Chuan Pan: Department of Neurosurgery, Taichung Veterans General Hospital, Taichung, Taiwan; Department of Medical Research, Taichung Veterans General Hospital, Taichung, Taiwan
- Huai-Che Yang: School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- Cheng-Chia Lee: School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
- Yu-Te Wu: Institute of Biophotonics, National Yang Ming Chiao Tung University, 155, Sec. 2, Li-Nong St., Beitou Dist., Taipei 112304, Taiwan; Brain Research Center, National Yang Ming Chiao Tung University, Taiwan; College Medical Device Innovation and Translation Center, National Yang Ming Chiao Tung University, Taiwan
14
Wang J, Peng Y, Jing S, Han L, Li T, Luo J. A deep-learning approach for segmentation of liver tumors in magnetic resonance imaging using UNet++. BMC Cancer 2023; 23:1060. [PMID: 37923988] [PMCID: PMC10623778] [DOI: 10.1186/s12885-023-11432-x]
Abstract
OBJECTIVE Radiomic and deep learning studies based on magnetic resonance imaging (MRI) of liver tumors are steadily increasing, and manual segmentation of normal hepatic tissue and tumor has clear limitations. METHODS 105 patients diagnosed with hepatocellular carcinoma between Jan 2015 and Dec 2020 were retrospectively studied. The patients were divided into three sets: training (n = 83), validation (n = 11), and internal testing (n = 11). Additionally, 9 cases from the Cancer Imaging Archive were included as the external test set. Using the arterial phase and T2WI sequences, expert radiologists manually delineated all images. Liver tumors and liver segments were then segmented automatically with deep learning: a preliminary liver segmentation was performed using the UNet++ network, and the resulting liver mask was fed back into the UNet++ network as input to segment liver tumors. A threshold value was applied to the liver tumor segmentation to reduce the false positive rate. To evaluate the segmentation results, we calculated the Dice similarity coefficient (DSC), average false positive rate (AFPR), and delineation time. RESULTS The average DSC of the liver was 0.91 in the validation set and 0.92 in the internal testing set. In the validation set, manual and automatic delineation took 182.9 and 2.2 s, respectively. On average, manual and automatic delineation took 169.8 and 1.7 s, respectively. The average DSC of liver tumors was 0.612 in the validation set and 0.687 in the internal testing set. The average times for manual and automatic delineation and the AFPR in the internal testing set were 47.4 s, 2.9 s, and 1.4, respectively, and those in the external test set were 29.5 s, 4.2 s, and 1.6, respectively. CONCLUSION UNet++ can automatically segment normal hepatic tissue and liver tumors from MR images. It provides a methodological basis for automated segmentation of liver tumors, improves delineation efficiency, and supports the feature extraction required for further radiomics and deep learning analyses.
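The abstract mentions reducing false positives with a threshold on the tumor segmentation; one common form of such post-processing is dropping connected components below a volume threshold, sketched here (the voxel threshold is an illustrative assumption, not the paper's value):

```python
import numpy as np
from scipy import ndimage

def remove_small_components(mask, min_voxels=50):
    """Suppress likely false-positive predictions by dropping connected
    components smaller than min_voxels from a binary segmentation mask."""
    labeled, n = ndimage.label(mask.astype(bool))
    if n == 0:
        return np.zeros_like(mask, dtype=bool)
    # component sizes for labels 1..n
    sizes = ndimage.sum(np.ones_like(labeled), labeled, index=range(1, n + 1))
    keep = {i + 1 for i, s in enumerate(sizes) if s >= min_voxels}
    return np.isin(labeled, list(keep))
```

In a cascaded pipeline like the one described, this filter would run on the tumor probability map after binarization, keeping only components large enough to be plausible lesions.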
Affiliation(s)
- Jing Wang: Department of General Medicine, The First Medical Center of Chinese PLA General Hospital, Beijing, 100039, China
- Yanyang Peng: Department of Radiology, First Medical Center of General Hospital of People's Liberation Army, Beijing, China
- Shi Jing: Department of Oncology, Huaihe Hospital, Henan University, Kaifeng, 475000, China
- Lujun Han: Department of Radiology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510030, China; Translational Medical Center of Huaihe Hospital, Henan University, 115 West Gate Street, Kaifeng, 475000, China
- Tian Li: School of Basic Medicine, Fourth Military Medical University, Xi'an, 710032, China; Translational Medical Center of Huaihe Hospital, Henan University, 115 West Gate Street, Kaifeng, 475000, China
- Junpeng Luo: Translational Medical Center of Huaihe Hospital, Henan University, 115 West Gate Street, Kaifeng, 475000, China; Academy for Advanced Interdisciplinary Studies, Henan University, Zhengzhou, 450046, China
15
Chen J, Meng L, Bu C, Zhang C, Wu P. Feature pyramid network-based computer-aided detection and monitoring treatment response of brain metastases on contrast-enhanced MRI. Clin Radiol 2023; 78:e808-e814. [PMID: 37573242] [DOI: 10.1016/j.crad.2023.07.009]
Abstract
AIM To investigate the value of feature pyramid network (FPN)-based computer-aided detection (CAD) of brain metastases (BMs) before and after non-surgical treatment, and to evaluate its performance in monitoring treatment response of BM on contrast-enhanced (CE) magnetic resonance imaging (MRI). MATERIAL AND METHODS Eighty-five cancer patients newly diagnosed with BM who had undergone initial and follow-up three-dimensional (3D) CE MRI at Liaocheng People's Hospital were included retrospectively in this study. Manual detection (MD) was performed by reviewer 1. Computer-aided detection (CAD) was performed by reviewer 2 using uAI Discover-BMs software. The treatment response was assessed by the two reviewers for each patient separately. A paired chi-square test was used to compare differences in the detection of BM between MD and CAD. Agreement between MD and CAD in monitoring treatment response was assessed by a kappa test. RESULTS The sensitivities of MD and CAD on initial 3D CE MRI were 78.65% and 99.13%, respectively; on follow-up 3D CE MRI they were 76.32% and 98.24%, respectively. There was very good agreement between reviewer 1 and reviewer 2 in evaluating the treatment response of BM. CONCLUSION FPN-based CAD has a higher sensitivity, close to 100%, and fewer false negatives (FNs) for BM detection compared to MD. Although CAD had a few shortcomings in reflecting changes in BMs after treatment, it performed well in monitoring treatment response of BM on CE MRI.
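Agreement between the two reviewers' treatment-response calls was assessed with a kappa test; Cohen's kappa for two raters over categorical labels (e.g., response categories) can be computed as in this generic sketch (not the authors' code):

```python
import numpy as np

def cohens_kappa(rater1, rater2, categories=None):
    """Cohen's kappa: chance-corrected agreement between two raters'
    categorical labels over the same set of cases."""
    r1, r2 = list(rater1), list(rater2)
    cats = categories or sorted(set(r1) | set(r2))
    idx = {c: i for i, c in enumerate(cats)}
    cm = np.zeros((len(cats), len(cats)))
    for a, b in zip(r1, r2):
        cm[idx[a], idx[b]] += 1               # confusion matrix of ratings
    n = cm.sum()
    po = np.trace(cm) / n                     # observed agreement
    pe = (cm.sum(0) @ cm.sum(1)) / n ** 2     # agreement expected by chance
    return (po - pe) / (1 - pe) if pe < 1 else 1.0
```

Values above roughly 0.8 are conventionally read as "very good" agreement, which matches the wording of the abstract.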
Affiliation(s)
- J Chen: Department of MR, Liaocheng People's Hospital, Liaocheng, Shandong Province, 252000, China
- L Meng: Department of Radiotherapy, Liaocheng People's Hospital, Liaocheng, Shandong Province, 252000, China
- C Bu: Department of MR, Liaocheng People's Hospital, Liaocheng, Shandong Province, 252000, China
- C Zhang: Department of MR, Liaocheng People's Hospital, Liaocheng, Shandong Province, 252000, China
- P Wu: Philips Healthcare, Shanghai, 200072, China
16
Qu J, Zhang W, Shu X, Wang Y, Wang L, Xu M, Yao L, Hu N, Tang B, Zhang L, Lui S. Construction and evaluation of a gated high-resolution neural network for automatic brain metastasis detection and segmentation. Eur Radiol 2023; 33:6648-6658. [PMID: 37186214] [DOI: 10.1007/s00330-023-09648-3]
Abstract
OBJECTIVES To construct and evaluate a gated high-resolution convolutional neural network for detecting and segmenting brain metastasis (BM). METHODS This retrospective study included craniocerebral MRI scans of 1392 patients with 14,542 BMs and 200 patients with no BM between January 2012 and April 2022. A primary dataset of 1000 cases with 11,686 BMs was employed to construct the model, while an independent dataset of 100 cases with 1069 BMs from other hospitals was used to examine generalizability. The potential of the model for clinical use was also evaluated by comparing its performance in BM detection and segmentation to that of radiologists, and by comparing radiologists' lesion detection performance with and without model assistance. RESULTS Our model yielded a recall of 0.88, a Dice similarity coefficient (DSC) of 0.90, a positive predictive value (PPV) of 0.93 and 1.01 false positives per patient (FP) in the test set, and a recall of 0.85, a DSC of 0.89, a PPV of 0.93, and an FP of 1.07 in the dataset from other hospitals. With the model's assistance, the BM detection rates of 4 radiologists improved significantly, by 5.2 to 15.1% (all p < 0.001), as did their rates for detecting small BMs with diameter ≤ 5 mm (by 7.2 to 27.0%, all p < 0.001). CONCLUSIONS The proposed model enables accurate BM detection and segmentation with higher sensitivity and less time consumption, showing the potential to augment radiologists' performance in detecting BM. CLINICAL RELEVANCE STATEMENT This study offers a promising computer-aided tool to assist brain metastasis detection and segmentation in routine clinical practice for cancer patients. KEY POINTS • The GHR-CNN could accurately detect and segment BM on contrast-enhanced 3D-T1W images. • The GHR-CNN improved the BM detection rate of radiologists, including the detection of small lesions. • The GHR-CNN enabled automated segmentation of BM in a very short time.
Affiliation(s)
- Jiao Qu: Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Wenjing Zhang: Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Xin Shu: Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Ying Wang: Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China; Department of Nuclear Medicine, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Lituan Wang: Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Mengyuan Xu: Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Li Yao: Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Na Hu: Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Biqiu Tang: Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
- Lei Zhang: Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Su Lui: Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Xiang, Chengdu, 610041, China
17
Heyn C, Moody AR, Tseng CL, Wong E, Kang T, Kapadia A, Howard P, Maralani P, Symons S, Goubran M, Martel A, Chen H, Myrehaug S, Detsky J, Sahgal A, Soliman H. Segmentation of Brain Metastases Using Background Layer Statistics (BLAST). AJNR Am J Neuroradiol 2023; 44:1135-1143. [PMID: 37735088] [PMCID: PMC10549939] [DOI: 10.3174/ajnr.a7998]
Abstract
BACKGROUND AND PURPOSE Accurate segmentation of brain metastases is important for treatment planning and evaluating response. The aim of this study was to assess the performance of a semiautomated algorithm for brain metastases segmentation using Background Layer Statistics (BLAST). MATERIALS AND METHODS Nineteen patients with 48 parenchymal and dural brain metastases were included. Segmentation was performed by 4 neuroradiologists and 1 radiation oncologist. K-means clustering was used to identify normal gray and white matter (background layer) in a 2D parameter space of signal intensities from postcontrast T2 FLAIR and T1 MPRAGE sequences. The background layer was subtracted and operator-defined thresholds were applied in parameter space to segment brain metastases. The remaining voxels were back-projected to visualize segmentations in image space and evaluated by the operators. Segmentation performance was measured by calculating the Dice-Sørensen coefficient and Hausdorff distance using ground truth segmentations made by the investigators. Contours derived from the segmentations were evaluated for clinical acceptance using a 5-point Likert scale. RESULTS The median Dice-Sørensen coefficient was 0.82 for all brain metastases and 0.9 for brain metastases of ≥10 mm. The median Hausdorff distance was 1.4 mm. Excellent interreader agreement for brain metastases volumes was found with an intraclass correlation coefficient = 0.9978. The median segmentation time was 2.8 minutes/metastasis. Forty-five contours (94%) had a Likert score of 4 or 5, indicating that the contours were acceptable for treatment, requiring no changes or minor edits. CONCLUSIONS We show accurate and reproducible segmentation of brain metastases using BLAST and demonstrate its potential as a tool for radiation planning and evaluating treatment response.
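The BLAST procedure described above (k-means clustering to find the normal-tissue "background layer" in a 2D intensity space, background subtraction, then operator-defined thresholds) can be sketched roughly as follows; the z-scoring, cluster count, minimum-cluster-size fraction, and distance threshold here are illustrative assumptions, not the published implementation:

```python
import numpy as np

def _kmeans(pts, k, iters=25, seed=0):
    """Minimal Lloyd's k-means, sufficient for this sketch."""
    rng = np.random.default_rng(seed)
    centroids = pts[rng.choice(len(pts), k, replace=False)].copy()
    labels = np.zeros(len(pts), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(pts[:, None] - centroids[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pts[labels == j].mean(0)
    return centroids, labels

def blast_like_segmentation(flair, t1, n_clusters=2, thresh=3.0, min_frac=0.05, seed=0):
    """Cluster voxels in the 2D (FLAIR, T1) intensity space, take the large
    clusters as the normal-tissue 'background layer', and flag voxels far
    from every background centroid as lesion candidates. thresh plays the
    role of the operator-defined threshold."""
    pts = np.column_stack([flair.ravel(), t1.ravel()]).astype(float)
    pts = (pts - pts.mean(0)) / (pts.std(0) + 1e-8)   # z-score each axis
    centroids, labels = _kmeans(pts, n_clusters, seed=seed)
    counts = np.bincount(labels, minlength=n_clusters)
    bg = centroids[counts >= min_frac * len(pts)]     # background layer
    if len(bg) == 0:
        bg = centroids
    d = np.min(np.linalg.norm(pts[:, None] - bg[None], axis=2), axis=1)
    return (d > thresh).reshape(flair.shape)
```

In the published workflow the remaining voxels are back-projected into image space for operator review; this sketch only returns the candidate mask.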
Affiliation(s)
- Chris Heyn: Department of Medical Imaging, Sunnybrook Health Sciences Center, Toronto, Ontario, Canada; Sunnybrook Research Institute, Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Alan R Moody: Department of Medical Imaging, Sunnybrook Health Sciences Center, Toronto, Ontario, Canada; Sunnybrook Research Institute, Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Chia-Lin Tseng: Department of Radiation Oncology, Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Erin Wong: Department of Medical Imaging, Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Tony Kang: Department of Medical Imaging, Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Anish Kapadia: Department of Medical Imaging, Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Peter Howard: Department of Medical Imaging, Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Pejman Maralani: Department of Medical Imaging, Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Sean Symons: Department of Medical Imaging, Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Maged Goubran: Sunnybrook Research Institute, Sunnybrook Health Sciences Center, Toronto, Ontario, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Anne Martel: Sunnybrook Research Institute, Sunnybrook Health Sciences Center, Toronto, Ontario, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Hanbo Chen: Department of Radiation Oncology, Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Sten Myrehaug: Department of Radiation Oncology, Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Jay Detsky: Department of Radiation Oncology, Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Arjun Sahgal: Department of Radiation Oncology, Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
- Hany Soliman: Department of Radiation Oncology, Sunnybrook Health Sciences Center, Toronto, Ontario, Canada
18
Zhao JY, Cao Q, Chen J, Chen W, Du SY, Yu J, Zeng YM, Wang SM, Peng JY, You C, Xu JG, Wang XY. Development and validation of a fully automatic tissue delineation model for brain metastasis using a deep neural network. Quant Imaging Med Surg 2023; 13:6724-6734. [PMID: 37869331] [PMCID: PMC10585546] [DOI: 10.21037/qims-22-1216]
Abstract
Background Stereotactic radiosurgery (SRS) treatment planning requires accurate delineation of brain metastases, a task that can be tedious and time-consuming. Although studies have explored the use of convolutional neural networks (CNNs) in magnetic resonance imaging (MRI) for automatic brain metastases delineation, none of these studies performed a clinical evaluation, raising concerns about clinical applicability. This study aimed to develop an artificial intelligence (AI) tool for the automatic delineation of single brain metastasis that could be integrated into clinical practice. Methods Data from 426 patients with postcontrast T1-weighted MRIs who underwent SRS between March 2007 and August 2019 were retrospectively collected and divided into training, validation, and testing cohorts of 299, 42, and 85 patients, respectively. Two Gamma Knife (GK) surgeons contoured the brain metastases as the ground truth. A novel 2.5D CNN was developed for single brain metastasis delineation. The mean Dice similarity coefficient (DSC) and average surface distance (ASD) were used to assess the performance of this method. Results The mean DSC and ASD values were 88.34%±5.00% and 0.35±0.21 mm, respectively, for the contours generated with the AI tool on the testing set. The AI tool's DSC performance depended on metastatic shape, enhancement shape, and the presence of peritumoral edema (all P values <0.05). The clinical experts' subjective assessments showed that 415 out of 572 slices (72.6%) in the testing cohort were acceptable for clinical usage without revision. The average time spent editing an AI-generated contour was 74 seconds, versus 196 seconds for manual contouring (P<0.01). Conclusions The contours delineated with the AI tool for single brain metastasis were in close agreement with the ground truth. The developed AI tool can effectively reduce contouring time and aid in GK treatment planning of single brain metastasis in clinical practice.
Affiliation(s)
- Jie-Yi Zhao: Department of Neurosurgery, West China Hospital, Sichuan University, Chengdu, China
- Qi Cao: Reproductive Medical Center, West China Second University Hospital, Sichuan University, Chengdu, China
- Jing Chen: Department of Neurosurgery, West China Hospital, Sichuan University, Chengdu, China
- Wei Chen: Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
- Si-Yu Du: West China School of Medicine, Sichuan University, Chengdu, China
- Jie Yu: West China School of Public Health, Sichuan University, Chengdu, China
- Yi-Miao Zeng: West China School of Medicine, Sichuan University, Chengdu, China
- Shu-Min Wang: West China School of Medicine, Sichuan University, Chengdu, China
- Jing-Yu Peng: West China School of Medicine, Sichuan University, Chengdu, China
- Chao You: Department of Neurosurgery, West China Hospital, Sichuan University, Chengdu, China
- Jian-Guo Xu: Department of Neurosurgery, West China Hospital, Sichuan University, Chengdu, China
- Xiao-Yu Wang: Department of Neurosurgery, West China Hospital, Sichuan University, Chengdu, China
19
Tulum G. Novel radiomic features versus deep learning: differentiating brain metastases from pathological lung cancer types in small datasets. Br J Radiol 2023; 96:20220841. [PMID: 37129296] [PMCID: PMC10230391] [DOI: 10.1259/bjr.20220841]
Abstract
OBJECTIVE Accurate diagnosis and early treatment are crucial for survival in patients with brain metastases. This study aims to expand the capability of radiomics-based classification algorithms with novel features and to compare the results with deep-learning-based algorithms in differentiating the subtypes of lung cancer from MRI of metastatic lesions in the brain. METHODS This study includes 75 small cell lung carcinoma, 72 squamous cell carcinoma, and 75 adenocarcinoma segments. For the radiomics-based algorithm, novel features were extracted from the original, Laplacian-of-Gaussian-filtered, and two-dimensional wavelet-transformed images, and a new three-stage feature selection algorithm was proposed for feature selection. Two classification methods were applied to the images to identify the subtypes of lung cancer. Additionally, EfficientNet and ResNet with transfer learning were used as classifiers to compare against the proposed algorithm. RESULTS The sensitivity and specificity values of the first radiomics-based classifier are 94.44% and 95.33%, and those of the second classifier are 87.67% and 92.62%, respectively. In addition, a one-vs-all comparison was made using two deep-learning-based classifiers: sensitivity and specificity values of 94.29% and 94.08% were obtained from ResNet-50, and the corresponding metrics for EfficientNet-b0 are 92.86% and 93.42%. Furthermore, the accuracies of the two radiomics-based and two deep-learning-based models were 84.68%, 78.37%, 92.34%, and 90.99%, respectively, for the one-vs-one approach. CONCLUSION The results suggest that the proposed radiomics-based algorithm is a helpful diagnostic assistant for improving decision-making when treating patients with brain metastases in small datasets. ADVANCES IN KNOWLEDGE First, the proposed method extracts novel features from transformations of the original images, such as the wavelet transform and the Laplacian of Gaussian filter, for the first time in the literature. Second, this is the first study to investigate the classification performance of shallow and deep learning approaches for identifying subtypes of lung cancer.
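Laplacian-of-Gaussian filtered images are a typical source of radiomic first-order features of the kind this abstract describes; a minimal sketch using `scipy.ndimage.gaussian_laplace` follows (the sigma values and feature names are illustrative, not the paper's feature set):

```python
import numpy as np
from scipy import ndimage

def log_firstorder_features(image, mask, sigmas=(1.0, 2.0, 3.0)):
    """First-order statistics inside a lesion mask, computed on
    Laplacian-of-Gaussian filtered copies of the image at several
    scales (sigma, in voxels)."""
    feats = {}
    roi = mask.astype(bool)
    for s in sigmas:
        filt = ndimage.gaussian_laplace(image.astype(float), sigma=s)
        vals = filt[roi]
        feats[f"log_sigma{s}_mean"] = float(vals.mean())
        feats[f"log_sigma{s}_std"] = float(vals.std())
        feats[f"log_sigma{s}_skew"] = float(
            ((vals - vals.mean()) ** 3).mean() / (vals.std() ** 3 + 1e-12))
        feats[f"log_sigma{s}_energy"] = float((vals ** 2).sum())
    return feats
```

In a radiomics pipeline, feature vectors like this (together with wavelet-derived analogues) would then pass through feature selection before classification.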
Affiliation(s)
- Gökalp Tulum: Department of Mechatronics Engineering, Engineering and Architecture Faculty, Nisantasi University, Istanbul, Turkey
20
Wang H, Qu T, Bernstein K, Barbee D, Kondziolka D. Automatic segmentation of vestibular schwannomas from T1-weighted MRI with a deep neural network. Radiat Oncol 2023; 18:78. [PMID: 37158968] [PMCID: PMC10169364] [DOI: 10.1186/s13014-023-02263-y]
Abstract
BACKGROUND Long-term follow-up using volumetric measurement could significantly assist in the management of vestibular schwannomas (VS). Manual segmentation of VS from MRI for treatment planning and follow-up assessment is labor-intensive and time-consuming. This study aims to develop a deep learning technique to fully automatically segment VS from MRI. METHODS This study retrospectively analyzed MRI data of 737 patients who received gamma knife radiosurgery for VS. Treatment planning T1-weighted isotropic MR images and manually contoured gross tumor volumes (GTV) were used for model development. A 3D convolutional neural network (CNN) was built on ResNet blocks. Spatial attention and deep supervision modules were integrated in each decoder level to enhance training for the small tumor volume on brain MRI. The model was trained and tested on data from 587 and 150 patients, respectively, drawn from this institution (n = 495) and a publicly available dataset (n = 242). Model performance was assessed by the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), average symmetric surface distance (ASSD) and relative absolute volume difference (RAVD) of the model segmentation results against the GTVs. RESULTS Measured on combined testing data from the two institutions, the proposed method achieved a mean DSC of 0.91 ± 0.08, ASSD of 0.3 ± 0.4 mm, HD95 of 1.3 ± 1.6 mm, and RAVD of 0.09 ± 0.15. The DSCs were 0.91 ± 0.09 and 0.92 ± 0.06 on the 100 testing patients from this institution and the 50 from the public data, respectively. CONCLUSIONS A CNN model was developed for fully automated segmentation of VS on T1-weighted isotropic MRI. The model achieved good performance compared with physician clinical delineations on a sizeable dataset from two institutions. The proposed method potentially facilitates the clinical workflow of radiosurgery for VS patient management.
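HD95 and ASSD, reported in this abstract, are surface-distance metrics; they can be computed from binary masks with distance transforms as in this sketch (assumes isotropic voxel spacing; not the authors' implementation):

```python
import numpy as np
from scipy import ndimage

def surface_distances(a, b, spacing=1.0):
    """Distances from the surface voxels of mask a to the surface of mask b."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a & ~ndimage.binary_erosion(a)
    surf_b = b & ~ndimage.binary_erosion(b)
    # distance of every voxel to the nearest surface voxel of b
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    return dist_to_b[surf_a]

def hd95_and_assd(pred, gt, spacing=1.0):
    """95th-percentile Hausdorff distance and average symmetric surface
    distance between two binary masks."""
    d_pg = surface_distances(pred, gt, spacing)
    d_gp = surface_distances(gt, pred, spacing)
    hd95 = max(np.percentile(d_pg, 95), np.percentile(d_gp, 95))
    assd = np.concatenate([d_pg, d_gp]).mean()
    return hd95, assd
```

Using the 95th percentile rather than the maximum makes the Hausdorff distance robust to isolated stray voxels, which is why HD95 is the usual choice in segmentation papers.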
Affiliation(s)
- Hesheng Wang, Tanxia Qu, Kenneth Bernstein, David Barbee: Department of Radiation Oncology, NYU Grossman School of Medicine, New York, NY 10016, USA
- Douglas Kondziolka: Departments of Radiation Oncology and Neurosurgery, NYU Grossman School of Medicine, New York, NY 10016, USA
21
Ocaña-Tienda B, Pérez-Beteta J, Villanueva-García JD, Romero-Rosales JA, Molina-García D, Suter Y, Asenjo B, Albillo D, Ortiz de Mendivil A, Pérez-Romasanta LA, González-Del Portillo E, Llorente M, Carballo N, Nagib-Raya F, Vidal-Denis M, Luque B, Reyes M, Arana E, Pérez-García VM. A comprehensive dataset of annotated brain metastasis MR images with clinical and radiomic data. Sci Data 2023; 10:208. PMID: 37059722; PMCID: PMC10104872; DOI: 10.1038/s41597-023-02123-0.
Abstract
Brain metastasis (BM) is one of the main complications of many cancers and the most frequent malignancy of the central nervous system. Imaging studies of BMs are routinely used for diagnosis, treatment planning, and follow-up. Artificial intelligence (AI) has great potential to provide automated tools to assist in the management of disease. However, AI methods require large datasets for training and validation, and to date there has been only one publicly available imaging dataset, comprising 156 BMs. This paper publishes 637 high-resolution imaging studies of 75 patients harboring 260 BM lesions, together with their clinical data. It also includes semi-automatic segmentations of 593 BMs, including pre- and post-treatment T1-weighted cases, and a set of morphological and radiomic features for the segmented cases. This data-sharing initiative is expected to enable research into, and performance evaluation of, automatic BM detection, lesion segmentation, disease status evaluation, and treatment planning methods for BMs, as well as the development and validation of predictive and prognostic tools with clinical applicability.
Affiliation(s)
- Beatriz Ocaña-Tienda, Julián Pérez-Beteta, José A Romero-Rosales, David Molina-García, Belén Luque, Víctor M Pérez-García: Mathematical Oncology Laboratory (MOLAB), University of Castilla-La Mancha, Ciudad Real, Spain
- Yannick Suter, Mauricio Reyes: Medical Image Analysis Group, ARTORG Research Center, Bern, Switzerland
- Beatriz Asenjo, Fátima Nagib-Raya, Maria Vidal-Denis: Radiology Department, Hospital Regional Universitario de Málaga, Málaga, Spain
- David Albillo, Manuel Llorente: Radiology Department, MD Anderson Cancer Center, Madrid, Spain
- Estanislao Arana: Radiology Department, Fundación Instituto Valenciano de Oncología, Valencia, Spain
22
Dikici E, Nguyen XV, Takacs N, Prevedello LM. Prediction of model generalizability for unseen data: Methodology and case study in brain metastases detection in T1-weighted contrast-enhanced 3D MRI. Comput Biol Med 2023; 159:106901. PMID: 37068317; DOI: 10.1016/j.compbiomed.2023.106901.
Abstract
BACKGROUND AND PURPOSE A medical AI system's generalizability describes the continuity of its performance acquired from varying geographic, historical, and methodologic settings. Previous literature on this topic has mostly focused on "how" to achieve high generalizability (e.g., via larger datasets, transfer learning, data augmentation, model regularization schemes), with limited success. Instead, we aim to understand "when" the generalizability is achieved: Our study presents a medical AI system that could estimate its generalizability status for unseen data on-the-fly. MATERIALS AND METHODS We introduce a latent space mapping (LSM) approach utilizing Fréchet distance loss to force the underlying training data distribution into a multivariate normal distribution. During the deployment, a given test data's LSM distribution is processed to detect its deviation from the forced distribution; hence, the AI system could predict its generalizability status for any previously unseen data set. If low model generalizability is detected, then the user is informed by a warning message integrated into a sample deployment workflow. While the approach is applicable for most classification deep neural networks (DNNs), we demonstrate its application to a brain metastases (BM) detector for T1-weighted contrast-enhanced (T1c) 3D MRI. The BM detection model was trained using 175 T1c studies acquired internally (from the authors' institution) and tested using (1) 42 internally acquired exams and (2) 72 externally acquired exams from the publicly distributed Brain Mets dataset provided by the Stanford University School of Medicine. Generalizability scores, false positive (FP) rates, and sensitivities of the BM detector were computed for the test datasets. 
RESULTS AND CONCLUSION The model predicted its generalizability to be low for 31% of the testing data (two of the internally and 33 of the externally acquired exams). It produced ∼13.5 false positives (FPs) at 76.1% BM detection sensitivity for the low-generalizability group, and ∼10.5 FPs at 89.2% BM detection sensitivity for the high-generalizability group. These results suggest that the proposed formulation enables a model to predict its generalizability for unseen data.
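The core idea — force training latents toward a multivariate normal, then flag test batches whose latent distribution drifts too far in Fréchet distance — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the latent dimensionality, threshold, and simulated data are all made up, and the Gaussian-vs-Gaussian Fréchet distance formula d² = |μ₁−μ₂|² + Tr(Σ₁+Σ₂−2(Σ₁Σ₂)^½) stands in for their Fréchet distance loss.

```python
import numpy as np

def _sqrtm_psd(a: np.ndarray) -> np.ndarray:
    """Matrix square root of a symmetric positive semi-definite matrix."""
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def frechet_distance(mu1, sigma1, mu2, sigma2) -> float:
    """Fréchet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2)."""
    s2h = _sqrtm_psd(sigma2)
    cross = _sqrtm_psd(s2h @ sigma1 @ s2h)  # Tr sqrt(S1 S2), symmetric form
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * cross))

rng = np.random.default_rng(0)
train_latents = rng.standard_normal((500, 8))   # latents forced toward N(0, I)
mu_t, sigma_t = train_latents.mean(0), np.cov(train_latents, rowvar=False)

def score(latents: np.ndarray) -> float:
    """Fréchet distance of a test batch's latent distribution to training."""
    return frechet_distance(latents.mean(0), np.cov(latents, rowvar=False),
                            mu_t, sigma_t)

in_dist = rng.standard_normal((50, 8))          # resembles the training data
shifted = rng.standard_normal((50, 8)) + 3.0    # simulated distribution shift

THRESHOLD = 5.0  # hypothetical cutoff; would be calibrated on validation data
for name, batch in [("in-distribution", in_dist), ("shifted", shifted)]:
    d = score(batch)
    status = "OK" if d < THRESHOLD else "warn: low generalizability expected"
    print(f"{name}: FD = {d:.2f} -> {status}")
```

The shifted batch scores a far larger distance than the in-distribution batch, which is what would trigger the warning message in a deployment workflow.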
Affiliation(s)
- Engin Dikici, Xuan V Nguyen, Noah Takacs, Luciano M Prevedello: Department of Radiology, The Ohio State University College of Medicine, Columbus, OH 43210, USA
23
Wang JY, Qu V, Hui C, Sandhu N, Mendoza MG, Panjwani N, Chang YC, Liang CH, Lu JT, Wang L, Kovalchuk N, Gensheimer MF, Soltys SG, Pollom EL. Stratified assessment of an FDA-cleared deep learning algorithm for automated detection and contouring of metastatic brain tumors in stereotactic radiosurgery. Radiat Oncol 2023; 18:61. PMID: 37016416; PMCID: PMC10074777; DOI: 10.1186/s13014-023-02246-z.
Abstract
PURPOSE Artificial intelligence-based tools can be leveraged to improve detection and segmentation of brain metastases for stereotactic radiosurgery (SRS). VBrain by Vysioneer Inc. is a deep learning algorithm with recent FDA clearance to assist in brain tumor contouring. We aimed to assess the performance of this tool across demographic and clinical characteristics among patients with brain metastases treated with SRS. MATERIALS AND METHODS We randomly selected 100 patients with brain metastases who underwent initial SRS on the CyberKnife from 2017 to 2020 at a single institution. Cases with resection cavities were excluded from the analysis. Computed tomography (CT) and axial T1-weighted post-contrast magnetic resonance (MR) image data were extracted for each patient and uploaded to VBrain. A brain metastasis was considered "detected" when the VBrain "predicted" contours overlapped with the corresponding physician contours ("ground-truth" contours). We evaluated performance of VBrain against ground-truth contours using the following metrics: lesion-wise Dice similarity coefficient (DSC), lesion-wise average Hausdorff distance (AVD), false positive count (FP), and lesion-wise sensitivity (%). Kruskal-Wallis tests were performed to assess the relationships between patient characteristics (sex, race, primary histology, age, and size and number of brain metastases) and the performance metrics (DSC, AVD, FP, and sensitivity). RESULTS We analyzed 100 patients with 435 intact brain metastases treated with SRS. Our cohort had a median of 2 brain metastases (range: 1 to 52), a median age of 69 (range: 19 to 91), and an even split of male and female patients. The primary site breakdown was 56% lung, 10% melanoma, 9% breast, 8% gynecological, 5% renal, 4% gastrointestinal, 2% sarcoma, and 6% other, while the race breakdown was 60% White, 18% Asian, 3% Black/African American, 2% Native Hawaiian or other Pacific Islander, and 17% other/unknown/not reported. The median tumor size was 0.112 c.c. (range: 0.010-26.475 c.c.). We found a mean lesion-wise DSC of 0.723, a mean lesion-wise AVD of 7.34% of lesion size (0.704 mm), a mean FP count of 0.72 tumors per case, and a lesion-wise sensitivity of 89.30% for all lesions. Mean sensitivity was 99.07%, 97.59%, and 96.23% for lesions with diameter equal to or greater than 10 mm, 7.5 mm, and 5 mm, respectively. No significant differences in performance metrics were observed across demographic or clinical characteristic groups. CONCLUSION In this study, a commercial deep learning algorithm showed promising results in segmenting brain metastases, with 96.23% sensitivity for metastases with diameters of 5 mm or greater. As the software is an assistive AI, future work integrating VBrain into the clinical workflow can provide further clinical and research insights.
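The lesion-wise "detected" criterion used here — any overlap between a predicted contour and a physician contour, with non-overlapping predictions counted as false positives — can be sketched with binary masks. This is a generic reconstruction of the evaluation logic, not VBrain's code:

```python
import numpy as np

def lesionwise_metrics(gt_lesions, pred_lesions):
    """Match predicted lesion masks to ground-truth lesion masks by overlap.

    A ground-truth lesion counts as detected if any predicted mask overlaps
    it; predicted masks overlapping no ground-truth lesion are false positives.
    Returns (lesion-wise sensitivity, false positive count).
    """
    detected = [any(np.logical_and(g, p).any() for p in pred_lesions)
                for g in gt_lesions]
    fps = sum(1 for p in pred_lesions
              if not any(np.logical_and(g, p).any() for g in gt_lesions))
    sens = sum(detected) / len(gt_lesions) if gt_lesions else 1.0
    return sens, fps

# Toy 2D example: two GT lesions, one predicted hit and one spurious contour.
shape = (16, 16)
gt1 = np.zeros(shape, bool); gt1[2:5, 2:5] = True
gt2 = np.zeros(shape, bool); gt2[10:13, 10:13] = True
hit = np.zeros(shape, bool); hit[3:6, 3:6] = True      # overlaps gt1
spur = np.zeros(shape, bool); spur[0:2, 12:14] = True  # overlaps nothing
sens, fps = lesionwise_metrics([gt1, gt2], [hit, spur])
print(sens, fps)  # 0.5 1
```

In practice the per-lesion masks would come from connected-component labeling of the contour volumes.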
Affiliation(s)
- Jen-Yeu Wang, Vera Qu, Caressa Hui, Navjot Sandhu, Maria G Mendoza, Neil Panjwani, Lei Wang, Nataliya Kovalchuk, Michael F Gensheimer, Scott G Soltys, Erqi L Pollom: Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA 94305, USA
24
Eidex Z, Ding Y, Wang J, Abouei E, Qiu RL, Liu T, Wang T, Yang X. Deep Learning in MRI-guided Radiation Therapy: A Systematic Review. arXiv 2023; arXiv:2303.11378v2. PMID: 36994167; PMCID: PMC10055493.
Abstract
MRI-guided radiation therapy (MRgRT) offers a precise, adaptive approach to treatment planning. Deep learning applications that augment the capabilities of MRgRT are systematically reviewed, with emphasis placed on the underlying methods. Studies are further categorized into the areas of segmentation, synthesis, radiomics, and real-time MRI. Finally, clinical implications, current challenges, and future directions are discussed.
Affiliation(s)
- Zach Eidex, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA; School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Yifu Ding, Jing Wang, Elham Abouei, Richard L.J. Qiu: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Tian Liu: Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY
- Tonghe Wang: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY
25
Luo X, Yang Y, Yin S, Li H, Zhang W, Xu G, Fan W, Zheng D, Li J, Shen D, Gao Y, Shao Y, Ban X, Li J, Lian S, Zhang C, Ma L, Lin C, Luo Y, Zhou F, Wang S, Sun Y, Zhang R, Xie C. False-negative and false-positive outcomes of computer-aided detection on brain metastasis: Secondary analysis of a multicenter, multireader study. Neuro Oncol 2023; 25:544-556. PMID: 35943350; PMCID: PMC10013637; DOI: 10.1093/neuonc/noac192.
Abstract
BACKGROUND Errors have seldom been evaluated in computer-aided detection of brain metastases. This study aimed to analyze the false negatives (FNs) and false positives (FPs) generated by a brain metastasis detection system (BMDS) and by readers. METHODS A deep learning-based BMDS was developed and prospectively validated in a multicenter, multireader study. This ad hoc secondary analysis was restricted to the prospective participants (148 patients with 1,066 brain metastases and 152 normal controls). Three trainees and three experienced radiologists read the MR images without and then with the BMDS. The number of FNs and FPs per patient, the jackknife alternative free-response receiver operating characteristic figure of merit (FOM), and lesion features associated with FNs were analyzed for the BMDS and the readers, the last using binary logistic regression. RESULTS The FNs, FPs, and FOM of the stand-alone BMDS were 0.49, 0.38, and 0.97, respectively. Compared with independent reading, BMDS-assisted reading generated 79% fewer FNs (1.98 vs 0.42, P < .001); 41% more FPs overall (0.17 vs 0.24, P < .001) and 125% more FPs for trainees (P < .001); and a higher FOM (0.87 vs 0.98, P < .001). Lesions with small size, greater number, irregular shape, lower signal intensity, and a nonbrain-surface location were associated with FNs for readers. Small, irregular, and necrotic lesions were more frequently found among FNs for the BMDS. The FPs mainly resulted from small blood vessels for both the BMDS and the readers. CONCLUSIONS Despite the improvement in detection performance, radiologists, especially less-experienced ones, should pay attention to FPs and to small lesions with lower enhancement.
Affiliation(s)
- Xiao Luo, Yadi Yang, Shaohan Yin, Hui Li, Weijing Zhang, Guixiao Xu, Xiaohua Ban, Jing Li, Shanshan Lian, Cheng Zhang, Lidi Ma, Cuiping Lin, Yingwei Luo, Fan Zhou, Shiyuan Wang, Rong Zhang, Chuanmiao Xie: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, and Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, China
- Ying Sun: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Weixiong Fan: Department of Radiology, Meizhou People's Hospital, Meizhou, China
- Dechun Zheng: Department of Radiology, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou, Fujian Province, China
- Jianpeng Li: Department of Radiology, Affiliated Dongguan Hospital, Southern Medical University, Guangzhou, China
- Dinggang Shen: R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Yaozong Gao: R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Ying Shao: R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Department of Radiation Oncology, Sun Yat-sen University Cancer Center, Guangzhou, China
26
A Deep Learning-Based Computer Aided Detection (CAD) System for Difficult-to-Detect Brain Metastases. Int J Radiat Oncol Biol Phys 2023; 115:779-793. PMID: 36289038; DOI: 10.1016/j.ijrobp.2022.09.068.
Abstract
PURPOSE We sought to develop a computer-aided detection (CAD) system that optimally augments human performance, excelling especially at identifying small inconspicuous brain metastases (BMs), by training a convolutional neural network on a unique magnetic resonance imaging (MRI) dataset containing subtle BMs that were not detected prospectively during routine clinical care. METHODS AND MATERIALS Patients receiving stereotactic radiosurgery (SRS) for BMs at our institution from 2016 to 2018 without prior brain-directed therapy or small cell histology were eligible. For patients who underwent two consecutive courses of SRS, treatment-planning MRIs from the initial course were reviewed for radiographic evidence of an emerging metastasis at the same location as metastases treated in the second SRS course. If present, these previously unidentified lesions were contoured and categorized as retrospectively identified metastases (RIMs). RIMs were further subcategorized according to whether they did (+DC) or did not (-DC) meet diagnostic imaging-based criteria to definitively classify them as metastases based on their appearance in the initial MRI alone. Prospectively identified metastases (PIMs) from these patients, and from patients who underwent only a single course of SRS, were also included. An open-source convolutional neural network architecture was adapted and trained to detect both RIMs and PIMs on thin-slice, contrast-enhanced, spoiled gradient echo MRIs. Patients were randomized into 5 groups: 4 for training/cross-validation and 1 for testing. RESULTS One hundred thirty-five patients with 563 metastases, including 72 RIMs, met criteria. For the test group, CAD sensitivity was 94% for PIMs, 80% for +DC RIMs, and 79% for PIMs and +DC RIMs with diameter <3 mm, with a median of 2 false positives per patient and a Dice coefficient of 0.79. CONCLUSIONS Our CAD model, trained on a novel dataset and using a single common MR sequence, demonstrated high sensitivity and specificity overall, outperforming published CAD results for small metastases and RIMs, the lesion types most in need of human performance augmentation.
27
Avesta A, Hossain S, Lin M, Aboian M, Krumholz HM, Aneja S. Comparing 3D, 2.5D, and 2D Approaches to Brain Image Auto-Segmentation. Bioengineering (Basel) 2023; 10:181. PMID: 36829675; PMCID: PMC9952534; DOI: 10.3390/bioengineering10020181.
Abstract
Deep-learning methods for auto-segmenting brain images either segment one slice of the image (2D), five consecutive slices of the image (2.5D), or an entire volume of the image (3D). Whether one approach is superior for auto-segmenting brain images is not known. We compared these three approaches (3D, 2.5D, and 2D) across three auto-segmentation models (capsule networks, UNets, and nnUNets) to segment brain structures. We used 3430 brain MRIs, acquired in a multi-institutional study, to train and test our models. We used the following performance metrics: segmentation accuracy, performance with limited training data, required computational memory, and computational speed during training and deployment. The 3D, 2.5D, and 2D approaches respectively gave the highest to lowest Dice scores across all models. 3D models maintained higher Dice scores when the training set size was decreased from 3199 MRIs down to 60 MRIs. 3D models converged 20% to 40% faster during training and were 30% to 50% faster during deployment. However, 3D models require 20 times more computational memory compared to 2.5D or 2D models. This study showed that 3D models are more accurate, maintain better performance with limited training data, and are faster to train and deploy. However, 3D models require more computational memory compared to 2.5D or 2D models.
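The three input regimes differ only in how much through-plane context the network sees. A minimal sketch of how 2D, 2.5D (five consecutive slices, as described above), and 3D inputs could be cut from one volume, assuming a toy (D, H, W) array:

```python
import numpy as np

def make_inputs(volume: np.ndarray, z: int, context: int = 2):
    """Build 2D, 2.5D, and 3D model inputs around slice index z.

    2D   -> a single axial slice, shape (H, W)
    2.5D -> a stack of 2*context+1 consecutive slices (5 for context=2)
    3D   -> the entire volume, shape (D, H, W)
    """
    sl_2d = volume[z]
    sl_25d = volume[z - context : z + context + 1]  # assumes z is not at an edge
    return sl_2d, sl_25d, volume

vol = np.zeros((32, 64, 64), dtype=np.float32)  # toy MRI volume, (D, H, W)
s2, s25, s3 = make_inputs(vol, z=16)
print(s2.shape, s25.shape, s3.shape)  # (64, 64) (5, 64, 64) (32, 64, 64)
```

The memory trade-off reported in the study follows directly from these shapes: a 3D model must hold the whole (D, H, W) block (plus feature maps) on the accelerator, while 2D and 2.5D models process a few slices at a time.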
Affiliation(s)
- Arman Avesta: Departments of Radiology and Biomedical Imaging and of Therapeutic Radiology, and Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, CT 06510, USA
- Sajid Hossain: Department of Therapeutic Radiology and Center for Outcomes Research and Evaluation, Yale School of Medicine, New Haven, CT 06510, USA
- MingDe Lin: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT 06510, USA; Visage Imaging, Inc., San Diego, CA 92130, USA
- Mariam Aboian: Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT 06510, USA
- Harlan M. Krumholz: Center for Outcomes Research and Evaluation and Division of Cardiovascular Medicine, Yale School of Medicine, New Haven, CT 06510, USA
- Sanjay Aneja (corresponding author; Tel.: +1-203-200-2100; Fax: +1-203-737-1467): Department of Therapeutic Radiology and Center for Outcomes Research and Evaluation, Yale School of Medicine, and Department of Biomedical Engineering, Yale University, New Haven, CT 06510, USA
28
Application of artificial intelligence to stereotactic radiosurgery for intracranial lesions: detection, segmentation, and outcome prediction. J Neurooncol 2023; 161:441-450. PMID: 36635582; DOI: 10.1007/s11060-022-04234-x.
Abstract
BACKGROUND The rapid evolution of artificial intelligence (AI) has prompted its wide application in healthcare systems. Stereotactic radiosurgery has served as a good candidate for AI model development and has achieved encouraging results in recent years. This article aims to demonstrate current AI applications in radiosurgery. METHODS Literature published in PubMed during 2010-2022 discussing AI applications in stereotactic radiosurgery was reviewed. RESULTS AI algorithms, especially machine learning/deep learning models, have been applied to different aspects of stereotactic radiosurgery. Automatic tumor detection and automated lesion delineation or segmentation are two of the promising applications, which could be further extended to longitudinal treatment follow-up. Outcome prediction using machine learning algorithms with radiomics-based analysis is another well-established application. CONCLUSIONS Stereotactic radiosurgery has taken a lead role in AI development. Current achievements, limitations, and avenues for further investigation are summarized in this article.
29
Li R, Guo Y, Zhao Z, Chen M, Liu X, Gong G, Wang L. MRI-based two-stage deep learning model for automatic detection and segmentation of brain metastases. Eur Radiol 2023; 33:3521-3531. PMID: 36695903; DOI: 10.1007/s00330-023-09420-7.
Abstract
OBJECTIVES To develop and validate a two-stage deep learning model for automatic detection and segmentation of brain metastases (BMs) in MRI images. METHODS In this retrospective study, T1-weighted (T1) and T1-weighted contrast-enhanced (T1ce) MRI images of 649 patients who underwent radiotherapy from August 2019 to January 2022 were included. A total of 5163 metastases were manually annotated by neuroradiologists. A two-stage deep learning model was developed for automatic detection and segmentation of BMs, which consisted of a lightweight segmentation network for generating metastases proposals and a multi-scale classification network for false-positive suppression. Its performance was evaluated by sensitivity, precision, F1-score, dice, and relative volume difference (RVD). RESULTS Six hundred forty-nine patients were randomly divided into training (n = 295), validation (n = 99), and testing (n = 255) sets. The proposed two-stage model achieved a sensitivity of 90% (1463/1632) and a precision of 56% (1463/2629) on the testing set, outperforming one-stage methods based on a single-shot detector, 3D U-Net, and nnU-Net, whose sensitivities were 78% (1276/1632), 79% (1290/1632), and 87% (1426/1632), and the precisions were 40% (1276/3222), 51% (1290/2507), and 53% (1426/2688), respectively. Particularly for BMs smaller than 5 mm, the proposed model achieved a sensitivity of 66% (116/177), far superior to one-stage models (21% (37/177), 36% (64/177), and 53% (93/177)). Furthermore, it also achieved high segmentation performance with an average dice of 81% and an average RVD of 20%. CONCLUSION A two-stage deep learning model can detect and segment BMs with high sensitivity and low volume error. KEY POINTS • A two-stage deep learning model based on triple-channel MRI images identified brain metastases with 90% sensitivity and 56% precision. • For brain metastases smaller than 5 mm, the proposed two-stage model achieved 66% sensitivity and 22% precision. 
• For segmentation of brain metastases, the proposed two-stage model achieved a dice of 81% and a relative volume difference (RVD) of 20%.
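For readers unfamiliar with the metrics quoted in this abstract, the sketch below shows how lesion-wise sensitivity, precision, and F1, plus mask-wise Dice and relative volume difference (RVD), are typically computed. The detection counts are taken from the abstract itself; the mask example is a toy illustration, not the paper's data.

```python
import numpy as np

def detection_metrics(tp, fp, fn):
    """Lesion-wise sensitivity, precision, and F1 from match counts."""
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, precision, f1

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def rvd(pred, gt):
    """Relative volume difference: |V_pred - V_gt| / V_gt."""
    return abs(pred.sum() - gt.sum()) / gt.sum()

# Counts reported for the two-stage model on the testing set:
# 1463 true positives out of 1632 metastases, 2629 proposals.
sens, prec, f1 = detection_metrics(tp=1463, fp=2629 - 1463, fn=1632 - 1463)
```

With these counts, `sens` reproduces the reported 90% and `prec` the reported 56%.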
Collapse
Affiliation(s)
- Ruikun Li
- Department of Automation, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
| | - Yujie Guo
- Shandong Cancer Hospital Affiliated to Shandong University, Jinan, 250117, China
| | - Zhongchen Zhao
- Department of Automation, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
| | - Mingming Chen
- Shandong Cancer Hospital Affiliated to Shandong University, Jinan, 250117, China
| | - Guanzhong Gong
- Shandong Cancer Hospital Affiliated to Shandong University, Jinan, 250117, China; Department of Engineering Physics, Tsinghua University, Beijing, 100084, China.
| | - Lisheng Wang
- Department of Automation, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China.
| |
Collapse
|
30
|
Ottesen JA, Yi D, Tong E, Iv M, Latysheva A, Saxhaug C, Jacobsen KD, Helland Å, Emblem KE, Rubin DL, Bjørnerud A, Zaharchuk G, Grøvik E. 2.5D and 3D segmentation of brain metastases with deep learning on multinational MRI data. Front Neuroinform 2023; 16:1056068. [PMID: 36743439 PMCID: PMC9889663 DOI: 10.3389/fninf.2022.1056068] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Accepted: 12/26/2022] [Indexed: 01/20/2023] Open
Abstract
Introduction Management of patients with brain metastases is often based on manual lesion detection and segmentation by an expert reader. This is a time- and labor-intensive process, and to that end, this work proposes an end-to-end deep learning segmentation network for a varying number of available MRI sequences. Methods We adapt and evaluate a 2.5D and a 3D convolutional neural network, trained and tested on a retrospective multinational study from two independent centers; in addition, nnU-Net was adapted as a comparative benchmark. Segmentation and detection performance were evaluated by: (1) the Dice similarity coefficient, (2) per-metastasis and average detection sensitivity, and (3) the number of false positives. Results The 2.5D and 3D models achieved similar results, albeit the 2.5D model had a better detection rate, whereas the 3D model had fewer false positive predictions, and nnU-Net had the fewest false positives but the lowest detection rate. On MRI data from center 1, the 2.5D, 3D, and nnU-Net models detected 79%, 71%, and 65% of all metastases; had an average per-patient sensitivity of 0.88, 0.84, and 0.76; and had on average 6.2, 3.2, and 1.7 false positive predictions per patient, respectively. For center 2, the 2.5D, 3D, and nnU-Net models detected 88%, 86%, and 78% of all metastases; had an average per-patient sensitivity of 0.92, 0.91, and 0.85; and had on average 1.0, 0.4, and 0.1 false positive predictions per patient, respectively. Discussion/Conclusion Our results show that deep learning can yield highly accurate segmentations of brain metastases with few false positives in multinational data, but the accuracy degrades for metastases with an area smaller than 0.4 cm2.
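A 2.5D model like the one described feeds the network a target slice together with neighboring slices stacked as channels. As a minimal sketch (the abstract does not specify the edge-handling policy, so clamping at volume boundaries is an assumption here):

```python
import numpy as np

def make_25d_input(volume, index, context=1):
    """Build a 2.5D network input: the target slice plus `context`
    neighboring slices on each side, stacked as channels.
    Edge indices are clamped so the channel count stays constant."""
    depth = volume.shape[0]
    idxs = np.clip(np.arange(index - context, index + context + 1), 0, depth - 1)
    return volume[idxs]  # shape: (2*context + 1, H, W)

vol = np.random.rand(10, 64, 64)  # toy volume: 10 slices of 64x64
x = make_25d_input(vol, index=0, context=2)
```

The per-slice predictions are then restacked into a volume, which is why 2.5D models can trade some 3D context for finer in-plane resolution.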
Collapse
Affiliation(s)
- Jon André Ottesen
- CRAI, Division of Radiology and Nuclear Medicine, Department of Physics and Computational Radiology, Oslo University Hospital, Oslo, Norway; Department of Physics, Faculty of Mathematics and Natural Sciences, University of Oslo, Oslo, Norway. Correspondence: Jon André Ottesen
| | - Darvin Yi
- Department of Ophthalmology, University of Illinois, Chicago, IL, United States
| | - Elizabeth Tong
- Department of Radiology, Stanford University, Stanford, CA, United States
| | - Michael Iv
- Department of Radiology, Stanford University, Stanford, CA, United States
| | - Anna Latysheva
- Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
| | - Cathrine Saxhaug
- Division of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
| | - Åslaug Helland
- Department of Oncology, Oslo University Hospital, Oslo, Norway
| | - Kyrre Eeg Emblem
- Division of Radiology and Nuclear Medicine, Department of Physics and Computational Radiology, Oslo University Hospital, Oslo, Norway
| | - Daniel L. Rubin
- Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
| | - Atle Bjørnerud
- CRAI, Division of Radiology and Nuclear Medicine, Department of Physics and Computational Radiology, Oslo University Hospital, Oslo, Norway; Department of Physics, Faculty of Mathematics and Natural Sciences, University of Oslo, Oslo, Norway
| | - Greg Zaharchuk
- Department of Radiology, Stanford University, Stanford, CA, United States
| | - Endre Grøvik
- Department of Radiology, Ålesund Hospital, Møre og Romsdal Hospital Trust, Ålesund, Norway; Department of Physics, Norwegian University of Science and Technology, Trondheim, Norway
| |
Collapse
|
31
|
Deep Learning for Detecting Brain Metastases on MRI: A Systematic Review and Meta-Analysis. Cancers (Basel) 2023; 15:cancers15020334. [PMID: 36672286 PMCID: PMC9857123 DOI: 10.3390/cancers15020334] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2022] [Revised: 12/31/2022] [Accepted: 12/31/2022] [Indexed: 01/06/2023] Open
Abstract
Since manual detection of brain metastases (BMs) is time consuming, studies have been conducted to automate this process using deep learning. The purpose of this study was to conduct a systematic review and meta-analysis of the performance of deep learning models that use magnetic resonance imaging (MRI) to detect BMs in cancer patients. A systematic search of MEDLINE, EMBASE, and Web of Science was conducted until 30 September 2022. Inclusion criteria were: patients with BMs; deep learning using MRI images was applied to detect the BMs; sufficient data were present in terms of detection performance; original research articles. Exclusion criteria were: reviews, letters, guidelines, editorials, or errata; case reports or series with fewer than 20 patients; studies with overlapping cohorts; insufficient data in terms of detection performance; machine learning was used to detect BMs; articles not written in English. Quality Assessment of Diagnostic Accuracy Studies-2 and the Checklist for Artificial Intelligence in Medical Imaging were used to assess the quality. Finally, 24 eligible studies were identified for the quantitative analysis. The pooled proportion of patient-wise and lesion-wise detectability was 89%. Articles should adhere to the checklists more strictly. Deep learning algorithms effectively detect BMs. Pooled analysis of false positive rates could not be estimated due to reporting differences.
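The pooled proportion quoted above aggregates per-study detection rates. The simplest (fixed-effect) estimator is total detected lesions over total lesions; published meta-analyses usually go further, fitting random-effects models on transformed proportions. The counts below are placeholders, not data from the 24 included studies:

```python
def pooled_proportion(events, totals):
    """Naive fixed-effect pooled proportion: total events over total N.
    A stand-in for the more elaborate pooling a meta-analysis would use."""
    return sum(events) / sum(totals)

# Placeholder per-study counts of detected / total metastases
events = [90, 170, 45]
totals = [100, 200, 50]
p = pooled_proportion(events, totals)
```

Weighting by study size in this way prevents a small study with an extreme rate from dominating the pooled estimate.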
Collapse
|
32
|
Buchner JA, Kofler F, Etzel L, Mayinger M, Christ SM, Brunner TB, Wittig A, Menze B, Zimmer C, Meyer B, Guckenberger M, Andratschke N, El Shafie RA, Debus J, Rogers S, Riesterer O, Schulze K, Feldmann HJ, Blanck O, Zamboglou C, Ferentinos K, Wolff R, Eitz KA, Combs SE, Bernhardt D, Wiestler B, Peeken JC. Development and external validation of an MRI-based neural network for brain metastasis segmentation in the AURORA multicenter study. Radiother Oncol 2023; 178:109425. [PMID: 36442609 DOI: 10.1016/j.radonc.2022.11.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2022] [Revised: 11/17/2022] [Accepted: 11/18/2022] [Indexed: 11/27/2022]
Abstract
BACKGROUND Stereotactic radiotherapy is a standard treatment option for patients with brain metastases. The planning target volume is based on gross tumor volume (GTV) segmentation. The aim of this work is to develop and validate a neural network for automatic GTV segmentation to accelerate clinical daily routine practice and minimize interobserver variability. METHODS We analyzed MRIs (T1-weighted sequence ± contrast-enhancement, T2-weighted sequence, and FLAIR sequence) from 348 patients with at least one brain metastasis from different cancer primaries treated in six centers. To generate reference segmentations, all GTVs and the FLAIR hyperintense edematous regions were segmented manually. A 3D-U-Net was trained on a cohort of 260 patients from two centers to segment the GTV and the surrounding FLAIR hyperintense region. During training varying degrees of data augmentation were applied. Model validation was performed using an independent international multicenter test cohort (n = 88) including four centers. RESULTS Our proposed U-Net reached a mean overall Dice similarity coefficient (DSC) of 0.92 ± 0.08 and a mean individual metastasis-wise DSC of 0.89 ± 0.11 in the external test cohort for GTV segmentation. Data augmentation improved the segmentation performance significantly. Detection of brain metastases was effective with a mean F1-Score of 0.93 ± 0.16. The model performance was stable independent of the center (p = 0.3). There was no correlation between metastasis volume and DSC (Pearson correlation coefficient 0.07). CONCLUSION Reliable automated segmentation of brain metastases with neural networks is possible and may support radiotherapy planning by providing more objective GTV definitions.
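The abstract notes that data augmentation significantly improved segmentation performance. The paper's actual augmentation pipeline is not given here; as a hedged illustration, one of the simplest augmentations for paired image/mask volumes is random flipping applied identically to both:

```python
import numpy as np

def augment(image, mask, rng):
    """Random flips along each spatial axis, applied identically to
    image and mask so the voxel-wise correspondence is preserved."""
    for axis in range(image.ndim):
        if rng.random() < 0.5:
            image = np.flip(image, axis=axis)
            mask = np.flip(mask, axis=axis)
    return image, mask

rng = np.random.default_rng(0)
img = np.arange(8).reshape(2, 2, 2)
msk = (img > 3).astype(np.uint8)
aug_img, aug_msk = augment(img, msk, rng)
```

Because the same flips are applied to both arrays, the mask still labels exactly the same anatomy in the augmented image.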
Collapse
Affiliation(s)
- Josef A Buchner
- Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany.
| | - Florian Kofler
- Department of Informatics, Technical University of Munich, Munich, Germany; Department of Diagnostic and Interventional Neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Munich, Germany; Helmholtz AI, Helmholtz Zentrum Munich, Munich, Germany
| | - Lucas Etzel
- Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Deutsches Konsortium für Translationale Krebsforschung (DKTK), Partner Site Munich, Munich, Germany
| | - Michael Mayinger
- Department of Radiation Oncology, University Hospital of Zurich, University of Zurich, Zurich, Switzerland
| | - Sebastian M Christ
- Department of Radiation Oncology, University Hospital of Zurich, University of Zurich, Zurich, Switzerland
| | - Thomas B Brunner
- Department of Radiation Oncology, University Hospital Magdeburg, Magdeburg, Germany
| | - Andrea Wittig
- Department of Radiotherapy and Radiation Oncology, University Hospital Jena, Friedrich-Schiller University, Jena, Germany
| | - Björn Menze
- Department of Informatics, Technical University of Munich, Munich, Germany
| | - Claus Zimmer
- Department of Diagnostic and Interventional Neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
| | - Bernhard Meyer
- Department of Neurosurgery, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
| | - Matthias Guckenberger
- Department of Radiation Oncology, University Hospital of Zurich, University of Zurich, Zurich, Switzerland
| | - Nicolaus Andratschke
- Department of Radiation Oncology, University Hospital of Zurich, University of Zurich, Zurich, Switzerland
| | - Rami A El Shafie
- Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany; Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Oncology (NCRO), Heidelberg, Germany; Department of Radiation Oncology, University Medical Center Göttingen, Göttingen, Germany
| | - Jürgen Debus
- Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany; Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Oncology (NCRO), Heidelberg, Germany
| | - Susanne Rogers
- Radiation Oncology Center KSA-KSB, Kantonsspital Aarau, Aarau, Switzerland
| | - Oliver Riesterer
- Radiation Oncology Center KSA-KSB, Kantonsspital Aarau, Aarau, Switzerland
| | - Katrin Schulze
- Department of Radiation Oncology, General Hospital Fulda, Fulda, Germany
| | - Horst J Feldmann
- Department of Radiation Oncology, General Hospital Fulda, Fulda, Germany
| | - Oliver Blanck
- Department of Radiation Oncology, University Medical Center Schleswig Holstein, Kiel, Germany
| | - Constantinos Zamboglou
- Department of Radiation Oncology, University of Freiburg - Medical Center, Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Freiburg, Germany; Department of Radiation Oncology, German Oncology Center, European University of Cyprus, Limassol, Cyprus
| | - Konstantinos Ferentinos
- Department of Radiation Oncology, German Oncology Center, European University of Cyprus, Limassol, Cyprus
| | - Robert Wolff
- Saphir Radiosurgery Center Frankfurt and Northern Germany, Guestrow, Germany; Department of Neurosurgery, University Hospital Frankfurt, Frankfurt, Germany
| | - Kerstin A Eitz
- Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Deutsches Konsortium für Translationale Krebsforschung (DKTK), Partner Site Munich, Munich, Germany; Institute of Radiation Medicine (IRM), Department of Radiation Sciences (DRS), Helmholtz Center Munich, Munich, Germany
| | - Stephanie E Combs
- Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Deutsches Konsortium für Translationale Krebsforschung (DKTK), Partner Site Munich, Munich, Germany; Institute of Radiation Medicine (IRM), Department of Radiation Sciences (DRS), Helmholtz Center Munich, Munich, Germany
| | - Denise Bernhardt
- Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Deutsches Konsortium für Translationale Krebsforschung (DKTK), Partner Site Munich, Munich, Germany
| | - Benedikt Wiestler
- Department of Diagnostic and Interventional Neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Munich, Germany
| | - Jan C Peeken
- Department of Radiation Oncology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Deutsches Konsortium für Translationale Krebsforschung (DKTK), Partner Site Munich, Munich, Germany; Institute of Radiation Medicine (IRM), Department of Radiation Sciences (DRS), Helmholtz Center Munich, Munich, Germany
| |
Collapse
|
33
|
Liu X, Wang R, Zhu Z, Wang K, Gao Y, Li J, Zhang Y, Wang X, Zhang X, Wang X. Automatic segmentation of hepatic metastases on DWI images based on a deep learning method: assessment of tumor treatment response according to the RECIST 1.1 criteria. BMC Cancer 2022; 22:1285. [PMID: 36476181 PMCID: PMC9730687 DOI: 10.1186/s12885-022-10366-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Accepted: 11/24/2022] [Indexed: 12/12/2022] Open
Abstract
BACKGROUND Evaluation of treated tumors according to Response Evaluation Criteria in Solid Tumors (RECIST) criteria is an important but time-consuming task in medical imaging. Deep learning methods are expected to automate the evaluation process and improve the efficiency of imaging interpretation. OBJECTIVE To develop an automated algorithm for segmentation of liver metastases based on a deep learning method and assess its efficacy for treatment response assessment according to the RECIST 1.1 criteria. METHODS One hundred and sixteen treated patients with clinically confirmed liver metastases were enrolled. All patients had baseline and post-treatment MR images. They were divided into an initial (n = 86) and validation cohort (n = 30) according to the time of examination. The metastatic foci on DWI images were annotated by two researchers in consensus. Then the treatment responses were assessed by the two researchers according to RECIST 1.1 criteria. A 3D U-Net algorithm was trained for automated liver metastases segmentation using the initial cohort. Based on the segmentation of liver metastases, the treatment response was assessed automatically with a rule-based program according to the RECIST 1.1 criteria. The segmentation performance was evaluated using the Dice similarity coefficient (DSC), volumetric similarity (VS), and Hausdorff distance (HD). The area under the curve (AUC) and Kappa statistics were used to assess the accuracy and consistency of the treatment response assessment by the deep learning model and compared with two radiologists [attending radiologist (R1) and fellow radiologist (R2)] in the validation cohort. RESULTS In the validation cohort, the mean DSC, VS, and HD were 0.85 ± 0.08, 0.89 ± 0.09, and 25.53 ± 12.11 mm for the liver metastases segmentation. The accuracies of R1, R2 and automated segmentation-based assessment were 0.77, 0.65, and 0.74, respectively, and the AUC values were 0.81, 0.73, and 0.83, respectively.
The consistency of treatment response assessment based on automated segmentation and manual annotation was moderate [K value: 0.60 (0.34-0.84)]. CONCLUSION The deep learning-based liver metastases segmentation was capable of evaluating treatment response according to RECIST 1.1 criteria, with results comparable to those of the attending radiologist (R1) and superior to those of the fellow radiologist (R2).
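The rule-based RECIST program mentioned in the abstract is not reproduced here. As a simplified sketch of target-lesion response classification from sums of lesion diameters (ignoring nadir tracking and non-target lesions, which RECIST 1.1 also requires):

```python
def recist_response(baseline_sum_mm, current_sum_mm, new_lesions=False):
    """Simplified RECIST 1.1 target-lesion response from sums of
    diameters (mm). Real assessments compare PD against the nadir,
    not the baseline, and also track non-target lesions."""
    if new_lesions:
        return "PD"  # any new lesion means progressive disease
    if current_sum_mm == 0:
        return "CR"  # complete response: all target lesions gone
    change = (current_sum_mm - baseline_sum_mm) / baseline_sum_mm
    if change <= -0.30:
        return "PR"  # partial response: >= 30% decrease
    if change >= 0.20 and (current_sum_mm - baseline_sum_mm) >= 5:
        return "PD"  # >= 20% relative and >= 5 mm absolute increase
    return "SD"      # stable disease otherwise
```

Feeding this rule the diameters derived from automated segmentations is essentially what the abstract's "rule-based program" does.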
Collapse
Affiliation(s)
- Xiang Liu
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
| | - Rui Wang
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
| | - Zemin Zhu
- Department of Hepatobiliary and Pancreatic Surgery, Zhuzhou Central Hospital, Zhuzhou, 412000, China
| | - Kexin Wang
- School of Basic Medical Sciences, Capital Medical University, Beijing, 100069, China
| | - Yue Gao
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
| | - Jialun Li
- Beijing Smart Tree Medical Technology Co. Ltd., Beijing, 100011, China
| | - Yaofeng Zhang
- Beijing Smart Tree Medical Technology Co. Ltd., Beijing, 100011, China
| | - Xiangpeng Wang
- Beijing Smart Tree Medical Technology Co. Ltd., Beijing, 100011, China
| | - Xiaodong Zhang
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
| | - Xiaoying Wang
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China.
| |
Collapse
|
34
|
Liang Y, Lee K, Bovi JA, Palmer JD, Brown PD, Gondi V, Tomé WA, Benzinger TLS, Mehta MP, Li XA. Deep Learning-Based Automatic Detection of Brain Metastases in Heterogenous Multi-Institutional Magnetic Resonance Imaging Sets: An Exploratory Analysis of NRG-CC001. Int J Radiat Oncol Biol Phys 2022; 114:529-536. [PMID: 35787927 PMCID: PMC9641965 DOI: 10.1016/j.ijrobp.2022.06.081] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Revised: 06/09/2022] [Accepted: 06/21/2022] [Indexed: 10/31/2022]
Abstract
PURPOSE Deep learning-based algorithms have been shown to be able to automatically detect and segment brain metastases (BMs) in magnetic resonance imaging, mostly based on single-institutional data sets. This work aimed to investigate the use of deep convolutional neural networks (DCNN) for BM detection and segmentation on a highly heterogeneous multi-institutional data set. METHODS AND MATERIALS A total of 407 patients from 98 institutions were randomly split into 326 patients from 78 institutions for training/validation and 81 patients from 20 institutions for unbiased testing. The data set contained T1-weighted gadolinium and T2-weighted fluid-attenuated inversion recovery magnetic resonance imaging acquired on diverse scanners using different pulse sequences and various acquisition parameters. Several variants of 3-dimensional U-Net based DCNN models were trained and tuned using 5-fold cross validation on the training set. Performances of different models were compared based on Dice similarity coefficient for segmentation and sensitivity and false positive rate (FPR) for detection. The best performing model was evaluated on the test set. RESULTS A DCNN with an input size of 64 × 64 × 64 and an equal number of 128 kernels for all convolutional layers using instance normalization was identified as the best performing model (Dice similarity coefficient 0.73, sensitivity 0.86, and FPR 1.9) in the 5-fold cross validation experiments. The best performing model demonstrated consistent behavior on the test set (Dice similarity coefficient 0.73, sensitivity 0.91, and FPR 1.7) and successfully detected 7 BMs (out of 327) that were missed during manual delineation. For large BMs with diameters greater than 12 mm, the sensitivity and FPR improved to 0.98 and 0.3, respectively. 
CONCLUSIONS The DCNN model developed can automatically detect and segment brain metastases with reasonable accuracy, high sensitivity, and low FPR on a multi-institutional data set with nonprespecified and highly variable magnetic resonance imaging sequences. For large BMs, the model achieved clinically relevant results. The model is robust and may be potentially used in real-world situations.
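Size strata like the ">12 mm diameter" group above are commonly derived from a segmented lesion's voxel count via an equivalent-sphere diameter; the conversion below is a generic sketch, not necessarily the exact definition this study used:

```python
import numpy as np

def equivalent_diameter_mm(voxel_count, spacing_mm=(1.0, 1.0, 1.0)):
    """Diameter of a sphere with the same volume as the lesion,
    given the voxel count and voxel spacing in mm."""
    volume = voxel_count * float(np.prod(spacing_mm))
    return 2.0 * (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)
```

For example, a lesion whose volume equals that of a 6 mm-radius sphere maps back to a 12 mm equivalent diameter.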
Collapse
Affiliation(s)
- Ying Liang
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin
| | - Karen Lee
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin
| | - Joseph A Bovi
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin
| | - Joshua D Palmer
- Department of Radiation Oncology, The James Cancer Hospital and Solove Research Institute at the Ohio State University, Columbus, Ohio
| | - Paul D Brown
- Department of Radiation Oncology, Mayo Clinic, Rochester, Minnesota
| | - Vinai Gondi
- Department of Radiation Oncology, Northwestern Medicine Cancer Center and Proton Center, Warrenville, Illinois
| | - Wolfgang A Tomé
- Department of Radiation Oncology, Montefiore Medical Center and Albert Einstein College of Medicine, Bronx, New York
| | - Tammie L S Benzinger
- Department of Radiology, Washington University School of Medicine, St Louis, Missouri
| | - X Allen Li
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin.
| |
Collapse
|
35
|
Advancing Brain Metastases Detection in T1-Weighted Contrast-Enhanced 3D MRI Using Noisy Student-Based Training. Diagnostics (Basel) 2022; 12:diagnostics12082023. [PMID: 36010373 PMCID: PMC9407228 DOI: 10.3390/diagnostics12082023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2022] [Revised: 08/17/2022] [Accepted: 08/19/2022] [Indexed: 11/17/2022] Open
Abstract
The detection of brain metastases (BM) in their early stages could have a positive impact on the outcome of cancer patients. The authors previously developed a framework for detecting small BM (with diameters of <15 mm) in T1-weighted contrast-enhanced 3D magnetic resonance images (T1c). This study aimed to advance the framework with a noisy-student-based self-training strategy to use a large corpus of unlabeled T1c data. Accordingly, a sensitivity-based noisy-student learning approach was formulated to provide high BM detection sensitivity with a reduced count of false positives. This paper (1) proposes student/teacher convolutional neural network architectures, (2) presents data and model noising mechanisms, and (3) introduces a novel pseudo-labeling strategy factoring in the sensitivity constraint. The evaluation was performed using 217 labeled and 1247 unlabeled exams via two-fold cross-validation. The framework utilizing only the labeled exams produced 9.23 false positives for 90% BM detection sensitivity, whereas the one using the introduced learning strategy led to ~9% reduction in false detections (i.e., 8.44). Significant reductions in false positives (>10%) were also observed in reduced labeled data scenarios (using 50% and 75% of labeled data). The results suggest that the introduced strategy could be utilized in existing medical detection applications with access to unlabeled datasets to elevate their performances.
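The paper's sensitivity-constrained pseudo-labeling is more involved than can be shown here, but the core idea — pick the highest decision threshold that still meets a target sensitivity on labeled data, then pseudo-label unlabeled exams with it — can be sketched as follows (the threshold rule is an assumption for illustration):

```python
import numpy as np

def threshold_for_sensitivity(scores, labels, target=0.90):
    """Highest score threshold whose sensitivity on the labeled set
    still meets `target`: keep the top ceil(target * n) positives."""
    pos = np.sort(scores[labels == 1])
    k = int(np.ceil(target * len(pos)))
    return pos[len(pos) - k]

# Toy labeled validation scores and ground truth
scores = np.array([0.1, 0.4, 0.6, 0.8, 0.9, 0.95])
labels = np.array([0, 0, 1, 1, 1, 1])
thr = threshold_for_sensitivity(scores, labels, target=0.75)

# Pseudo-label unlabeled exams with the calibrated threshold
pseudo = (np.array([0.5, 0.85]) >= thr).astype(int)
```

A higher `target` lowers the threshold, admitting more (noisier) pseudo-positives for the student to learn from — the sensitivity/false-positive trade-off the abstract describes.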
Collapse
|
36
|
Huang Y, Bert C, Sommer P, Frey B, Gaipl U, Distel LV, Weissmann T, Uder M, Schmidt MA, Dörfler A, Maier A, Fietkau R, Putz F. Deep learning for brain metastasis detection and segmentation in longitudinal MRI data. Med Phys 2022; 49:5773-5786. [PMID: 35833351 DOI: 10.1002/mp.15863] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Revised: 06/22/2022] [Accepted: 06/28/2022] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Brain metastases occur frequently in patients with metastatic cancer. Early and accurate detection of brain metastases is essential for treatment planning and prognosis in radiation therapy. Due to their tiny sizes and relatively low contrast, small brain metastases are very difficult to detect manually. With the recent development of deep learning technologies, several researchers have reported promising results in automated brain metastasis detection. However, the detection sensitivity is still not high enough for tiny brain metastases, and integration into clinical practice in regard to differentiating true metastases from false positives is challenging. METHODS The DeepMedic network with the binary cross-entropy (BCE) loss is used as our baseline method. To improve brain metastasis detection performance, a custom detection loss called volume-level sensitivity-specificity (VSS) is proposed, which rates metastasis detection sensitivity and specificity at a (sub-)volume level. As sensitivity and precision are always a trade-off, either a high sensitivity or a high precision can be achieved for brain metastasis detection by adjusting the weights in the VSS loss without decline in dice score coefficient for segmented metastases. To reduce metastasis-like structures being detected as false positive metastases, a temporal prior volume is proposed as an additional input of DeepMedic. The modified network is called DeepMedic+ for distinction. Combining a high sensitivity VSS loss and a high specificity loss for DeepMedic+, the majority of true positive metastases are confirmed with high specificity, while additional metastases candidates in each patient are marked with high sensitivity for detailed expert evaluation. RESULTS Our proposed VSS loss improves the sensitivity of brain metastasis detection, increasing the sensitivity from 85.3% for DeepMedic with BCE to 97.5% for DeepMedic with VSS. 
Alternatively, the precision is improved from 69.1% for DeepMedic with BCE to 98.7% for DeepMedic with VSS. Comparing DeepMedic+ with DeepMedic with the same VSS loss, 44.4% of the false positive metastases are reduced in the high sensitivity model and the precision reaches 99.6% for the high specificity model. The mean dice coefficient for all metastases is about 0.81. With the ensemble of the high sensitivity and high specificity models, on average only 1.5 false positive metastases per patient need further check, while the majority of true positive metastases are confirmed. CONCLUSIONS Our proposed VSS loss and temporal prior improve brain metastasis detection sensitivity and precision. The ensemble learning is able to distinguish high confidence true positive metastases from metastases candidates that require special expert review or further follow-up, being particularly well-fit to the requirements of expert support in real clinical practice. This facilitates metastasis detection and segmentation for neuroradiologists in diagnostic and radiation oncologists in therapeutic clinical applications.
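The exact VSS formulation belongs to the paper; a minimal numpy sketch of a sensitivity-specificity loss in the same spirit, where the weight `w_sens` is the knob the abstract says trades sensitivity against specificity, might look like this (soft counts from predicted probabilities are an assumption here):

```python
import numpy as np

def vss_loss(pred, gt, w_sens=0.5, eps=1e-6):
    """Sketch of a sensitivity-specificity loss: a weighted sum of
    (1 - soft sensitivity) and (1 - soft specificity). `pred` holds
    probabilities in [0, 1]; `gt` is the binary ground truth."""
    tp = (pred * gt).sum()
    fn = ((1 - pred) * gt).sum()
    tn = ((1 - pred) * (1 - gt)).sum()
    fp = (pred * (1 - gt)).sum()
    sens = tp / (tp + fn + eps)
    spec = tn / (tn + fp + eps)
    return w_sens * (1 - sens) + (1 - w_sens) * (1 - spec)
```

Raising `w_sens` penalizes missed voxels more than false alarms, yielding the high-sensitivity variant; lowering it yields the high-specificity variant used for confirmation in the ensemble.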
Collapse
Affiliation(s)
- Yixing Huang
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
| | - Christoph Bert
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
| | - Philipp Sommer
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
| | - Benjamin Frey
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
| | - Udo Gaipl
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
| | - Luitpold V Distel
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
| | - Thomas Weissmann
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
| | - Michael Uder
- Institute of Radiology, Universitätsklinikum Erlangen, FAU, Erlangen, Germany
| | - Manuel A Schmidt
- Department of Neuroradiology, Universitätsklinikum Erlangen, FAU, Erlangen, Germany
| | - Arnd Dörfler
- Department of Neuroradiology, Universitätsklinikum Erlangen, FAU, Erlangen, Germany
| | - Rainer Fietkau
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
| | - Florian Putz
- Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany; Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
| |
Collapse
|
37
|
Marshall C, Thirion P, Mihai A, Armstrong JG, Cournane S, Hickey D, McClean B, Quinn J. Interobserver variability of Gross Tumour Volume delineation for colorectal liver metastases using CT and MRI. Adv Radiat Oncol 2022; 8:101020. [PMID: 36176355 PMCID: PMC9513217 DOI: 10.1016/j.adro.2022.101020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Accepted: 06/28/2022] [Indexed: 11/30/2022] Open
Abstract
Purpose The purpose of this study was to evaluate the interobserver variability in the contouring of the gross tumor volume (GTV) on magnetic resonance (MR) imaging and computed tomography (CT) for colorectal liver metastases in the setting of SABR. Methods and Materials Three expert radiation oncologists contoured 10 GTV volumes on 3 MR imaging sequences and on the CT image data set. Three metrics were chosen to evaluate the interobserver variability: the conformity index, the DICE coefficient, and the maximum Hausdorff distance (HDmax). Statistical analysis of the results was performed using a 1-sided permutation test. Results For all 3 metrics, the MR liver acquisition with volume acceleration (MR LAVA) sequence showed the lowest interobserver variability. Analysis showed a significant difference (P < .01) in the mean DICE, an overlap metric, for MR LAVA (0.82) and CT (0.74). The HDmax, which highlights boundary errors, also showed a significant difference (P = .04) between MR LAVA (7.2 mm) and CT (5.7 mm). Both MR single shot fast spin echo (SSFSE) (mean HDmax 19.3 mm) and diffusion weighted imaging (9.5 mm) showed large interobserver variability. A volume comparison between MR LAVA and CT showed a significantly higher volume for small GTVs (<5 cm3) when using MR LAVA for contouring in comparison to CT. Conclusions This study reported the lowest interobserver variability for the MR LAVA, thus indicating the benefit of using MR to complement CT when contouring GTV for colorectal liver metastases.
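The maximum Hausdorff distance (HDmax) used above measures the worst boundary disagreement between two delineations. A brute-force sketch on contour point sets (in practice one would use an optimized routine such as `scipy.spatial.distance.directed_hausdorff`):

```python
import numpy as np

def hausdorff_max(a, b):
    """Symmetric maximum Hausdorff distance between two point sets
    (e.g. contour points in mm): the largest distance from any point
    in one set to its nearest neighbor in the other."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy 2D contours: identical except one outlying point 3 mm away
a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [4.0, 0.0]])
```

Unlike overlap metrics such as DICE, HDmax is driven entirely by the single worst point, which is why the abstract uses it to highlight boundary errors.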
Affiliation(s)
- Cora Marshall (corresponding author): School of Physics, University College Dublin, Dublin, Ireland; Beacon Hospital, Dublin, Ireland
- Pierre Thirion: Beacon Hospital, Dublin, Ireland; St Luke's Radiation Oncology Network, Dublin, Ireland
- Seán Cournane: School of Physics, University College Dublin, Dublin, Ireland
- Brendan McClean: School of Physics, University College Dublin, Dublin, Ireland; St Luke's Radiation Oncology Network, Dublin, Ireland
- John Quinn: School of Physics, University College Dublin, Dublin, Ireland

38
Deep-Learning-Based Automatic Detection and Segmentation of Brain Metastases with Small Volume for Stereotactic Ablative Radiotherapy. Cancers (Basel) 2022; 14:cancers14102555. [PMID: 35626158 PMCID: PMC9139632 DOI: 10.3390/cancers14102555] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2022] [Revised: 05/11/2022] [Accepted: 05/18/2022] [Indexed: 02/01/2023] Open
Abstract
Simple Summary With advances in radiotherapy (RT) technique and more frequent use of stereotactic ablative radiotherapy (SABR), precise segmentation of all brain metastases (BM), including those of small volume, is essential to choose an appropriate treatment modality. However, detecting and manually delineating small-volume BM often results in missed delineations and requires a great amount of labor. To address this issue, we present a useful deep learning (DL) model for the detection and segmentation of BM on contrast-enhanced magnetic resonance images. Specifically, we applied effective training techniques to detect and segment BM of less than 0.04 cc, which is relatively small compared with previous studies. The results of our DL model demonstrate that the proposed methods provide considerable benefit for the detection and segmentation of BM, even small-volume BM, for SABR. Abstract Recently, several efforts have been made to develop deep learning (DL) algorithms for automatic detection and segmentation of brain metastases (BM). In this study, we developed an advanced DL model for BM detection and segmentation, especially for small-volume BM. From the institutional cancer registry, contrast-enhanced magnetic resonance images of 65 patients with 603 BM were collected to train and evaluate our DL model. Of the 65 patients, 12 patients with 58 BM were assigned to the test set for performance evaluation. Ground truth was established by one radiation oncologist who manually delineated the BM and a second who cross-checked the delineations. Unlike previous studies, our study dealt with relatively small BM, so the area occupied by each BM in the high-resolution images was small. We therefore applied training techniques such as an overlapping patch technique and 2.5-dimensional (2.5D) training to the well-known 2D U-Net architecture to better learn small BM. The sensitivity and average false-positive rate, measured as detection performance, were 97% and 1.25 per patient, respectively. The Dice coefficient with dilation and the 95% Hausdorff distance, measured as segmentation performance, were 75% and 2.057 mm, respectively. Our DL model can detect and segment small-volume BM with good performance, providing considerable benefit for clinicians through automatic detection and segmentation of BM for stereotactic ablative radiotherapy.
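The 2.5D overlapping-patch idea described in this abstract (a 2D network fed with a small stack of neighbouring slices, sampled with overlapping windows) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation, and the patch, stride, and context sizes are arbitrary:

```python
def patches_2_5d(volume, patch=4, stride=2, context=1):
    """Extract overlapping 2.5D samples from a 3D volume (nested lists,
    indexed [z][y][x]). Each sample is a stack of 2*context+1 adjacent
    slices cropped to a patch x patch window; stride < patch => overlap."""
    depth, height, width = len(volume), len(volume[0]), len(volume[0][0])
    samples = []
    for z in range(context, depth - context):
        slab = volume[z - context : z + context + 1]
        for y in range(0, height - patch + 1, stride):
            for x in range(0, width - patch + 1, stride):
                samples.append(
                    [[row[x : x + patch] for row in sl[y : y + patch]] for sl in slab]
                )
    return samples

vol = [[[0] * 8 for _ in range(8)] for _ in range(5)]  # tiny 5x8x8 dummy volume
samples = patches_2_5d(vol)
print(len(samples))  # 27 samples: 3 central slices x 3x3 window positions
```

The overlap increases the number of training samples containing a small lesion, which is the stated motivation for the technique.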
39
Kikuchi Y, Togao O, Kikuchi K, Momosaka D, Obara M, Van Cauteren M, Fischer A, Ishigami K, Hiwatashi A. A deep convolutional neural network-based automatic detection of brain metastases with and without blood vessel suppression. Eur Radiol 2022; 32:2998-3005. [PMID: 34993572 DOI: 10.1007/s00330-021-08427-2] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2021] [Revised: 10/12/2021] [Accepted: 10/18/2021] [Indexed: 11/26/2022]
Abstract
OBJECTIVES To develop an automated model to detect brain metastases using a convolutional neural network (CNN) and volume isotropic simultaneous interleaved bright-blood and black-blood examination (VISIBLE) and to compare its diagnostic performance with the observer test. METHODS This retrospective study included patients with clinical suspicion of brain metastases imaged with VISIBLE from March 2016 to July 2019 to create a model. Images with and without blood vessel suppression were used for training an existing CNN (DeepMedic). Diagnostic performance was evaluated using sensitivity and false-positive results per case (FPs/case). We compared the diagnostic performance of the CNN model with that of the twelve radiologists. RESULTS Fifty patients (30 males and 20 females; age range 29-86 years; mean 63.3 ± 12.8 years; a total of 165 metastases) who were clinically diagnosed with brain metastasis on follow-up were used for the training. The sensitivity of our model was 91.7%, which was higher than that of the observer test (mean ± standard deviation; 88.7 ± 3.7%). The number of FPs/case in our model was 1.5, which was greater than that by the observer test (0.17 ± 0.09). CONCLUSIONS Compared to radiologists, our model created by VISIBLE and CNN to diagnose brain metastases showed higher sensitivity. The number of FPs/case by our model was greater than that by the observer test of radiologists; however, it was less than that in most of the previous studies with deep learning. KEY POINTS • Our convolutional neural network based on bright-blood and black-blood examination to diagnose brain metastases showed a higher sensitivity than that by the observer test. • The number of false-positives/case by our model was greater than that by the previous observer test; however, it was less than those from most previous studies. • In our model, false-positives were found in the vessels, choroid plexus, and image noise or unknown causes.
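Detection sensitivity and false positives per case, the two figures quoted in this and several neighbouring studies, can be computed by matching predicted lesion locations to ground truth. A hedged sketch follows; distance-based greedy matching and the tolerance value are assumptions of this illustration, since papers differ in their matching criteria:

```python
from math import dist

def detection_stats(cases, tol=5.0):
    """cases: list of (gt, pred) pairs, each a list of lesion centroids.
    A prediction within `tol` of an unmatched ground-truth lesion is a
    true positive; leftover predictions are false positives."""
    tp = fp = fn = 0
    for gt, pred in cases:
        unmatched = list(gt)
        for p in pred:
            hit = next((g for g in unmatched if dist(p, g) <= tol), None)
            if hit is not None:
                unmatched.remove(hit)
                tp += 1
            else:
                fp += 1
        fn += len(unmatched)
    return tp / (tp + fn), fp / len(cases)  # sensitivity, FPs/case

cases = [
    ([(0, 0, 0), (10, 10, 10)], [(1, 0, 0), (30, 30, 30)]),  # 1 hit, 1 FP, 1 miss
    ([(5, 5, 5)], [(5, 5, 6)]),                               # 1 hit
]
sens, fps = detection_stats(cases)
print(sens, fps)  # 0.666..., 0.5
```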
Affiliation(s)
- Yoshitomo Kikuchi, Kazufumi Kikuchi, Daichi Momosaka, Kousei Ishigami, Akio Hiwatashi: Department of Clinical Radiology, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Osamu Togao: Department of Molecular Imaging and Diagnosis, Graduate School of Medical Sciences, Kyushu University, Fukuoka, Japan
- Makoto Obara: MR Clinical Science, Philips Japan Ltd, Tokyo, Japan

40
Deep Transfer Learning for Automatic Prediction of Hemorrhagic Stroke on CT Images. Comput Math Methods Med 2022; 2022:3560507. [PMID: 35469220 PMCID: PMC9034929 DOI: 10.1155/2022/3560507] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Accepted: 03/29/2022] [Indexed: 11/21/2022]
Abstract
Intracerebral hemorrhage (ICH) is the most common type of hemorrhagic stroke and occurs due to rupture of weakened blood vessels in brain tissue. It is a serious medical emergency that requires immediate treatment. Large numbers of noncontrast computed tomography (NCCT) brain images are analyzed manually by radiologists to diagnose hemorrhagic stroke, which is a difficult and time-consuming process. In this study, we propose an automated deep transfer learning method that combines ResNet-50 with a dense layer for accurate prediction of intracranial hemorrhage on NCCT brain images. A total of 1164 NCCT brain images were collected from 62 patients with hemorrhagic stroke at the Kalinga Institute of Medical Science, Bhubaneswar, and used for evaluating the model. The proposed model takes individual CT images as input and classifies them as hemorrhagic or normal. This deep transfer learning approach reached 99.6% accuracy, 99.7% specificity, and 99.4% sensitivity, better results than those of ResNet-50 alone. It is evident that the deep transfer learning model has advantages for automatic diagnosis of hemorrhagic stroke and has the potential to be used as a clinical decision support tool to assist radiologists in stroke diagnosis.
41
Dikici E, Nguyen XV, Bigelow M, Prevedello LM. Augmented Networks for Faster Brain Metastases Detection in T1-Weighted Contrast-Enhanced 3D MRI. Comput Med Imaging Graph 2022; 98:102059. [DOI: 10.1016/j.compmedimag.2022.102059] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2021] [Revised: 01/21/2022] [Accepted: 03/17/2022] [Indexed: 10/18/2022]
42
Shirokikh B, Dalechina A, Shevtsov A, Krivov E, Kostjuchenko V, Durgaryan A, Galkin M, Golanov A, Belyaev M. Systematic Clinical Evaluation of A Deep Learning Method for Medical Image Segmentation: Radiosurgery Application. IEEE J Biomed Health Inform 2022; 26:3037-3046. [PMID: 35213318 DOI: 10.1109/jbhi.2022.3153394] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
We systematically evaluate a Deep Learning model in a 3D medical image segmentation task. With our model, we address the flaws of manual segmentation: high inter-rater contouring variability and time consumption of the contouring process. The main extension over the existing evaluations is the careful and detailed analysis that could be further generalized on other medical image segmentation tasks. Firstly, we analyze the changes in the inter-rater detection agreement. We show that the model reduces the number of detection disagreements by 48% (p < 0.05). Secondly, we show that the model improves the inter-rater contouring agreement from 0.845 to 0.871 surface Dice Score (p < 0.05). Thirdly, we show that the model accelerates the delineation process between 1.6 and 2.0 times (p < 0.05). Finally, we design the setup of the clinical experiment to either exclude or estimate the evaluation biases; thus, preserving the significance of the results. Besides the clinical evaluation, we also share intuitions and practical ideas for building an efficient DL-based model for 3D medical image segmentation.
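The surface Dice score reported here differs from volumetric Dice: it measures the fraction of each contour's boundary lying within a tolerance of the other's boundary. An illustrative pure-Python sketch (brute-force distances; production implementations typically use distance transforms):

```python
from math import dist

def surface_dice(surf_a, surf_b, tol=1.0):
    """Fraction of boundary points of each contour within `tol` of the
    other contour's boundary (1.0 = surfaces agree within tolerance)."""
    close_a = sum(min(dist(p, q) for q in surf_b) <= tol for p in surf_a)
    close_b = sum(min(dist(p, q) for q in surf_a) <= tol for p in surf_b)
    return (close_a + close_b) / (len(surf_a) + len(surf_b))

surf_a = [(0, 0), (1, 0), (2, 0)]
surf_b = [(0, 0.5), (1, 0.5), (2, 3)]
print(surface_dice(surf_a, surf_b))  # 4/6 = 0.666...
```

Unlike volumetric Dice, this metric is insensitive to lesion size and directly reflects how much contour a clinician would still need to correct.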
43
Omari EA, Zhang Y, Ahunbay E, Paulson E, Amjad A, Chen X, Liang Y, Li XA. Multi parametric magnetic resonance imaging for radiation treatment planning. Med Phys 2022; 49:2836-2845. [PMID: 35170769 DOI: 10.1002/mp.15534] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2021] [Revised: 10/05/2021] [Accepted: 01/03/2022] [Indexed: 11/09/2022] Open
Abstract
In recent years, multi-parametric magnetic resonance imaging (MpMRI) has played a major role in radiation therapy treatment planning. The superior soft tissue contrast, functional or physiological imaging capabilities and the flexibility of site-specific image sequence development have placed MpMRI at the forefront. In this article, the present status of MpMRI for external beam radiation therapy planning is reviewed. Common MpMRI sequences, preprocessing and QA strategies are briefly discussed, and various image registration techniques and strategies are addressed. Image segmentation methods including automatic segmentation and deep learning techniques for organs at risk and target delineation are reviewed. Due to the advancement in MRI guided online adaptive radiotherapy, treatment planning considerations addressing MRI only planning are also discussed.
Affiliation(s)
- Eenas A Omari, Ying Zhang, Ergun Ahunbay, Eric Paulson, Asma Amjad, Xinfeng Chen, Ying Liang, X Allen Li: Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI 53226, USA

44
Yang Z, Chen M, Kazemimoghadam M, Ma L, Stojadinovic S, Timmerman R, Dan T, Wardak Z, Lu W, Gu X. Deep-learning and radiomics ensemble classifier for false positive reduction in brain metastases segmentation. Phys Med Biol 2022; 67:10.1088/1361-6560/ac4667. [PMID: 34952535 PMCID: PMC8858586 DOI: 10.1088/1361-6560/ac4667] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2021] [Accepted: 12/24/2021] [Indexed: 01/21/2023]
Abstract
Stereotactic radiosurgery (SRS) is now the standard of care for brain metastases (BMs) patients. The SRS treatment planning process requires precise target delineation, which, in the clinical workflow for patients with multiple (>4) BMs (mBMs), can become a pronounced time bottleneck. Our group has developed an automated BMs segmentation platform to assist in this process. The accuracy of the auto-segmentation, however, is influenced by the presence of false-positive segmentations, mainly caused by the injected contrast during MRI acquisition. To address this problem and further improve segmentation performance, a deep-learning and radiomics ensemble classifier was developed to reduce the false-positive rate in segmentations. The proposed model consists of a Siamese network and a radiomics-based support vector machine (SVM) classifier. The 2D-based Siamese network contains a pair of parallel feature extractors with shared weights followed by a single classifier; this architecture is designed to identify inter-class differences. The SVM model, in turn, takes radiomic features extracted from the 3D segmentation volumes as input for twofold classification: either a false-positive segmentation or a true BM. Lastly, the outputs from both models form an ensemble to generate the final label. On the segmented mBMs testing dataset, the proposed model reached an accuracy (ACC), sensitivity (SEN), specificity (SPE), and area under the curve of 0.91, 0.96, 0.90, and 0.93, respectively. After integrating the proposed model into the original segmentation platform, the average segmentation false-negative rate (FNR) and false positive over the union (FPoU) were 0.13 and 0.09, respectively, which preserved the initial FNR (0.07) and significantly improved the FPoU (0.55). The proposed method effectively reduced the false-positive rate in the raw BMs segmentations, indicating that integrating the proposed ensemble classifier into the BMs segmentation platform provides a beneficial tool for mBMs SRS management.
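The ensemble step, combining an image-based classifier score with a radiomics-based SVM score to discard likely false-positive candidates, can be sketched as below. The averaging rule and the 0.5 threshold are illustrative assumptions; the abstract does not specify the exact combination scheme:

```python
def filter_candidates(candidates, threshold=0.5):
    """candidates: (candidate_id, cnn_prob, svm_prob) triples, where each
    probability is that model's confidence the candidate is a true BM.
    Keep candidates whose averaged score clears the threshold."""
    return [
        cand_id
        for cand_id, cnn_p, svm_p in candidates
        if (cnn_p + svm_p) / 2 >= threshold
    ]

candidates = [("bm_1", 0.9, 0.8), ("contrast_fp", 0.6, 0.2), ("bm_2", 0.4, 0.7)]
print(filter_candidates(candidates))  # ['bm_1', 'bm_2']
```

The appeal of such an ensemble is that the two scores come from complementary evidence (2D image appearance vs. 3D radiomic shape/texture), so their disagreements tend to flag exactly the contrast-induced false positives.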
Affiliation(s)
- Zi Yang, Mingli Chen, Mahdieh Kazemimoghadam, Lin Ma, Strahinja Stojadinovic, Robert Timmerman, Tu Dan, Zabi Wardak, Weiguo Lu: Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Xuejun Gu: Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA; Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA

45
Liu X, Han C, Cui Y, Xie T, Zhang X, Wang X. Detection and Segmentation of Pelvic Bones Metastases in MRI Images for Patients With Prostate Cancer Based on Deep Learning. Front Oncol 2021; 11:773299. [PMID: 34912716 PMCID: PMC8666439 DOI: 10.3389/fonc.2021.773299] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2021] [Accepted: 11/08/2021] [Indexed: 12/23/2022] Open
Abstract
Objective To establish and evaluate the 3D U-Net model for automated segmentation and detection of pelvic bone metastases in patients with prostate cancer (PCa) using diffusion-weighted imaging (DWI) and T1 weighted imaging (T1WI) images. Methods The model consisted of two 3D U-Net algorithms. A total of 859 patients with clinically suspected or confirmed PCa between January 2017 and December 2020 were enrolled for the first 3D U-Net development of pelvic bony structure segmentation. Then, 334 PCa patients were selected for the model development of bone metastases segmentation. Additionally, 63 patients from January to May 2021 were recruited for the external evaluation of the network. The network was developed using DWI and T1WI images as input. Dice similarity coefficient (DSC), volumetric similarity (VS), and Hausdorff distance (HD) were used to evaluate the segmentation performance. Sensitivity, specificity, and area under the curve (AUC) were used to evaluate the detection performance at the patient level; recall, precision, and F1-score were assessed at the lesion level. Results The pelvic bony structures segmentation on DWI and T1WI images had mean DSC and VS values above 0.85, and the HD values were <15 mm. In the testing set, the AUC of the metastases detection at the patient level were 0.85 and 0.80 on DWI and T1WI images. At the lesion level, the F1-score achieved 87.6% and 87.8% concerning metastases detection on DWI and T1WI images, respectively. In the external dataset, the AUC of the model for M-staging was 0.94 and 0.89 on DWI and T1WI images. Conclusion The deep learning-based 3D U-Net network yields accurate detection and segmentation of pelvic bone metastases for PCa patients on DWI and T1WI images, which lays a foundation for the whole-body skeletal metastases assessment.
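The lesion-level F1-score used here is the harmonic mean of precision and recall; a one-line illustration (the 0.9/0.85 inputs are made-up values, not the study's results):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall (lesion-level F1-score)."""
    return 2 * precision * recall / (precision + recall)

print(f1(0.9, 0.85))  # 0.874...
```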
Affiliation(s)
- Xiang Liu, Chao Han, Yingpu Cui, Xiaodong Zhang, Xiaoying Wang: Department of Radiology, Peking University First Hospital, Beijing, China
- Tingting Xie: Department of Radiology, Peking University Shenzhen Hospital, Shenzhen, China

46
Risk Factors of Restroke in Patients with Lacunar Cerebral Infarction Using Magnetic Resonance Imaging Image Features under Deep Learning Algorithm. Contrast Media Mol Imaging 2021; 2021:2527595. [PMID: 34887708 PMCID: PMC8616697 DOI: 10.1155/2021/2527595] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Revised: 10/18/2021] [Accepted: 10/23/2021] [Indexed: 02/08/2023]
Abstract
This study aimed to explore magnetic resonance imaging (MRI) image features based on the fuzzy local information C-means clustering (FLICM) image segmentation method to analyze the risk factors for recurrent stroke in patients with lacunar infarction. Based on the FLICM algorithm, the Canny edge detection algorithm and the Fourier shape descriptor were introduced to optimize the algorithm. The differences in Jaccard coefficient, Dice coefficient, peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), running time, and segmentation accuracy between the optimized FLICM algorithm and other algorithms when segmenting brain tissue MRI images were studied. Thirty-six patients with lacunar infarction were selected as the study subjects and divided into a control group (no recurrent stroke, 20 cases) and a stroke group (recurrent stroke, 16 cases) according to whether the patients had a recurrent stroke. The differences in MRI imaging characteristics of the two groups were compared, and the risk factors for recurrent stroke after lacunar infarction were analyzed by multivariate logistic regression. The results showed that the Jaccard coefficient, Dice coefficient, PSNR, and SSIM of the optimized FLICM algorithm for segmenting brain tissue were all higher than those of other algorithms, with a shortest running time of 26 s and a highest accuracy of 97.86%. The proportion of patients with a history of hypertension, the proportion with a periventricular white matter lesion (WML) score greater than 2, the proportion with a deep WML score of 2, and the average age were all significantly higher in the stroke group than in the control group (P < 0.05). Multivariate logistic regression showed that age and history of hypertension were risk factors for recurrent stroke after lacunar infarction (P < 0.05). These results indicate that the optimized FLICM algorithm can effectively segment brain MRI images and that the risk factors for recurrent stroke in patients with lacunar infarction are age and history of hypertension. This study could provide a reference for the diagnosis and prognosis of lacunar infarction.
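A side note on the overlap metrics compared in this and the other segmentation studies above: the Jaccard and Dice coefficients carry the same information and convert into each other exactly, so reporting both is redundant. A minimal illustration:

```python
def jaccard_from_dice(d):
    """Jaccard J = |A∩B|/|A∪B| relates to Dice D = 2|A∩B|/(|A|+|B|)
    via J = D / (2 - D)."""
    return d / (2.0 - d)

def dice_from_jaccard(j):
    """Inverse mapping: D = 2J / (1 + J)."""
    return 2.0 * j / (1.0 + j)

print(jaccard_from_dice(0.8))    # 0.666...
print(dice_from_jaccard(2 / 3))  # 0.8
```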
47
Williams S, Layard Horsfall H, Funnell JP, Hanrahan JG, Khan DZ, Muirhead W, Stoyanov D, Marcus HJ. Artificial Intelligence in Brain Tumour Surgery-An Emerging Paradigm. Cancers (Basel) 2021; 13:cancers13195010. [PMID: 34638495 PMCID: PMC8508169 DOI: 10.3390/cancers13195010] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Revised: 10/02/2021] [Accepted: 10/03/2021] [Indexed: 01/01/2023] Open
Abstract
Artificial intelligence (AI) platforms have the potential to cause a paradigm shift in brain tumour surgery. Brain tumour surgery augmented with AI can result in safer and more effective treatment. In this review article, we explore the current and future role of AI in patients undergoing brain tumour surgery, including aiding diagnosis, optimising the surgical plan, providing support during the operation, and better predicting the prognosis. Finally, we discuss barriers to successful clinical implementation and the ethical concerns, and provide our perspective on how the field could be advanced.
Affiliation(s)
- Simon Williams (corresponding author), Hugo Layard Horsfall, Jonathan P. Funnell, John G. Hanrahan, Danyal Z. Khan, William Muirhead, Hani J. Marcus: Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK; Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK
- Danail Stoyanov: Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK

48
Hsu DG, Ballangrud Å, Shamseddine A, Deasy JO, Veeraraghavan H, Cervino L, Beal K, Aristophanous M. Automatic segmentation of brain metastases using T1 magnetic resonance and computed tomography images. Phys Med Biol 2021; 66. [PMID: 34315148 DOI: 10.1088/1361-6560/ac1835] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2021] [Accepted: 07/27/2021] [Indexed: 12/26/2022]
Abstract
An increasing number of patients with multiple brain metastases are being treated with stereotactic radiosurgery (SRS). Manually identifying and contouring all metastatic lesions is difficult and time-consuming, and a potential source of variability. Hence, we developed a 3D deep learning approach for segmenting brain metastases on MR and CT images. Five hundred eleven patients treated with SRS were retrospectively identified for this study. Prior to radiotherapy, the patients were imaged with 3D T1 spoiled-gradient MR post-Gd (T1 + C) and contrast-enhanced CT (CECT), which were co-registered by a treatment planner. The gross tumor volume contours, authored by the attending radiation oncologist, were taken as the ground truth. There were 3 ± 4 metastases per patient, with volume up to 57 ml. We produced a multi-stage model that automatically performs brain extraction, followed by detection and segmentation of brain metastases using co-registered T1 + C and CECT. Augmented data from 80% of these patients were used to train modified 3D V-Net convolutional neural networks for this task. We combined a normalized boundary loss function with soft Dice loss to improve the model optimization, and employed gradient accumulation to stabilize the training. The average Dice similarity coefficient (DSC) for brain extraction was 0.975 ± 0.002 (95% CI). The detection sensitivity per metastasis was 90% (329/367), with moderate dependence on metastasis size. Averaged across 102 test patients, our approach had metastasis detection sensitivity 95 ± 3%, 2.4 ± 0.5 false positives, DSC of 0.76 ± 0.03, and 95th-percentile Hausdorff distance of 2.5 ± 0.3 mm (95% CIs). The volumes of automatic and manual segmentations were strongly correlated for metastases of volume up to 20 ml (r = 0.97, p < 0.001). This work expounds a fully 3D deep learning approach capable of automatically detecting and segmenting brain metastases using co-registered T1 + C and CECT.
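The soft Dice term of the combined loss mentioned in this abstract has a standard form; below is a minimal sketch on flattened probability maps. The normalized boundary loss and the exact weighting between the two terms are not reproduced here, and the example inputs are made up:

```python
def soft_dice_loss(pred, target, eps=1e-6):
    """1 - soft Dice between a predicted probability map and a binary
    target, both flattened to lists; eps guards the empty-mask case."""
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

pred = [0.9, 0.8, 0.1, 0.0]   # network probabilities for 4 voxels
target = [1, 1, 0, 0]         # ground-truth mask
print(soft_dice_loss(pred, target))  # ~0.105: prediction matches target well
```

Because the loss is differentiable in the predicted probabilities, it can be minimized directly by gradient descent, unlike the thresholded Dice metric itself.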
Affiliation(s)
- Dylan G Hsu, Åse Ballangrud, Joseph O Deasy, Harini Veeraraghavan, Laura Cervino, Michalis Aristophanous: Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, USA
- Achraf Shamseddine, Kathryn Beal: Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, NY 10065, USA

49
Robert C, Munoz A, Moreau D, Mazurier J, Sidorski G, Gasnier A, Beldjoudi G, Grégoire V, Deutsch E, Meyer P, Simon L. Clinical implementation of deep-learning based auto-contouring tools-Experience of three French radiotherapy centers. Cancer Radiother 2021; 25:607-616. [PMID: 34389243 DOI: 10.1016/j.canrad.2021.06.023] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Revised: 06/17/2021] [Accepted: 06/18/2021] [Indexed: 12/23/2022]
Abstract
Deep-learning (DL)-based auto-contouring solutions have recently been proposed as a convincing alternative for decreasing the workload of target volume and organ-at-risk (OAR) delineation in radiotherapy planning and improving inter-observer consistency. However, there is minimal literature on clinical implementations of such algorithms in routine practice. In this paper we first present an update on the state of the art of DL-based solutions. We then summarize recent recommendations proposed by the European Society for Radiotherapy and Oncology (ESTRO) to be followed before any clinical implementation of artificial-intelligence-based solutions in the clinic. The last section describes the methodology carried out by three French radiation oncology departments to deploy CE-marked commercial solutions. Based on the information collected, a majority of the OAR models proposed by the manufacturers are retained by the centers, validating the usefulness of DL-based models for decreasing clinicians' workload. Target volumes, with the exception of lymph node areas in breast, head and neck and pelvic regions, whole breast, breast wall, prostate and seminal vesicles, are not available in the three commercial solutions at this time. No workflows are currently implemented to continuously improve the models, but in some solutions the models can be adapted or retrained during the commissioning phase to best fit local practices. In the reported experiences, automatic workflows were implemented to limit human interactions and make the workflow more fluid. The recommendations published by the ESTRO group will be important for guiding physicists in the clinical implementation of patient-specific and regular quality assurance.
Affiliation(s)
- C Robert
- Department of Radiotherapy, Gustave-Roussy, Villejuif, France.
| | - A Munoz
- Department of Radiotherapy, Centre Léon-Bérard, Lyon, France
| | - D Moreau
- Department of Radiotherapy, Hôpital Européen Georges-Pompidou, Paris, France
| | - J Mazurier
- Department of Radiotherapy, Clinique Pasteur-Oncorad, Toulouse, France
| | - G Sidorski
- Department of Radiotherapy, Clinique Pasteur-Oncorad, Toulouse, France
| | - A Gasnier
- Department of Radiotherapy, Gustave-Roussy, Villejuif, France
| | - G Beldjoudi
- Department of Radiotherapy, Centre Léon-Bérard, Lyon, France
| | - V Grégoire
- Department of Radiotherapy, Centre Léon-Bérard, Lyon, France
| | - E Deutsch
- Department of Radiotherapy, Gustave-Roussy, Villejuif, France
| | - P Meyer
- Service d'Oncologie Radiothérapie, Institut de Cancérologie Strasbourg Europe (Icans), Strasbourg, France
| | - L Simon
- Institut Claudius Regaud (ICR), Institut Universitaire du Cancer de Toulouse - Oncopole (IUCT-O), Toulouse, France
| |
|
50
|
Automatic segmentation of uterine endometrial cancer on multi-sequence MRI using a convolutional neural network. Sci Rep 2021; 11:14440. [PMID: 34262088 PMCID: PMC8280152 DOI: 10.1038/s41598-021-93792-7] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2021] [Accepted: 06/29/2021] [Indexed: 12/29/2022] Open
Abstract
Endometrial cancer (EC) is the most common gynecological tumor in developed countries, and preoperative risk stratification is essential for personalized medicine. There have been several radiomics studies for noninvasive risk stratification of EC using MRI. Although tumor segmentation is usually necessary for these studies, manual segmentation is not only labor-intensive but may also be subjective. Therefore, our study aimed to perform automatic segmentation of EC on MRI with a convolutional neural network. The effect of the input image sequences and batch size on segmentation performance was also investigated. Of 200 patients with EC, 180 were used for training the modified U-net model, and 20 for testing the segmentation performance and the robustness of automatically extracted radiomics features. Using multi-sequence images and a larger batch size was effective in improving segmentation accuracy. The mean Dice similarity coefficient, sensitivity, and positive predictive value of our model for the test set were 0.806, 0.816, and 0.834, respectively. The robustness of automatically extracted first-order and shape-based features was high (median ICC = 0.86 and 0.96, respectively). Other higher-order features showed moderate-to-high robustness (median ICC = 0.57–0.93). Our model could automatically segment EC on MRI and extract radiomics features with high reliability.
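The three metrics reported above (Dice similarity coefficient, sensitivity, and positive predictive value) are all simple overlap statistics between a predicted and a reference binary mask. A minimal sketch of how they are computed is shown below; the toy masks are illustrative, not data from the study.

```python
import numpy as np

# Toy binary masks standing in for a predicted and a ground-truth tumor
# segmentation (values are illustrative only).
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0]], dtype=bool)
pred = np.array([[0, 1, 1, 1],
                 [0, 0, 1, 0],
                 [0, 0, 0, 0]], dtype=bool)

tp = np.logical_and(pred, truth).sum()   # true-positive voxels
fp = np.logical_and(pred, ~truth).sum()  # false-positive voxels
fn = np.logical_and(~pred, truth).sum()  # false-negative voxels

dice = 2 * tp / (2 * tp + fp + fn)  # Dice similarity coefficient
sensitivity = tp / (tp + fn)        # recall on the tumor class
ppv = tp / (tp + fp)                # positive predictive value

print(dice, sensitivity, ppv)  # 0.75 0.75 0.75 for these toy masks
```

Reporting all three together is informative because Dice alone conflates over- and under-segmentation, while sensitivity and PPV separate the two error modes.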
|