1. Machura B, Kucharski D, Bozek O, Eksner B, Kokoszka B, Pekala T, Radom M, Strzelczak M, Zarudzki L, Gutiérrez-Becker B, Krason A, Tessier J, Nalepa J. Deep learning ensembles for detecting brain metastases in longitudinal multi-modal MRI studies. Comput Med Imaging Graph 2024; 116:102401. PMID: 38795690. DOI: 10.1016/j.compmedimag.2024.102401.
Abstract
Metastatic brain cancer is a condition characterized by the migration of cancer cells to the brain from extracranial sites. Notably, metastatic brain tumors surpass primary brain tumors in prevalence by a significant factor; they exhibit aggressive growth potential and can spread across diverse cerebral locations simultaneously. Magnetic resonance imaging (MRI) scans of individuals afflicted with metastatic brain tumors reveal a wide spectrum of characteristics. These lesions vary in size and quantity, from tiny nodules to substantial masses, and patients may present with a limited number of lesions or an extensive burden of hundreds of them. Moreover, longitudinal studies may depict surgical resection cavities, as well as areas of necrosis or edema. Thus, manual analysis of such MRI scans is difficult, user-dependent and cost-inefficient, and - importantly - it lacks reproducibility. We address these challenges and propose a pipeline for detecting and analyzing brain metastases in longitudinal studies, which benefits from an ensemble of various deep learning architectures originally designed for different downstream tasks (detection and segmentation). The experiments, performed over 275 multi-modal MRI scans of 87 patients acquired at 53 sites, coupled with rigorously validated manual annotations, revealed that our pipeline, built upon open-source tools to ensure its reproducibility, offers high-quality detection and allows disease progression to be tracked precisely. To objectively quantify the generalizability of models, we introduce a new data stratification approach that accommodates the heterogeneity of the dataset and is used to build training-test splits in a data-robust manner, alongside a new set of quality metrics for objectively assessing algorithms. Our system provides a fully automatic and quantitative approach that may support physicians in the laborious process of tracking disease progression and evaluating treatment efficacy.
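As an illustration of the ensembling idea described above, the following Python sketch fuses voxel-wise probability maps from several segmentation models by averaging and thresholding, then discards tiny connected components. It is a minimal sketch assuming co-registered NumPy probability maps and the SciPy library; it is not the authors' pipeline, whose architectures, fusion rule, and post-processing are described in the paper itself.

```python
import numpy as np
from scipy import ndimage

def ensemble_lesion_mask(prob_maps, threshold=0.5, min_voxels=5):
    """Fuse voxel-wise probability maps from several models into one lesion mask."""
    mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)
    mask = mean_prob > threshold
    # Drop tiny connected components that are unlikely to be true metastases.
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep_ids = [i + 1 for i, s in enumerate(sizes) if s >= min_voxels]
    return np.isin(labels, keep_ids)

# Toy example with three dummy "model" outputs on a small volume.
rng = np.random.default_rng(0)
maps = [rng.random((32, 32, 32)) for _ in range(3)]
mask = ensemble_lesion_mask(maps, threshold=0.8)
print(mask.sum(), "voxels flagged as lesion")
```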
Affiliation(s)
- Damian Kucharski, Jakub Nalepa: Graylight Imaging, Gliwice, Poland; Silesian University of Technology, Gliwice, Poland.
- Oskar Bozek, Bartosz Kokoszka: Department of Radiodiagnostics and Invasive Radiology, School of Medicine in Katowice, Medical University of Silesia in Katowice, Katowice, Poland.
- Bartosz Eksner: Department of Radiology and Nuclear Medicine, ZSM Chorzów, Chorzów, Poland.
- Tomasz Pekala: Department of Radiodiagnostics, Interventional Radiology and Nuclear Medicine, University Clinical Centre, Katowice, Poland.
- Mateusz Radom, Marek Strzelczak, Lukasz Zarudzki: Department of Radiology and Diagnostic Imaging, Maria Skłodowska-Curie National Research Institute of Oncology, Gliwice Branch, Gliwice, Poland.
- Benjamín Gutiérrez-Becker: Roche Pharma Research and Early Development, Informatics, Roche Innovation Center Basel, Basel, Switzerland.
- Agata Krason, Jean Tessier: Roche Pharma Research and Early Development, Early Clinical Development Oncology, Roche Innovation Center Basel, Basel, Switzerland.
2. Cho SJ, Cho W, Choi D, Sim G, Jeong SY, Baik SH, Bae YJ, Choi BS, Kim JH, Yoo S, Han JH, Kim CY, Choo J, Sunwoo L. Prediction of treatment response after stereotactic radiosurgery of brain metastasis using deep learning and radiomics on longitudinal MRI data. Sci Rep 2024; 14:11085. PMID: 38750084. PMCID: PMC11096355. DOI: 10.1038/s41598-024-60781-5.
Abstract
We developed artificial intelligence models to predict the brain metastasis (BM) treatment response after stereotactic radiosurgery (SRS) using longitudinal magnetic resonance imaging (MRI) data, and evaluated how prediction accuracy changes with the number of sequential MRI scans. We included four sequential MRI scans for 194 patients with BM and 369 target lesions in the development dataset. The data were randomly split (8:2 ratio) for training and testing. For external validation, 172 MRI scans from 43 patients with BM and 62 target lesions were additionally enrolled. Maximum axial diameter (Dmax), radiomics, and deep learning (DL) models were generated for comparison. In the DL arm, we evaluated a simple convolutional neural network (CNN) model and a convolutional gated recurrent unit (Conv-GRU)-based CNN model. The Conv-GRU model outperformed the simple CNN models. For both datasets, the area under the curve (AUC) was significantly higher for the two-dimensional (2D) Conv-GRU model than for the 3D Conv-GRU, Dmax, and radiomics models. The accuracy of the 2D Conv-GRU model increased with the number of follow-up studies. In conclusion, using longitudinal MRI data, the 2D Conv-GRU model outperformed all other models in predicting the treatment response after SRS of BM.
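The following PyTorch sketch illustrates the general Conv-GRU idea of encoding each follow-up scan with a small 2D CNN and passing the per-timepoint features through a GRU before a binary prediction head. Layer sizes, patch dimensions, and the module name are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class LongitudinalConvGRU(nn.Module):
    """Toy 2D CNN + GRU classifier over a sequence of lesion-centred MRI patches."""

    def __init__(self, hidden=64):
        super().__init__()
        # Per-timepoint 2D encoder (illustrative sizes only).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> (batch, 32)
        )
        self.gru = nn.GRU(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)                # binary response prediction

    def forward(self, x):                               # x: (batch, time, 1, H, W)
        b, t = x.shape[:2]
        feats = self.encoder(x.flatten(0, 1)).view(b, t, -1)
        _, h = self.gru(feats)                          # h: (1, batch, hidden)
        return self.head(h.squeeze(0))                  # logits, shape (batch, 1)

model = LongitudinalConvGRU()
scans = torch.randn(4, 3, 1, 64, 64)                    # 4 patients, 3 follow-up scans
print(model(scans).shape)                                # torch.Size([4, 1])
```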
Affiliation(s)
- Se Jin Cho, So Yeong Jeong, Sung Hyun Baik, Yun Jung Bae, Byung Se Choi, Jae Hyoung Kim: Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea.
- Wonwoo Cho, Dongmin Choi, Gyuhyeon Sim, Jaegul Choo: Kim Jaechul Graduate School of Artificial Intelligence, KAIST, 291 Daehak-Ro, Yuseong-Gu, Daejeon, 34141, Republic of Korea; Letsur Inc, 180 Yeoksam-Ro, Gangnam-Gu, Seoul, 06248, Republic of Korea.
- Sooyoung Yoo: Office of eHealth Research and Business, Seoul National University Bundang Hospital, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea.
- Jung Ho Han, Chae-Yong Kim: Department of Neurosurgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea.
- Leonard Sunwoo: Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea; Center for Artificial Intelligence in Healthcare, Seoul National University Bundang Hospital, 82, Gumi-Ro 173Beon-Gil, Bundang-Gu, Seongnam, Gyeonggi, 13620, Republic of Korea.
3. Kim M, Wang JY, Lu W, Jiang H, Stojadinovic S, Wardak Z, Dan T, Timmerman R, Wang L, Chuang C, Szalkowski G, Liu L, Pollom E, Rahimy E, Soltys S, Chen M, Gu X. Where Does Auto-Segmentation for Brain Metastases Radiosurgery Stand Today? Bioengineering (Basel) 2024; 11:454. PMID: 38790322. PMCID: PMC11117895. DOI: 10.3390/bioengineering11050454.
Abstract
Detection and segmentation of brain metastases (BMs) play a pivotal role in diagnosis, treatment planning, and follow-up evaluations for effective BM management. Given the rising prevalence of BM cases and their predominantly multiple presentation, automated segmentation is becoming necessary in stereotactic radiosurgery. It not only alleviates the clinician's manual workload and improves clinical workflow efficiency but also ensures treatment safety, ultimately improving patient care. Recent strides in machine learning, particularly in deep learning (DL), have revolutionized medical image segmentation, achieving state-of-the-art results. This review aims to analyze auto-segmentation strategies, characterize the utilized data, and assess the performance of cutting-edge BM segmentation methodologies. Additionally, we delve into the challenges confronting BM segmentation and share insights gleaned from our algorithmic and clinical implementation experiences.
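Most of the studies surveyed report overlap with expert contours using the Dice similarity coefficient (DSC); a minimal NumPy implementation is shown below for reference, with a toy example of two offset cubes.

```python
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks (numpy arrays)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

a = np.zeros((10, 10, 10), dtype=bool); a[2:6, 2:6, 2:6] = True
b = np.zeros_like(a); b[3:7, 3:7, 3:7] = True
print(round(dice(a, b), 3))   # overlap of two offset cubes, ~0.42
```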
Affiliation(s)
- Matthew Kim, Jen-Yeu Wang, Lei Wang, Cynthia Chuang, Gregory Szalkowski, Lianli Liu, Erqi Pollom, Elham Rahimy, Scott Soltys: Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA.
- Weiguo Lu, Zabi Wardak, Tu Dan, Robert Timmerman, Mingli Chen: Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA.
- Hao Jiang: NeuralRad LLC, Madison, WI 53717, USA.
- Xuejun Gu: Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA; Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA.
4. Higgins H, Nakhla A, Lotfalla A, Khalil D, Doshi P, Thakkar V, Shirini D, Bebawy M, Ammari S, Lopci E, Schwartz LH, Postow M, Dercle L. Recent Advances in the Field of Artificial Intelligence for Precision Medicine in Patients with a Diagnosis of Metastatic Cutaneous Melanoma. Diagnostics (Basel) 2023; 13:3483. PMID: 37998619. PMCID: PMC10670510. DOI: 10.3390/diagnostics13223483.
Abstract
Standard-of-care medical imaging techniques such as CT, MRI, and PET play a critical role in managing patients diagnosed with metastatic cutaneous melanoma. Advancements in artificial intelligence (AI) techniques, such as radiomics, machine learning, and deep learning, could revolutionize the use of medical imaging by enhancing individualized image-guided precision medicine approaches. In the present article, we will decipher how AI/radiomics could mine information from medical images, such as tumor volume, heterogeneity, and shape, to provide insights into cancer biology that can be leveraged by clinicians to improve patient care both in the clinic and in clinical trials. More specifically, we will detail the potential role of AI in enhancing detection/diagnosis, staging, treatment planning, treatment delivery, response assessment, treatment toxicity assessment, and monitoring of patients diagnosed with metastatic cutaneous melanoma. Finally, we will explore how these proof-of-concept results can be translated from bench to bedside by describing how the implementation of AI techniques can be standardized for routine adoption in clinical settings worldwide to predict outcomes with great accuracy, reproducibility, and generalizability in patients diagnosed with metastatic cutaneous melanoma.
Affiliation(s)
- Hayley Higgins, Andrew Lotfalla, Maria Bebawy: Department of Clinical Medicine, Touro College of Osteopathic Medicine, Middletown, NY 10940, USA.
- Abanoub Nakhla: Department of Clinical Medicine, American University of the Caribbean School of Medicine, 33027 Cupecoy, Sint Maarten, The Netherlands.
- David Khalil, Parth Doshi, Vandan Thakkar: Department of Clinical Medicine, Campbell University School of Osteopathic Medicine, Lillington, NC 27546, USA.
- Dorsa Shirini: Department of Radiology, Shahid Beheshti University of Medical Sciences, Tehran 1981619573, Iran.
- Samy Ammari: Département d'Imagerie Médicale Biomaps, UMR1281 INSERM, CEA, CNRS, Gustave Roussy, Université Paris-Saclay, 94800 Villejuif, France; ELSAN Département de Radiologie, Institut de Cancérologie Paris Nord, 95200 Sarcelles, France.
- Egesta Lopci: Nuclear Medicine Unit, IRCCS Humanitas Research Hospital, 20089 Rozzano, Italy.
- Lawrence H. Schwartz: Department of Radiology, New York-Presbyterian, Columbia University Irving Medical Center, New York, NY 10032, USA.
- Michael Postow: Melanoma Service, Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA; Weill Cornell Medical College, New York, NY 10065, USA.
- Laurent Dercle: Department of Radiology, Shahid Beheshti University of Medical Sciences, Tehran 1981619573, Iran.
5. Garg P, Mohanty A, Ramisetty S, Kulkarni P, Horne D, Pisick E, Salgia R, Singhal SS. Artificial intelligence and allied subsets in early detection and preclusion of gynecological cancers. Biochim Biophys Acta Rev Cancer 2023; 1878:189026. PMID: 37980945. DOI: 10.1016/j.bbcan.2023.189026.
Abstract
Gynecological cancers, including breast, cervical, ovarian, uterine, and vaginal cancers, pose a major threat to world health, and early identification is crucial to patient outcomes and survival rates. The application of machine learning (ML) and artificial intelligence (AI) approaches to the study of gynecological cancer has shown potential to revolutionize cancer detection and diagnosis. The current review outlines the significant advancements, obstacles, and prospects brought about by AI and ML technologies in the timely identification and accurate diagnosis of different types of gynecological cancers. AI-powered technologies can use genomic data to discover genetic alterations and biomarkers linked to a particular form of gynecologic cancer, assisting in the creation of targeted treatments. Furthermore, AI and ML technologies have been shown to greatly increase the accuracy and efficacy of cancer diagnosis in gynecologic tumors, reduce diagnostic delays, and possibly eliminate the need for unnecessary invasive procedures. In conclusion, the review focuses on the integration of AI- and ML-based tools and techniques into the early detection and exclusion of various cancer types, and suggests that collaborative coordination among research clinicians, data scientists, and regulatory authorities is needed to realize the full potential of AI and ML in gynecologic cancer care.
Affiliation(s)
- Pankaj Garg: Department of Chemistry, GLA University, Mathura, Uttar Pradesh 281406, India.
- Atish Mohanty, Sravani Ramisetty, Prakash Kulkarni, Ravi Salgia, Sharad S Singhal: Departments of Medical Oncology & Therapeutics Research, Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA.
- David Horne: Molecular Medicine, Beckman Research Institute of City of Hope, Comprehensive Cancer Center and National Medical Center, Duarte, CA 91010, USA.
- Evan Pisick: Department of Medical Oncology, City of Hope, Chicago, IL 60099, USA.
6. Luo X, Yang Y, Yin S, Li H, Zhang W, Xu G, Fan W, Zheng D, Li J, Shen D, Gao Y, Shao Y, Ban X, Li J, Lian S, Zhang C, Ma L, Lin C, Luo Y, Zhou F, Wang S, Sun Y, Zhang R, Xie C. False-negative and false-positive outcomes of computer-aided detection on brain metastasis: Secondary analysis of a multicenter, multireader study. Neuro Oncol 2023; 25:544-556. PMID: 35943350. PMCID: PMC10013637. DOI: 10.1093/neuonc/noac192.
Abstract
BACKGROUND Errors have seldom been evaluated in computer-aided detection on brain metastases. This study aimed to analyze false negatives (FNs) and false positives (FPs) generated by a brain metastasis detection system (BMDS) and by readers. METHODS A deep learning-based BMDS was developed and prospectively validated in a multicenter, multireader study. Ad hoc secondary analysis was restricted to the prospective participants (148 with 1,066 brain metastases and 152 normal controls). Three trainees and 3 experienced radiologists read the MRI images without and with the BMDS. The number of FNs and FPs per patient, jackknife alternative free-response receiver operating characteristic figure of merit (FOM), and lesion features associated with FNs were analyzed for the BMDS and readers using binary logistic regression. RESULTS The FNs, FPs, and the FOM of the stand-alone BMDS were 0.49, 0.38, and 0.97, respectively. Compared with independent reading, BMDS-assisted reading generated 79% fewer FNs (1.98 vs 0.42, P < .001); 41% more FPs (0.17 vs 0.24, P < .001) but 125% more FPs for trainees (P < .001); and higher FOM (0.87 vs 0.98, P < .001). Lesions with small size, greater number, irregular shape, lower signal intensity, and located on nonbrain surface were associated with FNs for readers. Small, irregular, and necrotic lesions were more frequently found in FNs for BMDS. The FPs mainly resulted from small blood vessels for the BMDS and the readers. CONCLUSIONS Despite the improvement in detection performance, attention should be paid to FPs and small lesions with lower enhancement for radiologists, especially for less-experienced radiologists.
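A sketch of the kind of per-lesion analysis described above: binary logistic regression of miss status (FN = 1) on lesion features. The feature names, simulated data, and scikit-learn model are illustrative assumptions rather than the study's actual regression.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-lesion table: diameter (mm), relative signal intensity,
# shape irregularity score, and whether the reader missed the lesion (1 = FN).
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.gamma(2.0, 4.0, n),        # diameter
    rng.normal(1.0, 0.3, n),       # signal intensity ratio
    rng.uniform(0, 1, n),          # irregularity
])
# Simulate misses being more likely for small, low-intensity, irregular lesions.
logit = 1.5 - 0.25 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(["diameter", "intensity", "irregularity"], model.coef_[0]):
    print(f"{name:>12s}: odds ratio {np.exp(coef):.2f}")
```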
Affiliation(s)
- Xiao Luo, Yadi Yang, Shaohan Yin, Hui Li, Weijing Zhang, Guixiao Xu, Xiaohua Ban, Jing Li, Shanshan Lian, Cheng Zhang, Lidi Ma, Cuiping Lin, Yingwei Luo, Fan Zhou, Shiyuan Wang, Rong Zhang, Chuanmiao Xie: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China.
- Weixiong Fan: Department of Radiology, Meizhou People's Hospital, Meizhou, China.
- Dechun Zheng: Department of Radiology, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou, Fujian Province, China.
- Jianpeng Li: Department of Radiology, Affiliated Dongguan Hospital, Southern Medical University, Guangzhou, China.
- Dinggang Shen: R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; School of Biomedical Engineering, ShanghaiTech University, Shanghai, China.
- Yaozong Gao: R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China.
- Ying Shao: R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, Guangzhou, China.
- Ying Sun: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China.
7. A Deep Learning-Based Computer Aided Detection (CAD) System for Difficult-to-Detect Brain Metastases. Int J Radiat Oncol Biol Phys 2023; 115:779-793. PMID: 36289038. DOI: 10.1016/j.ijrobp.2022.09.068.
Abstract
PURPOSE We sought to develop a computer-aided detection (CAD) system that optimally augments human performance, excelling especially at identifying small inconspicuous brain metastases (BMs), by training a convolutional neural network on a unique magnetic resonance imaging (MRI) data set containing subtle BMs that were not detected prospectively during routine clinical care. METHODS AND MATERIALS Patients receiving stereotactic radiosurgery (SRS) for BMs at our institution from 2016 to 2018 without prior brain-directed therapy or small cell histology were eligible. For patients who underwent 2 consecutive courses of SRS, treatment planning MRIs from their initial course were reviewed for radiographic evidence of an emerging metastasis at the same location as metastases treated in their second SRS course. If present, these previously unidentified lesions were contoured and categorized as retrospectively identified metastases (RIMs). RIMs were further subcategorized according to whether they did (+DC) or did not (-DC) meet diagnostic imaging-based criteria to definitively classify them as metastases based upon their appearance in the initial MRI alone. Prospectively identified metastases (PIMs) from these patients, and from patients who only underwent a single course of SRS, were also included. An open-source convolutional neural network architecture was adapted and trained to detect both RIMs and PIMs on thin-slice, contrast-enhanced, spoiled gradient echo MRIs. Patients were randomized into 5 groups: 4 for training/cross-validation and 1 for testing. RESULTS One hundred thirty-five patients with 563 metastases, including 72 RIMS, met criteria. For the test group, CAD sensitivity was 94% for PIMs, 80% for +DC RIMs, and 79% for PIMs and +DC RIMs with diameter <3 mm, with a median of 2 false positives per patient and a Dice coefficient of 0.79. CONCLUSIONS Our CAD model, trained on a novel data set and using a single common MR sequence, demonstrated high sensitivity and specificity overall, outperforming published CAD results for small metastases and RIMs - the lesion types most in need of human performance augmentation.
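A hedged sketch of how per-lesion sensitivity and false positives per patient can be scored by greedily matching predicted lesion centroids to ground-truth centroids; the 3 mm matching radius and the greedy rule are arbitrary choices for illustration, not the criteria used in the study.

```python
import numpy as np
from scipy.spatial.distance import cdist

def score_detections(gt_centroids, pred_centroids, max_dist_mm=3.0):
    """Greedy centroid matching; returns (true positives, false negatives, false positives)."""
    if len(gt_centroids) == 0:
        return 0, 0, len(pred_centroids)
    if len(pred_centroids) == 0:
        return 0, len(gt_centroids), 0
    d = cdist(np.asarray(gt_centroids), np.asarray(pred_centroids))
    matched_gt, matched_pred = set(), set()
    for gi, pi in sorted(np.ndindex(d.shape), key=lambda ij: d[ij]):
        if d[gi, pi] <= max_dist_mm and gi not in matched_gt and pi not in matched_pred:
            matched_gt.add(gi)
            matched_pred.add(pi)
    tp = len(matched_gt)
    return tp, len(gt_centroids) - tp, len(pred_centroids) - tp

gt = [(10, 10, 10), (40, 40, 40)]
pred = [(11, 10, 10), (80, 80, 80)]
tp, fn, fp = score_detections(gt, pred)
print(f"sensitivity {tp / (tp + fn):.2f}, false positives per patient {fp}")
```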
8. Application of artificial intelligence to stereotactic radiosurgery for intracranial lesions: detection, segmentation, and outcome prediction. J Neurooncol 2023; 161:441-450. PMID: 36635582. DOI: 10.1007/s11060-022-04234-x.
Abstract
BACKGROUND The rapid evolution of artificial intelligence (AI) has prompted its wide application in healthcare systems. Stereotactic radiosurgery has served as a good candidate for AI model development and has achieved encouraging results in recent years. This article aims to demonstrate current AI applications in radiosurgery. METHODS Literature published in PubMed during 2010-2022 discussing AI applications in stereotactic radiosurgery was reviewed. RESULTS AI algorithms, especially machine learning/deep learning models, have been applied to different aspects of stereotactic radiosurgery. Spontaneous tumor detection and automated lesion delineation or segmentation were two of the promising applications, which could be further extended to longitudinal treatment follow-up. Outcome prediction using machine learning algorithms with radiomics-based analysis was another well-established application. CONCLUSIONS Stereotactic radiosurgery has taken a lead role in AI development. Current achievements, limitations, and directions for further investigation are summarized in this article.
9. Deep learning-based detection algorithm for brain metastases on black blood imaging. Sci Rep 2022; 12:19503. PMID: 36376364. PMCID: PMC9663732. DOI: 10.1038/s41598-022-23687-8.
Abstract
Brain metastases (BM) are the most common intracranial tumors, and their prevalence is increasing. High-resolution black-blood (BB) imaging is used to complement conventional contrast-enhanced 3D gradient-echo imaging in detecting BM. In this study, we propose an efficient deep learning algorithm (DLA) for BM detection on contrast-enhanced BB imaging and assess its efficacy for automatic BM detection. A total of 113 participants with BM harboring 585 metastases were included in the training cohort for five-fold cross-validation. The You Only Look Once (YOLO) V2 network was trained on 3D BB sampling perfection with application-optimized contrasts using different flip angle evolution (SPACE) images for BM detection. For the observer performance study, two board-certified radiologists and two second-year radiology residents detected the BM and recorded their reading times. For the training cohort, the overall performance of the five-fold cross-validation was 87.95%, 24.82%, 19.35%, 14.48, and 18.40 for sensitivity, precision, F1-score, the average number of false positives in the BM dataset, and the average number of false positives in the normal-individual dataset, respectively. With DLA assistance, the average reading time was reduced by 20.86% (range, 15.22-25.77%). The proposed method has the potential to detect BM with high sensitivity and a limited number of false positives using BB imaging.
10. Yin S, Luo X, Yang Y, Shao Y, Ma L, Lin C, Yang Q, Wang D, Luo Y, Mai Z, Fan W, Zheng D, Li J, Cheng F, Zhang Y, Zhong X, Shen F, Shao G, Wu J, Sun Y, Luo H, Li C, Gao Y, Shen D, Zhang R, Xie C. OUP accepted manuscript. Neuro Oncol 2022; 24:1559-1570. PMID: 35100427. PMCID: PMC9435500. DOI: 10.1093/neuonc/noac025.
Affiliation(s)
- Ying Shao: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, Guangzhou, China.
- Lidi Ma, Qiuxia Yang, Deling Wang, Yingwei Luo, Zhijun Mai: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China; Department of Radiology, Sun Yat-Sen University Cancer Center, Guangzhou, China.
- Cuiping Lin: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China.
- Weixiong Fan, Fengyan Cheng, Yuhui Zhang, Xinwei Zhong: Department of Magnetic Resonance, Guangdong Provincial Key Laboratory of Precision Medicine and Clinical Translational Research of Hakka Population, Meizhou People's Hospital, Meizhou, China.
- Dechun Zheng, Fangmin Shen: Department of Radiology, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou, Fujian Province, China.
- Jianpeng Li, Guohua Shao, Jiahao Wu: Department of Radiology, Affiliated Dongguan Hospital, Southern Medical University, Dongguan, China.
- Ying Sun: Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, Guangzhou, China.
- Huiyan Luo: Department of Medical Oncology, Sun Yat-Sen University Cancer Center, Guangzhou, China.
- Chaofeng Li: Department of Artificial Intelligence Laboratory, Sun Yat-Sen University Cancer Center, Guangzhou, China.
- Yaozong Gao: R&D Department, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China.
- Dinggang Shen: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China.
- Rong Zhang, PhD: Department of Radiology, 651 Dongfeng Road East, Yuexiu District, Guangzhou 510060, P.R. China.
- Chuanmiao Xie, PhD (corresponding author): Department of Radiology, 651 Dongfeng Road East, Yuexiu District, Guangzhou 510060, P.R. China.
11. Kumala Wardani B, Yueniwati Y, Naba A. The Application of Image Segmentation to Determine the Ratio of Peritumoral Edema Area to Tumor Area on Primary Malignant Brain Tumor and Metastases through Conventional Magnetic Resonance Imaging. Open Access Maced J Med Sci 2022. DOI: 10.3889/oamjms.2022.7777.
Abstract
BACKGROUND: Primary malignant brain tumors and brain metastases have a similar appearance on conventional Magnetic Resonance Imaging (MRI), even though the two require entirely different treatment and management. The pathophysiological difference in peritumoral edema can help to distinguish primary malignant brain tumors from brain metastases.
AIM: This study aimed to analyze the ratio of the area of peritumoral edema to the area of the tumor using Otsu's method of image segmentation with a user-friendly Graphical User Interface (GUI).
METHODS: Data were prepared by obtaining the anatomical pathology examination results and MRI images. The area of peritumoral edema was identified from MRI image segmentation of the T2/FLAIR sequence, while the area of the tumor was identified from MRI image segmentation of the T1 sequence.
RESULTS: The Mann-Whitney test was employed to compare the ratio of the area of peritumoral edema to the area of the tumor between the two groups. The test produced a significance level of 0.013 (p < 0.05), with a median value (Nmax-Nmin) of 1.14 (3.31-0.08) for the primary malignant brain tumor group and a median value (Nmax-Nmin) of 1.17 (10.30-0.90) for the brain metastases group.
CONCLUSIONS: There was a significant difference in the ratio of the area of peritumoral edema to the area of the tumor between the two groups, with brain metastases showing a greater value than primary malignant brain tumors.
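A minimal sketch of the analysis pipeline described above, assuming scikit-image's Otsu threshold for a crude area estimate and SciPy's Mann-Whitney U test; the synthetic slices stand in for real T2/FLAIR and T1 data and do not reproduce the study's numbers.

```python
import numpy as np
from skimage.filters import threshold_otsu
from scipy.stats import mannwhitneyu

def area_above_otsu(image):
    """Number of pixels above the Otsu threshold, used as a crude area estimate."""
    return int((image > threshold_otsu(image)).sum())

rng = np.random.default_rng(2)

def toy_slice(signal_fraction):
    """Synthetic slice: background noise plus a brighter region of the given size."""
    img = rng.normal(0.2, 0.05, (128, 128))
    k = int(128 * np.sqrt(signal_fraction))
    img[:k, :k] += 0.6
    return img

# Ratio of edema area (FLAIR-like slice) to tumor area (T1-like slice) for two toy groups.
primary = [area_above_otsu(toy_slice(0.10)) / area_above_otsu(toy_slice(0.08)) for _ in range(15)]
metastasis = [area_above_otsu(toy_slice(0.20)) / area_above_otsu(toy_slice(0.08)) for _ in range(15)]
stat, p = mannwhitneyu(primary, metastasis)
print(f"U = {stat:.1f}, p = {p:.4f}")
```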
12. Deep Learning Framework to Detect Ischemic Stroke Lesion in Brain MRI Slices of Flair/DW/T1 Modalities. Symmetry (Basel) 2021. DOI: 10.3390/sym13112080.
Abstract
Ischemic stroke lesion (ISL) is a brain abnormality. Studies have shown that early detection and treatment can reduce the disease impact. This research aimed to develop a deep learning (DL) framework to detect ISLs in multi-modality magnetic resonance image (MRI) slices. It proposed a convolutional neural network (CNN)-supported segmentation and classification scheme to execute a consistent disease detection framework. The developed framework consisted of the following phases: (i) ISL mining using a SegNet backed by the Visual Geometry Group's VGG16 scheme (VGG-SegNet), (ii) handcrafted feature extraction, (iii) deep feature extraction using the chosen DL scheme, (iv) feature ranking and serial feature concatenation, and (v) classification using binary classifiers. Fivefold cross-validation was employed in this work, and the best result was selected as the final result. The attained results were separately examined for (i) segmentation, (ii) deep-feature-based classification, and (iii) concatenated-feature-based classification. The experimental investigation is presented using the Ischemic Stroke Lesion Segmentation (ISLES2015) database. The attained results confirm that the proposed ISL detection framework gives better segmentation and classification results. The VGG16 scheme helped to obtain a better result with deep features (accuracy > 97%) and concatenated features (accuracy > 98%).
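To illustrate the serial feature concatenation and binary classification steps, the sketch below stacks two placeholder feature matrices and compares five-fold cross-validated accuracy with and without concatenation. The SVM classifier and random features are assumptions for illustration only, not the framework's actual features or classifiers.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n = 200
deep_features = rng.normal(size=(n, 64))         # stand-ins for CNN features
handcrafted = rng.normal(size=(n, 12))           # stand-ins for texture/shape features
y = rng.integers(0, 2, n)                        # lesion vs. normal labels

# Serial concatenation simply stacks the two feature vectors per sample.
combined = np.concatenate([deep_features, handcrafted], axis=1)

for name, X in [("deep only", deep_features), ("concatenated", combined)]:
    acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5, scoring="accuracy")
    print(f"{name:>13s}: {acc.mean():.3f} ± {acc.std():.3f}")
```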
13. Cho J, Kim YJ, Sunwoo L, Lee GP, Nguyen TQ, Cho SJ, Baik SH, Bae YJ, Choi BS, Jung C, Sohn CH, Han JH, Kim CY, Kim KG, Kim JH. Deep Learning-Based Computer-Aided Detection System for Automated Treatment Response Assessment of Brain Metastases on 3D MRI. Front Oncol 2021; 11:739639. PMID: 34778056. PMCID: PMC8579083. DOI: 10.3389/fonc.2021.739639.
Abstract
BACKGROUND Although accurate treatment response assessment for brain metastases (BMs) is crucial, it is highly labor intensive. This retrospective study aimed to develop a computer-aided detection (CAD) system for automated BM detection and treatment response evaluation using deep learning. METHODS We included 214 consecutive MRI examinations of 147 patients with BM obtained between January 2015 and August 2016. These were divided into the training (174 MR images from 127 patients) and test datasets according to temporal separation (temporal test set #1; 40 MR images from 20 patients). For external validation, 24 patients with BM and 11 patients without BM from other institutions were included (geographic test set). In addition, we included 12 MRIs from BM patients obtained between August 2017 and March 2020 (temporal test set #2). Detection sensitivity, dice similarity coefficient (DSC) for segmentation, and agreements in one-dimensional and volumetric Response Assessment in Neuro-Oncology Brain Metastases (RANO-BM) criteria between CAD and radiologists were assessed. RESULTS In the temporal test set #1, the sensitivity was 75.1% (95% confidence interval [CI]: 69.6%, 79.9%), mean DSC was 0.69 ± 0.22, and false-positive (FP) rate per scan was 0.8 for BM ≥ 5 mm. Agreements in the RANO-BM criteria were moderate (κ, 0.52) and substantial (κ, 0.68) for one-dimensional and volumetric, respectively. In the geographic test set, sensitivity was 87.7% (95% CI: 77.2%, 94.5%), mean DSC was 0.68 ± 0.20, and FP rate per scan was 1.9 for BM ≥ 5 mm. In the temporal test set #2, sensitivity was 94.7% (95% CI: 74.0%, 99.9%), mean DSC was 0.82 ± 0.20, and FP per scan was 0.5 (6/12) for BM ≥ 5 mm. CONCLUSIONS Our CAD showed potential for automated treatment response assessment of BM ≥ 5 mm.
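Agreement between CAD-derived and radiologist-derived RANO-BM categories, as reported above, is typically summarized with Cohen's kappa; a minimal scikit-learn sketch with invented response labels is shown below.

```python
from sklearn.metrics import cohen_kappa_score

# RANO-BM response categories for 12 hypothetical follow-up examinations.
categories = ["CR", "PR", "SD", "PD"]   # complete/partial response, stable, progressive
radiologist = ["PR", "PR", "SD", "PD", "SD", "CR", "PD", "SD", "PR", "SD", "PD", "CR"]
cad_system  = ["PR", "SD", "SD", "PD", "SD", "CR", "PD", "PR", "PR", "SD", "PD", "CR"]

kappa = cohen_kappa_score(radiologist, cad_system, labels=categories)
print(f"Cohen's kappa = {kappa:.2f}")   # values near 1 indicate strong agreement
```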
Affiliation(s)
- Jungheum Cho, Se Jin Cho, Sung Hyun Baik, Yun Jung Bae, Byung Se Choi, Cheolkyu Jung, Jae Hyoung Kim: Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea.
- Young Jae Kim, Gi Pyo Lee, Kwang Gi Kim: Department of Biomedical Engineering, Gachon University Gil Medical Center, Incheon, South Korea.
- Leonard Sunwoo: Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea; Center for Artificial Intelligence in Healthcare, Seoul National University Bundang Hospital, Seongnam, South Korea.
- Toan Quang Nguyen: Department of Radiology, Vietnam National Cancer Hospital, Hanoi, Vietnam.
- Chul-Ho Sohn: Department of Radiology, Seoul National University Hospital, Seoul, South Korea.
- Jung-Ho Han, Chae-Yong Kim: Department of Neurosurgery, Seoul National University Bundang Hospital, Seongnam, South Korea.
14. Williams S, Layard Horsfall H, Funnell JP, Hanrahan JG, Khan DZ, Muirhead W, Stoyanov D, Marcus HJ. Artificial Intelligence in Brain Tumour Surgery-An Emerging Paradigm. Cancers (Basel) 2021; 13:5010. PMID: 34638495. PMCID: PMC8508169. DOI: 10.3390/cancers13195010.
Abstract
Artificial intelligence (AI) platforms have the potential to cause a paradigm shift in brain tumour surgery. Brain tumour surgery augmented with AI can result in safer and more effective treatment. In this review article, we explore the current and future role of AI in patients undergoing brain tumour surgery, including aiding diagnosis, optimising the surgical plan, providing support during the operation, and better predicting the prognosis. Finally, we discuss barriers to successful clinical implementation and the associated ethical concerns, and provide our perspective on how the field could be advanced.
Affiliation(s)
- Simon Williams (corresponding author), Hugo Layard Horsfall, Jonathan P. Funnell, John G. Hanrahan, Danyal Z. Khan, William Muirhead, Hani J. Marcus: Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK; Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK.
- Danail Stoyanov: Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences (WEISS), London W1W 7TY, UK.
15. Khan AA, Ibad H, Ahmed KS, Hoodbhoy Z, Shamim SM. Deep learning applications in neuro-oncology. Surg Neurol Int 2021; 12:435. PMID: 34513198. PMCID: PMC8422419. DOI: 10.25259/sni_433_2021.
Abstract
Deep learning (DL) is a relatively newer subdomain of machine learning (ML) with incredible potential for certain applications in the medical field. Given recent advances in its use in neuro-oncology, its role in diagnosing, prognosticating, and managing the care of cancer patients has been the subject of many research studies. The gamut of studies has shown that the landscape of algorithmic methods is constantly improving with each iteration from its inception. With the increase in the availability of high-quality data, more training sets will allow for higher fidelity models. However, logistical and ethical concerns over a prospective trial comparing prognostic abilities of DL and physicians severely limit the ability of this technology to be widely adopted. One of the medical tenets is judgment, a facet of medical decision making in DL that is often missing because of its inherent nature as a "black box." A natural distrust for newer technology, combined with a lack of autonomy that is normally expected in our current medical practices, is just one of several important limitations in implementation. In our review, we will first define and outline the different types of artificial intelligence (AI) as well as the role of AI in the current advances of clinical medicine. We briefly highlight several of the salient studies using different methods of DL in the realm of neuroradiology and summarize the key findings and challenges faced when using this nascent technology, particularly ethical challenges that could be faced by users of DL.
Affiliation(s)
- Adnan A Khan, Hamza Ibad: Medical College, Aga Khan University, Karachi, Sindh, Pakistan.
- Zahra Hoodbhoy: Department of Pediatrics, Aga Khan University, Karachi, Sindh, Pakistan.
- Shahzad M Shamim: Department of Neurosurgery, Aga Khan University, Karachi, Sindh, Pakistan.
16. Hsu DG, Ballangrud Å, Shamseddine A, Deasy JO, Veeraraghavan H, Cervino L, Beal K, Aristophanous M. Automatic segmentation of brain metastases using T1 magnetic resonance and computed tomography images. Phys Med Biol 2021; 66. PMID: 34315148. DOI: 10.1088/1361-6560/ac1835.
Abstract
An increasing number of patients with multiple brain metastases are being treated with stereotactic radiosurgery (SRS). Manually identifying and contouring all metastatic lesions is difficult and time-consuming, and a potential source of variability. Hence, we developed a 3D deep learning approach for segmenting brain metastases on MR and CT images. Five-hundred eleven patients treated with SRS were retrospectively identified for this study. Prior to radiotherapy, the patients were imaged with 3D T1 spoiled-gradient MR post-Gd (T1 + C) and contrast-enhanced CT (CECT), which were co-registered by a treatment planner. The gross tumor volume contours, authored by the attending radiation oncologist, were taken as the ground truth. There were 3 ± 4 metastases per patient, with volume up to 57 ml. We produced a multi-stage model that automatically performs brain extraction, followed by detection and segmentation of brain metastases using co-registered T1 + C and CECT. Augmented data from 80% of these patients were used to train modified 3D V-Net convolutional neural networks for this task. We combined a normalized boundary loss function with soft Dice loss to improve the model optimization, and employed gradient accumulation to stabilize the training. The average Dice similarity coefficient (DSC) for brain extraction was 0.975 ± 0.002 (95% CI). The detection sensitivity per metastasis was 90% (329/367), with moderate dependence on metastasis size. Averaged across 102 test patients, our approach had metastasis detection sensitivity 95 ± 3%, 2.4 ± 0.5 false positives, DSC of 0.76 ± 0.03, and 95th-percentile Hausdorff distance of 2.5 ± 0.3 mm (95% CIs). The volumes of automatic and manual segmentations were strongly correlated for metastases of volume up to 20 ml (r=0.97,p<0.001). This work expounds a fully 3D deep learning approach capable of automatically detecting and segmenting brain metastases using co-registered T1 + C and CECT.
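A hedged PyTorch sketch of two training details mentioned above, a soft Dice loss term and gradient accumulation; for simplicity the normalized boundary loss is replaced here by plain binary cross-entropy, and a tiny Conv3d stands in for the modified V-Net.

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss on sigmoid probabilities; target is a binary mask."""
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum()
    return 1 - (2 * inter + eps) / (probs.sum() + target.sum() + eps)

model = torch.nn.Conv3d(1, 1, kernel_size=3, padding=1)   # stand-in segmentation network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
accumulation_steps = 4                                     # update weights every 4 mini-batches

optimizer.zero_grad()
for step in range(8):
    x = torch.randn(1, 1, 16, 32, 32)                      # one small MR/CT-like volume
    y = (torch.rand(1, 1, 16, 32, 32) > 0.9).float()       # sparse binary "lesion" mask
    logits = model(x)
    loss = soft_dice_loss(logits, y) + F.binary_cross_entropy_with_logits(logits, y)
    (loss / accumulation_steps).backward()                 # accumulate gradients
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```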
Affiliation(s)
- Dylan G Hsu, Åse Ballangrud, Joseph O Deasy, Harini Veeraraghavan, Laura Cervino, Michalis Aristophanous: Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States of America.
- Achraf Shamseddine, Kathryn Beal: Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, United States of America.
17
|
Koley S, Dutta PK, Aganj I. Radius-optimized efficient template matching for lesion detection from brain images. Sci Rep 2021; 11:11586. [PMID: 34078935 PMCID: PMC8172536 DOI: 10.1038/s41598-021-90147-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2020] [Accepted: 05/07/2021] [Indexed: 11/09/2022] Open
Abstract
Computer-aided detection of brain lesions from volumetric magnetic resonance imaging (MRI) is in demand for fast and automatic diagnosis of neural diseases. The template-matching technique can provide a satisfactory outcome for automatic localization of brain lesions; however, finding the optimal template size that maximizes similarity of the template and the lesion remains challenging. This increases the complexity of the algorithm and the demand for computational resources when processing large MRI volumes with three-dimensional (3D) templates. Hence, reducing the computational complexity of template matching is needed. In this paper, we first propose a mathematical framework for computing the normalized cross-correlation coefficient (NCCC) as the similarity measure between the MRI volume and an approximated 3D Gaussian template with linear time complexity, [Formula: see text], as opposed to the conventional fast Fourier transform (FFT) based approach with the complexity [Formula: see text], where [Formula: see text] is the number of voxels in the image and [Formula: see text] is the number of tried template radii. We then propose a mathematical formulation to analytically estimate the optimal template radius for each voxel in the image and compute the NCCC with the location-dependent optimal radius, reducing the complexity to [Formula: see text]. We test our methods on one synthetic and two real multiple-sclerosis databases, and compare their performance in lesion detection with FFT and a state-of-the-art lesion prediction algorithm. Our experiments demonstrate the efficiency of the proposed methods for brain lesion detection and their comparable performance with existing techniques.
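For context on the complexity comparison above, the conventional FFT-based normalized cross-correlation baseline that the authors compare against can be sketched roughly as below. This is a simplified illustration with an assumed Gaussian-template construction; it is not the paper's linear-time formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

def gaussian_template(radius):
    """Approximate 3D Gaussian template whose spread is tied to the trial radius."""
    size = int(4 * radius) | 1                      # odd cube side length
    t = np.zeros((size, size, size))
    t[size // 2, size // 2, size // 2] = 1.0
    return gaussian_filter(t, sigma=radius)

def nccc_map(volume, radius):
    """FFT-based normalized cross-correlation of a volume with a Gaussian template."""
    t = gaussian_template(radius)
    t -= t.mean()                                   # zero-mean template
    n = t.size
    ones = np.ones(t.shape)
    local_sum = fftconvolve(volume, ones, mode="same")
    local_sq = fftconvolve(volume ** 2, ones, mode="same")
    local_var = np.maximum(local_sq - local_sum ** 2 / n, 1e-12)
    corr = fftconvolve(volume, t[::-1, ::-1, ::-1], mode="same")
    return corr / np.sqrt(local_var * (t ** 2).sum())
```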
Collapse
Affiliation(s)
- Subhranil Koley
- School of Medical Science and Technology, Indian Institute of Technology Kharagpur, Kharagpur, WB, 721302, India.
| | - Pranab K Dutta
- Electrical Engineering Department, Indian Institute of Technology Kharagpur, Kharagpur, WB, 721302, India
| | - Iman Aganj
- Athinoula A. Martinos Center for Biomedical Imaging, Radiology Department, Massachusetts General Hospital, Harvard Medical School, 149 13th St., Suite 2301, Charlestown, MA, 02129, USA.,Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St., Cambridge, MA, 02139, USA
| |
Collapse
|
18
|
Contrast-Enhanced Black Blood MRI Sequence Is Superior to Conventional T1 Sequence in Automated Detection of Brain Metastases by Convolutional Neural Networks. Diagnostics (Basel) 2021; 11:diagnostics11061016. [PMID: 34206103 PMCID: PMC8230135 DOI: 10.3390/diagnostics11061016] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2021] [Revised: 05/18/2021] [Accepted: 05/28/2021] [Indexed: 12/11/2022] Open
Abstract
Background: In magnetic resonance imaging (MRI), automated detection of brain metastases with convolutional neural networks (CNN) represents an extraordinary challenge due to small lesions sometimes posing as brain vessels as well as other confounders. Literature reporting high false positive rates when using conventional contrast enhanced (CE) T1 sequences questions their usefulness in clinical routine. CE black blood (BB) sequences may overcome these limitations by suppressing contrast-enhanced structures, thus facilitating lesion detection. This study compared CNN performance in conventional CE T1 and BB sequences and tested for objective improvement of brain lesion detection. Methods: We included a subgroup of 127 consecutive patients, receiving both CE T1 and BB sequences, referred for MRI concerning metastatic spread to the brain. A pretrained CNN was retrained with a customized monolayer classifier using either T1 or BB scans of brain lesions. Results: CE T1 imaging-based training resulted in an internal validation accuracy of 85.5% vs. 92.3% in BB imaging (p < 0.01). In holdout validation analysis, T1 image-based prediction presented poor specificity and sensitivity with an AUC of 0.53 compared to 0.87 in BB-imaging-based prediction. Conclusions: For CNN-based detection of brain lesions, BB imaging represents a highly effective input type compared with conventional CE T1 imaging. Use of BB MRI can overcome the current limitations of automated brain lesion detection, and the objectively excellent performance of our CNN suggests routine use of BB sequences for radiological analysis.
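The retraining strategy mentioned above, a pretrained CNN with a customized single-layer classifier, corresponds to standard transfer learning. The sketch below uses an assumed torchvision ResNet-18 backbone and a binary lesion/no-lesion head purely for illustration; it is not the authors' exact network.

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumed setup: frozen ImageNet-pretrained ResNet-18 backbone; only the new
# single linear layer ("monolayer classifier") is trained on lesion patches
# taken from either CE T1 or BB images (grayscale patches would be replicated
# to three channels before being fed to the backbone).
backbone = models.resnet18(weights="DEFAULT")
for p in backbone.parameters():
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # lesion vs. no lesion

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```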
Collapse
|
19
|
Deike-Hofmann K, Dancs D, Paech D, Schlemmer HP, Maier-Hein K, Bäumer P, Radbruch A, Götz M. Pre-examinations Improve Automated Metastases Detection on Cranial MRI. Invest Radiol 2021; 56:320-327. [PMID: 33259442 DOI: 10.1097/rli.0000000000000745] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
MATERIALS AND METHODS Our local ethics committee approved this retrospective monocenter study. First, a dual-time approach was assessed, for which the CNN was provided sequences of the MRI that initially depicted new MM (diagnosis MRI) as well as of a prediagnosis MRI: inclusion of only contrast-enhanced T1-weighted images (CNNdual_ce) was compared with inclusion of also the native T1-weighted images, T2-weighted images, and FLAIR sequences of both time points (CNNdual_all). Second, results were compared with the corresponding single time approaches, in which the CNN was provided exclusively the respective sequences of the diagnosis MRI. Casewise diagnostic performance parameters were calculated from 5-fold cross-validation. RESULTS In total, 94 cases with 494 MMs were included. Overall, the highest diagnostic performance was achieved by inclusion of only the contrast-enhanced T1-weighted images of the diagnosis and of a prediagnosis MRI (CNNdual_ce, sensitivity = 73%, PPV = 25%, F1-score = 36%). Using exclusively contrast-enhanced T1-weighted images as input resulted in significantly fewer false positives (FPs) compared with inclusion of further sequences beyond contrast-enhanced T1-weighted images (FPs = 5/7 for CNNdual_ce/CNNdual_all, P < 1e-5). Comparison of contrast-enhanced dual and mono time approaches revealed that exclusion of prediagnosis MRI significantly increased FPs (FPs = 5/10 for CNNdual_ce/CNNce, P < 1e-9). Approaches with only native sequences were clearly inferior to CNNs that were provided contrast-enhanced sequences. CONCLUSIONS Automated MM detection on contrast-enhanced T1-weighted images performed with high sensitivity. Frequent FPs due to artifacts and vessels were significantly reduced by additional inclusion of prediagnosis MRI, but not by inclusion of further sequences beyond contrast-enhanced T1-weighted images. Future studies might investigate different change detection architectures for computer-aided detection.
Collapse
Affiliation(s)
| | - Dorottya Dancs
- From the Department of Radiology, German Cancer Research Center, Heidelberg
| | - Daniel Paech
- From the Department of Radiology, German Cancer Research Center, Heidelberg
| | | | - Klaus Maier-Hein
- Department for Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
| | - Philipp Bäumer
- From the Department of Radiology, German Cancer Research Center, Heidelberg
| | | | - Michael Götz
- Department for Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
| |
Collapse
|
20
|
Rudie JD, Weiss DA, Colby JB, Rauschecker AM, Laguna B, Braunstein S, Sugrue LP, Hess CP, Villanueva-Meyer JE. Three-dimensional U-Net Convolutional Neural Network for Detection and Segmentation of Intracranial Metastases. Radiol Artif Intell 2021; 3:e200204. [PMID: 34136817 PMCID: PMC8204134 DOI: 10.1148/ryai.2021200204] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2020] [Revised: 02/05/2021] [Accepted: 02/19/2021] [Indexed: 05/05/2023]
Abstract
PURPOSE To develop and validate a neural network for automated detection and segmentation of intracranial metastases on brain MRI studies obtained for stereotactic radiosurgery treatment planning. MATERIALS AND METHODS In this retrospective study, 413 patients (average age, 61 years ± 12 [standard deviation]; 238 women) with a total of 5202 intracranial metastases (median volume, 0.05 cm3; interquartile range, 0.02-0.18 cm3) undergoing stereotactic radiosurgery at one institution were included (January 2017 to February 2020). A total of 563 MRI examinations were performed among the patients, and studies were split into training (n = 413), validation (n = 50), and test (n = 100) datasets. A three-dimensional (3D) U-Net convolutional network was trained and validated on 413 T1 postcontrast or subtraction scans, and several loss functions were evaluated. After model validation, 100 discrete test patients, who underwent imaging after the training and validation patients, were used for final model evaluation. Performance for detection and segmentation of metastases was evaluated using Dice scores, false discovery rates, and false-negative rates, and a comparison with neuroradiologist interrater reliability was performed. RESULTS The median Dice score for segmenting enhancing metastases in the test set was 0.75 (interquartile range, 0.63-0.84). There were strong correlations between manually segmented and predicted metastasis volumes (r = 0.98, P < .001) and between the number of manually segmented and predicted metastases (R = 0.95, P < .001). Higher Dice scores were strongly correlated with larger metastasis volumes on a logarithmically transformed scale (r = 0.71). Sensitivity across the whole test sample was 70.0% overall and 96.4% for metastases larger than 6 mm. There was an average of 0.46 false-positive results per scan, with the positive predictive value being 91.5%. In comparison, the median Dice score between two neuroradiologists was 0.85 (interquartile range, 0.80-0.89), with sensitivity across the test sample being 87.9% overall and 98.4% for metastases larger than 6 mm. CONCLUSION A 3D U-Net-based convolutional neural network was able to segment brain metastases with high accuracy and perform detection at the level of human interrater reliability for metastases larger than 6 mm. Keywords: Adults, Brain/Brain Stem, CNS, Feature detection, MR-Imaging, Neural Networks, Neuro-Oncology, Quantification, Segmentation. © RSNA, 2021.
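Per-lesion sensitivity and false-positive counts of the kind reported above are typically computed by matching connected components between the predicted and reference masks. The following sketch shows one common way to do this; the single-voxel overlap criterion is an assumption, not the study's definition.

```python
import numpy as np
from scipy import ndimage

def lesion_detection_stats(pred_mask, gt_mask, min_overlap=1):
    """Per-lesion sensitivity and false-positive count from binary 3D masks."""
    gt_labels, n_gt = ndimage.label(gt_mask)
    pred_labels, n_pred = ndimage.label(pred_mask)

    # A reference lesion counts as detected if enough predicted voxels overlap it.
    detected = sum(
        1 for i in range(1, n_gt + 1)
        if np.count_nonzero(pred_mask[gt_labels == i]) >= min_overlap
    )
    # A predicted component with no reference overlap is a false positive.
    false_pos = sum(
        1 for j in range(1, n_pred + 1)
        if np.count_nonzero(gt_mask[pred_labels == j]) == 0
    )
    sensitivity = detected / n_gt if n_gt else float("nan")
    return sensitivity, false_pos
```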
Collapse
Affiliation(s)
- Jeffrey D. Rudie
- From the Department of Radiology and Biomedical Imaging, University of California, San Francisco, 513 Parnassus Ave, San Francisco, CA 94143
| | - David A. Weiss
- From the Department of Radiology and Biomedical Imaging, University of California, San Francisco, 513 Parnassus Ave, San Francisco, CA 94143
| | - John B. Colby
- From the Department of Radiology and Biomedical Imaging, University of California, San Francisco, 513 Parnassus Ave, San Francisco, CA 94143
| | - Andreas M. Rauschecker
- From the Department of Radiology and Biomedical Imaging, University of California, San Francisco, 513 Parnassus Ave, San Francisco, CA 94143
| | - Benjamin Laguna
- From the Department of Radiology and Biomedical Imaging, University of California, San Francisco, 513 Parnassus Ave, San Francisco, CA 94143
| | - Steve Braunstein
- From the Department of Radiology and Biomedical Imaging, University of California, San Francisco, 513 Parnassus Ave, San Francisco, CA 94143
| | - Leo P. Sugrue
- From the Department of Radiology and Biomedical Imaging, University of California, San Francisco, 513 Parnassus Ave, San Francisco, CA 94143
| | - Christopher P. Hess
- From the Department of Radiology and Biomedical Imaging, University of California, San Francisco, 513 Parnassus Ave, San Francisco, CA 94143
| | - Javier E. Villanueva-Meyer
- From the Department of Radiology and Biomedical Imaging, University of California, San Francisco, 513 Parnassus Ave, San Francisco, CA 94143
| |
Collapse
|
21
|
Xue J, Wang B, Ming Y, Liu X, Jiang Z, Wang C, Liu X, Chen L, Qu J, Xu S, Tang X, Mao Y, Liu Y, Li D. Deep learning-based detection and segmentation-assisted management of brain metastases. Neuro Oncol 2021; 22:505-514. [PMID: 31867599 DOI: 10.1093/neuonc/noz234] [Citation(s) in RCA: 71] [Impact Index Per Article: 23.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023] Open
Abstract
BACKGROUND Three-dimensional T1 magnetization prepared rapid acquisition gradient echo (3D-T1-MPRAGE) is the preferred MRI sequence for detecting brain metastases (BM). We developed an automatic deep learning-based detection and segmentation method for BM (named BMDS net) on 3D-T1-MPRAGE images and evaluated its performance. METHODS The BMDS net is a cascaded 3D fully convolutional network (FCN) that automatically detects and segments BM. In total, 1652 patients with 3D-T1-MPRAGE images from 3 hospitals (n = 1201, 231, and 220, respectively) were retrospectively included. Manual segmentations were obtained by a neuroradiologist and a radiation oncologist in a consensus reading of 3D-T1-MPRAGE images. Sensitivity, specificity, and Dice ratio of the segmentation were evaluated. Specificity and sensitivity measure the fractions of relevant segmented voxels. The Dice ratio was used to quantitatively measure the overlap between automatic and manual segmentation results. Paired samples t-tests and analysis of variance were employed for statistical analysis. RESULTS The BMDS net detected all BM, providing a detection accuracy of 100%. Automatic segmentations correlated strongly with manual segmentations through 4-fold cross-validation of the dataset with 1201 patients: the sensitivity was 0.96 ± 0.03 (range, 0.84-0.99), the specificity was 0.99 ± 0.0002 (range, 0.99-1.00), and the Dice ratio was 0.85 ± 0.08 (range, 0.62-0.95) for total tumor volume. Similar performances on the other 2 datasets also demonstrate the robustness of the BMDS net in correctly detecting and segmenting BM in various settings. CONCLUSIONS The BMDS net yields accurate detection and segmentation of BM automatically and could assist stereotactic radiotherapy management for diagnosis, therapy planning, and follow-up.
Collapse
Affiliation(s)
- Jie Xue
- School of Business, Shandong Normal University, Jinan, China
| | - Bao Wang
- Department of Radiology, Qilu Hospital of Shandong University, Jinan, China
| | - Yang Ming
- Department of Neurosurgery, The Affiliated Hospital of Southwest Medical University, Luzhou, China
| | - Xuejun Liu
- School of Business, Shandong Normal University, Jinan, China
| | - Zekun Jiang
- Shandong Key Laboratory of Medical Physics and Image Processing, School of Physics and Electronics, Shandong Normal University, Jinan, China
| | - Chengwei Wang
- Department of Neurosurgery, the Second Hospital of Shandong University, Jinan, China
| | - Xiyu Liu
- Department of Radiology, the Affiliated Hospital of Qingdao University Medical College, Qingdao, China
| | - Ligang Chen
- Department of Neurosurgery, The Affiliated Hospital of Southwest Medical University, Luzhou, China
| | - Jianhua Qu
- School of Business, Shandong Normal University, Jinan, China
| | - Shangchen Xu
- Department of Neurosurgery, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China.,Department of Neurosurgery, Shandong Provincial Hospital Affiliated to Shandong University, Jinan, China
| | - Xuqun Tang
- Department of Neurosurgery, Huashan Hospital Affiliated to Fudan University, Shanghai, China
| | - Ying Mao
- Department of Neurosurgery, Huashan Hospital Affiliated to Fudan University, Shanghai, China
| | - Yingchao Liu
- Department of Neurosurgery, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China.,Department of Neurosurgery, Shandong Provincial Hospital Affiliated to Shandong University, Jinan, China
| | - Dengwang Li
- Shandong Key Laboratory of Medical Physics and Image Processing, School of Physics and Electronics, Shandong Normal University, Jinan, China
| |
Collapse
|
22
|
Cho SJ, Sunwoo L, Baik SH, Bae YJ, Choi BS, Kim JH. Brain metastasis detection using machine learning: a systematic review and meta-analysis. Neuro Oncol 2021; 23:214-225. [PMID: 33075135 PMCID: PMC7906058 DOI: 10.1093/neuonc/noaa232] [Citation(s) in RCA: 61] [Impact Index Per Article: 20.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022] Open
Abstract
BACKGROUND Accurate detection of brain metastasis (BM) is important for cancer patients. We aimed to systematically review the performance and quality of machine-learning-based BM detection on MRI in the relevant literature. METHODS A systematic literature search was performed for relevant studies reported before April 27, 2020. We assessed the quality of the studies using modified tailored questionnaires of the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) criteria and the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). Pooled detectability was calculated using an inverse-variance weighting model. RESULTS A total of 12 studies were included, which showed a clear transition from classical machine learning (cML) to deep learning (DL) after 2018. The studies on DL used a larger sample size than those on cML. The cML and DL groups also differed in the composition of the dataset, and technical details such as data augmentation. The pooled proportions of detectability of BM were 88.7% (95% CI, 84-93%) and 90.1% (95% CI, 84-95%) in the cML and DL groups, respectively. The false-positive rate per person was lower in the DL group than the cML group (10 vs 135, P < 0.001). In the patient selection domain of QUADAS-2, three studies (25%) were designated as high risk due to non-consecutive enrollment and arbitrary exclusion of nodules. CONCLUSION A comparable detectability of BM with a low false-positive rate per person was found in the DL group compared with the cML group. Improvements are required in terms of quality and study design.
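The inverse-variance weighting used above to pool detectability across studies can be illustrated with a short fixed-effect sketch. Published meta-analyses usually pool on a transformed scale (logit or arcsine) and may use random-effects models, so the raw-proportion version below is only a simplified approximation with made-up example counts.

```python
import numpy as np

def pooled_proportion(events, totals):
    """Fixed-effect inverse-variance pooling of raw proportions (simplified sketch)."""
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    p = events / totals
    var = p * (1.0 - p) / totals          # binomial variance of each study's proportion
    w = 1.0 / var                         # inverse-variance weights
    pooled = np.sum(w * p) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Example with illustrative study counts (detected metastases / total metastases):
# pooled_proportion([88, 45, 120], [100, 50, 130])
```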
Collapse
Affiliation(s)
- Se Jin Cho
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Gyeonggi, Republic of Korea
| | - Leonard Sunwoo
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Gyeonggi, Republic of Korea
| | - Sung Hyun Baik
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Gyeonggi, Republic of Korea
| | - Yun Jung Bae
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Gyeonggi, Republic of Korea
| | - Byung Se Choi
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Gyeonggi, Republic of Korea
| | - Jae Hyoung Kim
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Gyeonggi, Republic of Korea
| |
Collapse
|
23
|
Dikici E, Ryu JL, Demirer M, Bigelow M, White RD, Slone W, Erdal BS, Prevedello LM. Automated Brain Metastases Detection Framework for T1-Weighted Contrast-Enhanced 3D MRI. IEEE J Biomed Health Inform 2020; 24:2883-2893. [DOI: 10.1109/jbhi.2020.2982103] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
24
|
Park JE, Kickingereder P, Kim HS. Radiomics and Deep Learning from Research to Clinical Workflow: Neuro-Oncologic Imaging. Korean J Radiol 2020; 21:1126-1137. [PMID: 32729271 PMCID: PMC7458866 DOI: 10.3348/kjr.2019.0847] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2019] [Revised: 03/03/2020] [Accepted: 03/29/2020] [Indexed: 12/29/2022] Open
Abstract
Imaging plays a key role in the management of brain tumors, including the diagnosis, prognosis, and treatment response assessment. Radiomics and deep learning approaches, along with various advanced physiologic imaging parameters, hold great potential for aiding radiological assessments in neuro-oncology. The ongoing development of new technology needs to be validated in clinical trials and incorporated into the clinical workflow. However, none of the potential neuro-oncological applications for radiomics and deep learning has yet been realized in clinical practice. In this review, we summarize the current applications of radiomics and deep learning in neuro-oncology and discuss challenges in relation to evidence-based medicine and reporting guidelines, as well as potential applications in clinical workflows and routine clinical practice.
Collapse
Affiliation(s)
- Ji Eun Park
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
| | - Philipp Kickingereder
- Department of Neuroradiology, University of Heidelberg, Im Neuenheimer Feld, Heidelberg, Germany
| | - Ho Sung Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea.
| |
Collapse
|
25
|
Liu Y. Application of artificial intelligence in clinical non-small cell lung cancer. Artif Intell Cancer 2020; 1:19-30. [DOI: 10.35713/aic.v1.i1.19] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/14/2020] [Revised: 06/17/2020] [Accepted: 06/18/2020] [Indexed: 02/06/2023] Open
Abstract
Lung cancer is the most common cause of cancer death in the world. Early diagnosis, screening and precise individualized treatment can significantly reduce the death rate of lung cancer. Artificial intelligence (AI) has been shown to be able to help clinicians make more accurate judgments and decisions in many ways. It has been applied to the screening of lung cancer, the assessment of whether pulmonary nodules are benign or malignant, the classification of histological cancer, the differentiation of histological subtypes, the identification of genomics, the assessment of treatment effectiveness, and even prognosis. AI has shown that it can be an excellent assistant for clinicians. This paper reviews the application of AI in the field of non-small cell lung cancer and describes the relevant progress. Although most studies evaluating the clinical application of AI in non-small cell lung cancer have not been repeatable and generalizable, their results highlight the efforts to promote the clinical application of AI technology and influence the future direction of treatment.
Collapse
Affiliation(s)
- Yong Liu
- Department of Thoracic Surgery, The Central Hospital of Wuhan, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430011, Hubei Province, China
| |
Collapse
|
26
|
Park JE, Kim HS. [Current Applications and Future Perspectives of Brain Tumor Imaging]. TAEHAN YONGSANG UIHAKHOE CHI 2020; 81:467-487. [PMID: 36238631 PMCID: PMC9431910 DOI: 10.3348/jksr.2020.81.3.467] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/14/2020] [Revised: 05/04/2020] [Accepted: 05/07/2020] [Indexed: 11/29/2022]
Abstract
The fundamental imaging technique for the diagnosis of brain tumors and the assessment of treatment response is anatomical imaging. Among the imaging techniques currently available in clinical practice, diffusion-weighted imaging and perfusion imaging provide additional information. More recently, as the assessment of tumor genomic alterations and heterogeneity has become increasingly important, the clinical application of image analysis techniques based on radiomics and deep learning is anticipated. This review describes recommendations for MRI acquisition centered on anatomical imaging, which remains essential in the clinical application of brain tumor imaging; the basic principles, pathophysiological background, and clinical applications of diffusion-weighted and perfusion imaging among advanced imaging techniques; and, finally, the future value in brain tumors of radiomics and deep learning, which are being actively investigated owing to recent advances in computer technology.
Collapse
|
27
|
Radiomics in gliomas: clinical implications of computational modeling and fractal-based analysis. Neuroradiology 2020; 62:771-790. [DOI: 10.1007/s00234-020-02403-1] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2020] [Accepted: 03/10/2020] [Indexed: 12/14/2022]
|
28
|
Zhang M, Young GS, Chen H, Li J, Qin L, McFaline-Figueroa JR, Reardon DA, Cao X, Wu X, Xu X. Deep-Learning Detection of Cancer Metastases to the Brain on MRI. J Magn Reson Imaging 2020; 52:1227-1236. [PMID: 32167652 DOI: 10.1002/jmri.27129] [Citation(s) in RCA: 60] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2019] [Revised: 02/27/2020] [Accepted: 02/27/2020] [Indexed: 12/14/2022] Open
Abstract
BACKGROUND Approximately one-fourth of all cancer metastases are found in the brain. MRI is the primary technique for detection of brain metastasis, planning of radiotherapy, and the monitoring of treatment response. Progress in tumor treatment now requires detection of new or growing metastases at the small subcentimeter size, when these therapies are most effective. PURPOSE To develop a deep-learning-based approach for finding brain metastasis on MRI. STUDY TYPE Retrospective. SEQUENCE Axial postcontrast 3D T1-weighted imaging. FIELD STRENGTH 1.5T and 3T. POPULATION A total of 361 scans of 121 patients were used to train and test the Faster region-based convolutional neural network (Faster R-CNN): 1565 lesions in 270 scans of 73 patients for training; 488 lesions in 91 scans of 48 patients for testing. From the 48 outputs of Faster R-CNN, 212 lesions in 46 scans of 18 patients were used for training the RUSBoost algorithm (MATLAB) and 276 lesions in 45 scans of 30 patients for testing. ASSESSMENT Two radiologists diagnosed and supervised annotation of metastases on brain MRI as ground truth. These data were used to produce a 2-step pipeline consisting of a Faster R-CNN for detecting abnormal hyperintensity that may represent brain metastasis and a RUSBoost classifier to reduce the number of false-positive foci detected. STATISTICAL TESTS The performance of the algorithm was evaluated by using sensitivity, false-positive rate, and receiver operating characteristic (ROC) curves. The detection performance was assessed both per-metastasis and per-slice. RESULTS Testing on held-out brain MRI data demonstrated 96% sensitivity and 20 false-positive metastases per scan. The results showed an 87.1% sensitivity and 0.24 false-positive metastases per slice. The area under the ROC curve was 0.79. CONCLUSION Our results showed that deep-learning-based computer-aided detection (CAD) had the potential of detecting brain metastases with high sensitivity and reasonable specificity. LEVEL OF EVIDENCE 3 TECHNICAL EFFICACY STAGE: 2 J. Magn. Reson. Imaging 2020;52:1227-1236.
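The second stage of the pipeline above, a boosted classifier that prunes false-positive candidates produced by the detector, can be approximated in Python with imbalanced-learn's RUSBoostClassifier. The feature set and placeholder data below are assumptions for illustration; the study itself used a MATLAB RUSBoost implementation on its own candidate features.

```python
import numpy as np
from imblearn.ensemble import RUSBoostClassifier

# X: hand-crafted features of candidate detections from the first-stage detector
# (e.g. box size, mean intensity, detector score); y: 1 = true metastasis, 0 = false positive.
rng = np.random.default_rng(0)
X = rng.random((500, 5))                       # placeholder features
y = (rng.random(500) > 0.8).astype(int)        # placeholder, imbalanced labels

clf = RUSBoostClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
keep = clf.predict(X) == 1                     # candidates retained after FP reduction
```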
Collapse
Affiliation(s)
- Min Zhang
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | - Geoffrey S Young
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | - Huai Chen
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.,Department of Radiology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, Guangdong, China
| | - Jing Li
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.,Department of Radiology, The Affiliated Hospital of Zhengzhou University (Henan Cancer Hospital), Zhengzhou, Henan, China
| | - Lei Qin
- Department of Radiology, Dana Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
| | | | - David A Reardon
- Department of Radiology, Dana Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
| | - Xinhua Cao
- Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
| | - Xian Wu
- Department of Computer Science and Technology, Tsing-hua University, Beijing, China
| | - Xiaoyin Xu
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| |
Collapse
|
29
|
Rundo L, Tangherloni A, Cazzaniga P, Nobile MS, Russo G, Gilardi MC, Vitabile S, Mauri G, Besozzi D, Militello C. A novel framework for MR image segmentation and quantification by using MedGA. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2019; 176:159-172. [PMID: 31200903 DOI: 10.1016/j.cmpb.2019.04.016] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/06/2018] [Revised: 04/14/2019] [Accepted: 04/16/2019] [Indexed: 06/09/2023]
Abstract
BACKGROUND AND OBJECTIVES Image segmentation represents one of the most challenging issues in medical image analysis to distinguish among different adjacent tissues in a body part. In this context, appropriate image pre-processing tools can improve the result accuracy achieved by computer-assisted segmentation methods. Taking into consideration images with a bimodal intensity distribution, image binarization can be used to classify the input pictorial data into two classes, given a threshold intensity value. Unfortunately, adaptive thresholding techniques for two-class segmentation work properly only for images characterized by bimodal histograms. We aim at overcoming these limitations and automatically determining a suitable optimal threshold for bimodal Magnetic Resonance (MR) images, by designing an intelligent image analysis framework tailored to effectively assist the physicians during their decision-making tasks. METHODS In this work, we present a novel evolutionary framework for image enhancement, automatic global thresholding, and segmentation, which is here applied to different clinical scenarios involving bimodal MR image analysis: (i) uterine fibroid segmentation in MR guided Focused Ultrasound Surgery, and (ii) brain metastatic cancer segmentation in neuro-radiosurgery therapy. Our framework exploits MedGA as a pre-processing stage. MedGA is an image enhancement method based on Genetic Algorithms that improves the threshold selection, obtained by the efficient Iterative Optimal Threshold Selection algorithm, between the underlying sub-distributions in a nearly bimodal histogram. RESULTS The results achieved by the proposed evolutionary framework were quantitatively evaluated, showing that the use of MedGA as a pre-processing stage outperforms the conventional image enhancement methods (i.e., histogram equalization, bi-histogram equalization, Gamma transformation, and sigmoid transformation), in terms of both MR image enhancement and segmentation evaluation metrics. CONCLUSIONS Thanks to this framework, MR image segmentation accuracy is considerably increased, allowing for measurement repeatability in clinical workflows. The proposed computational solution could be well-suited for other clinical contexts requiring MR image analysis and segmentation, aiming at providing useful insights for differential diagnosis and prognosis.
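The Iterative Optimal Threshold Selection step that MedGA is designed to improve can be written in a few lines. The sketch below implements the classic iterative mean-of-means scheme on a roughly bimodal image; it does not include the genetic-algorithm enhancement itself, and the tolerance and iteration cap are illustrative choices.

```python
import numpy as np

def iterative_optimal_threshold(image, tol=0.5, max_iter=100):
    """Iterative Optimal Threshold Selection for a roughly bimodal image:
    repeatedly set the threshold to the mean of the two class means."""
    t = image.mean()
    for _ in range(max_iter):
        lower = image[image <= t]
        upper = image[image > t]
        if lower.size == 0 or upper.size == 0:
            break
        t_new = 0.5 * (lower.mean() + upper.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t
```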
Collapse
Affiliation(s)
- Leonardo Rundo
- Department of Informatics, Systems and Communication, University of Milano-Bicocca, Milan, Italy; Institute of Molecular Bioimaging and Physiology, Italian National Research Council, Cefalù, PA, Italy; Department of Radiology, University of Cambridge, Cambridge, UK; Cancer Research UK Cambridge Centre, Cambridge, UK.
| | - Andrea Tangherloni
- Department of Informatics, Systems and Communication, University of Milano-Bicocca, Milan, Italy; Department of Haematology, University of Cambridge, Cambridge, UK; Wellcome Trust Sanger Institute, Wellcome Trust Genome Campus, Hinxton, UK.
| | - Paolo Cazzaniga
- Department of Human and Social Sciences, University of Bergamo, Bergamo, Italy; SYSBIO.IT Centre of Systems Biology, Milan, Italy.
| | - Marco S Nobile
- Department of Informatics, Systems and Communication, University of Milano-Bicocca, Milan, Italy; SYSBIO.IT Centre of Systems Biology, Milan, Italy.
| | - Giorgio Russo
- Institute of Molecular Bioimaging and Physiology, Italian National Research Council, Cefalù, PA, Italy.
| | - Maria Carla Gilardi
- Institute of Molecular Bioimaging and Physiology, Italian National Research Council, Cefalù, PA, Italy.
| | - Salvatore Vitabile
- Department of Biomedicine, Neuroscience and Advanced Diagnostics, University of Palermo, Palermo, Italy.
| | - Giancarlo Mauri
- Department of Informatics, Systems and Communication, University of Milano-Bicocca, Milan, Italy; SYSBIO.IT Centre of Systems Biology, Milan, Italy.
| | - Daniela Besozzi
- Department of Informatics, Systems and Communication, University of Milano-Bicocca, Milan, Italy.
| | - Carmelo Militello
- Institute of Molecular Bioimaging and Physiology, Italian National Research Council, Cefalù, PA, Italy.
| |
Collapse
|
30
|
Liu Z, Wang S, Dong D, Wei J, Fang C, Zhou X, Sun K, Li L, Li B, Wang M, Tian J. The Applications of Radiomics in Precision Diagnosis and Treatment of Oncology: Opportunities and Challenges. Theranostics 2019; 9:1303-1322. [PMID: 30867832 PMCID: PMC6401507 DOI: 10.7150/thno.30309] [Citation(s) in RCA: 512] [Impact Index Per Article: 102.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2018] [Accepted: 01/10/2019] [Indexed: 12/14/2022] Open
Abstract
Medical imaging can assess the tumor and its environment in their entirety, which makes it suitable for monitoring the temporal and spatial characteristics of the tumor. Progress in computational methods, especially in artificial intelligence for medical image processing and analysis, has converted these images into quantitative and minable data associated with clinical events in oncology management. This concept was first described as radiomics in 2012. Since then, computer scientists, radiologists, and oncologists have gravitated towards this new tool and exploited advanced methodologies to mine the information behind medical images. On the basis of a great quantity of radiographic images and novel computational technologies, researchers developed and validated radiomic models that may improve the accuracy of diagnoses and therapy response assessments. Here, we review the recent methodological developments in radiomics, including data acquisition, tumor segmentation, feature extraction, and modelling, as well as the rapidly developing deep learning technology. Moreover, we outline the main applications of radiomics in diagnosis, treatment planning and evaluations in the field of oncology with the aim of developing quantitative and personalized medicine. Finally, we discuss the challenges in the field of radiomics and the scope and clinical applicability of these methods.
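As a concrete illustration of the feature-extraction step in a radiomics pipeline such as the one reviewed here, the snippet below computes a small, assumed subset of first-order features from a segmented tumor ROI. Production pipelines typically rely on dedicated packages (e.g. PyRadiomics) and many more feature classes.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def first_order_features(image, mask, bins=64):
    """A few first-order radiomic features from a segmented ROI (illustrative subset)."""
    roi = image[mask > 0].astype(float)
    counts, _ = np.histogram(roi, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                                  # discard empty bins for the entropy
    return {
        "mean": roi.mean(),
        "std": roi.std(),
        "skewness": float(skew(roi)),
        "kurtosis": float(kurtosis(roi)),
        "energy": float(np.sum(roi ** 2)),
        "entropy": float(-np.sum(p * np.log2(p))),
    }
```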
Collapse
Affiliation(s)
- Zhenyu Liu
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
- University of Chinese Academy of Sciences, Beijing, 100080, China
| | - Shuo Wang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
- University of Chinese Academy of Sciences, Beijing, 100080, China
| | - Di Dong
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
- University of Chinese Academy of Sciences, Beijing, 100080, China
| | - Jingwei Wei
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
- University of Chinese Academy of Sciences, Beijing, 100080, China
| | - Cheng Fang
- Department of Hepatobiliary Surgery, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
| | - Xuezhi Zhou
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, Shaanxi, 710126, China
| | - Kai Sun
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, Shaanxi, 710126, China
| | - Longfei Li
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
- Collaborative Innovation Center for Internet Healthcare, Zhengzhou University, Zhengzhou, Henan, 450052, China
| | - Bo Li
- Department of Hepatobiliary Surgery, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, 646000, China
| | - Meiyun Wang
- Department of Radiology, Henan Provincial People's Hospital & the People's Hospital of Zhengzhou University, Zhengzhou, Henan, 450003, China
| | - Jie Tian
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, 100190, China
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi'an, Shaanxi, 710126, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing, 100191, China
| |
Collapse
|
31
|
Rudie JD, Rauschecker AM, Bryan RN, Davatzikos C, Mohan S. Emerging Applications of Artificial Intelligence in Neuro-Oncology. Radiology 2019; 290:607-618. [PMID: 30667332 DOI: 10.1148/radiol.2018181928] [Citation(s) in RCA: 142] [Impact Index Per Article: 28.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Due to the exponential growth of computational algorithms, artificial intelligence (AI) methods are poised to improve the precision of diagnostic and therapeutic methods in medicine. The field of radiomics in neuro-oncology has been and will likely continue to be at the forefront of this revolution. A variety of AI methods applied to conventional and advanced neuro-oncology MRI data can already delineate infiltrating margins of diffuse gliomas, differentiate pseudoprogression from true progression, and predict recurrence and survival better than methods used in daily clinical practice. Radiogenomics will also advance our understanding of cancer biology, allowing noninvasive sampling of the molecular environment with high spatial resolution and providing a systems-level understanding of underlying heterogeneous cellular and molecular processes. By providing in vivo markers of spatial and molecular heterogeneity, these AI-based radiomic and radiogenomic tools have the potential to stratify patients into more precise initial diagnostic and therapeutic pathways and enable better dynamic treatment monitoring in this era of personalized medicine. Although substantial challenges remain, radiologic practice is set to change considerably as AI technology is further developed and validated for clinical use.
Collapse
Affiliation(s)
- Jeffrey D Rudie
- From the Department of Radiology, Division of Neuroradiology, Perelman School of Medicine at the University of Pennsylvania, 3400 Spruce St, Philadelphia, PA 19104 (J.D.R., C.D., S.M.); Department of Radiology & Biomedical Imaging, University of California, San Francisco, San Francisco, Calif (A.M.R.); and Department of Diagnostic Medicine, Dell Medical School, University of Texas, Austin, Tex (R.N.B.)
| | - Andreas M Rauschecker
- From the Department of Radiology, Division of Neuroradiology, Perelman School of Medicine at the University of Pennsylvania, 3400 Spruce St, Philadelphia, PA 19104 (J.D.R., C.D., S.M.); Department of Radiology & Biomedical Imaging, University of California, San Francisco, San Francisco, Calif (A.M.R.); and Department of Diagnostic Medicine, Dell Medical School, University of Texas, Austin, Tex (R.N.B.)
| | - R Nick Bryan
- From the Department of Radiology, Division of Neuroradiology, Perelman School of Medicine at the University of Pennsylvania, 3400 Spruce St, Philadelphia, PA 19104 (J.D.R., C.D., S.M.); Department of Radiology & Biomedical Imaging, University of California, San Francisco, San Francisco, Calif (A.M.R.); and Department of Diagnostic Medicine, Dell Medical School, University of Texas, Austin, Tex (R.N.B.)
| | - Christos Davatzikos
- From the Department of Radiology, Division of Neuroradiology, Perelman School of Medicine at the University of Pennsylvania, 3400 Spruce St, Philadelphia, PA 19104 (J.D.R., C.D., S.M.); Department of Radiology & Biomedical Imaging, University of California, San Francisco, San Francisco, Calif (A.M.R.); and Department of Diagnostic Medicine, Dell Medical School, University of Texas, Austin, Tex (R.N.B.)
| | - Suyash Mohan
- From the Department of Radiology, Division of Neuroradiology, Perelman School of Medicine at the University of Pennsylvania, 3400 Spruce St, Philadelphia, PA 19104 (J.D.R., C.D., S.M.); Department of Radiology & Biomedical Imaging, University of California, San Francisco, San Francisco, Calif (A.M.R.); and Department of Diagnostic Medicine, Dell Medical School, University of Texas, Austin, Tex (R.N.B.)
| |
Collapse
|
32
|
Computer-aided diagnosis of cavernous malformations in brain MR images. Comput Med Imaging Graph 2018; 66:115-123. [PMID: 29609039 DOI: 10.1016/j.compmedimag.2018.03.004] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2017] [Revised: 02/07/2018] [Accepted: 03/19/2018] [Indexed: 11/21/2022]
Abstract
Cavernous malformation or cavernoma is one of the most common epileptogenic lesions. It is a type of brain vessel abnormality that can cause serious symptoms such as seizures, intracerebral hemorrhage, and various neurological disorders. Manual detection of cavernomas by physicians in a large set of brain MRI slices is a time-consuming and labor-intensive task and often delays diagnosis. In this paper, we propose a computer-aided diagnosis (CAD) system for cavernomas based on T2-weighted axial plane MRI image analysis. The proposed technique first extracts the brain area based on atlas registration and active contour model, and then performs template matching to obtain candidate cavernoma regions. Texture, the histogram of oriented gradients and local binary pattern features of each candidate region are calculated, and principal component analysis is applied to reduce the feature dimensionality. Support vector machines (SVMs) are finally used to classify each region into cavernoma or non-cavernoma so that most of the false positives (obtained by template matching) are eliminated. The performance of the proposed CAD system is evaluated and experimental results show that it provides superior performance in cavernoma detection compared to existing techniques.
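The candidate-classification stage described above (texture, HOG and LBP features reduced by PCA and fed to an SVM) can be sketched with scikit-image and scikit-learn. The patch size, feature parameters, and placeholder training data below are assumptions for illustration only, not the study's configuration.

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def candidate_features(patch):
    """HOG + LBP-histogram features for one 2D candidate region (grayscale patch)."""
    h = hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([h, lbp_hist])

# Placeholder candidate patches from template matching and placeholder labels
# (1 = cavernoma, 0 = false positive); real data would come from the CAD pipeline.
rng = np.random.default_rng(0)
patches = [rng.random((32, 32)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)

X = np.array([candidate_features(p) for p in patches])
clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
clf.fit(X, labels)
```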
Collapse
|
33
|
Charron O, Lallement A, Jarnet D, Noblet V, Clavier JB, Meyer P. Automatic detection and segmentation of brain metastases on multimodal MR images with a deep convolutional neural network. Comput Biol Med 2018; 95:43-54. [PMID: 29455079 DOI: 10.1016/j.compbiomed.2018.02.004] [Citation(s) in RCA: 148] [Impact Index Per Article: 24.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2017] [Revised: 02/06/2018] [Accepted: 02/07/2018] [Indexed: 02/04/2023]
Abstract
Stereotactic treatments are today the reference techniques for the irradiation of brain metastases in radiotherapy. The dose per fraction is very high, and delivered in small volumes (diameter <1 cm). As part of these treatments, effective detection and precise segmentation of lesions are imperative. Many methods based on deep-learning approaches have been developed for the automatic segmentation of gliomas, but very few for that of brain metastases. We adapted an existing 3D convolutional neural network (DeepMedic) to detect and segment brain metastases on MRI. First, we sought to adapt the network parameters to brain metastases. We then explored the single or combined use of different MRI modalities, by evaluating network performance in terms of detection and segmentation. We also studied the benefit of augmenting the database with virtual patients or of using an additional database in which the active parts of the metastases are separated from the necrotic parts. Our results indicated that a deep network approach is promising for the detection and the segmentation of brain metastases on multimodal MRI.
Collapse
Affiliation(s)
- Odelin Charron
- Department of Medical Physics, Paul Strauss Center, Strasbourg, France
| | | | - Delphine Jarnet
- Department of Medical Physics, Paul Strauss Center, Strasbourg, France
| | | | | | - Philippe Meyer
- Department of Medical Physics, Paul Strauss Center, Strasbourg, France; ICube-UMR 7357, Strasbourg, France.
| |
Collapse
|
34
|
Zhou M, Scott J, Chaudhury B, Hall L, Goldgof D, Yeom KW, Iv M, Ou Y, Kalpathy-Cramer J, Napel S, Gillies R, Gevaert O, Gatenby R. Radiomics in Brain Tumor: Image Assessment, Quantitative Feature Descriptors, and Machine-Learning Approaches. AJNR Am J Neuroradiol 2018; 39:208-216. [PMID: 28982791 PMCID: PMC5812810 DOI: 10.3174/ajnr.a5391] [Citation(s) in RCA: 218] [Impact Index Per Article: 36.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Radiomics describes a broad set of computational methods that extract quantitative features from radiographic images. The resulting features can be used to inform imaging diagnosis, prognosis, and therapy response in oncology. However, major challenges remain for methodologic developments to optimize feature extraction and provide rapid information flow in clinical settings. Equally important, to be clinically useful, predictive radiomic properties must be clearly linked to meaningful biologic characteristics and qualitative imaging properties familiar to radiologists. Here we use a cross-disciplinary approach to highlight studies in radiomics. We review brain tumor radiologic studies (eg, imaging interpretation) through computational models (eg, computer vision and machine learning) that provide novel clinical insights. We outline current quantitative image feature extraction and prediction strategies with different levels of available clinical classes for supporting clinical decision-making. We further discuss machine-learning challenges and data opportunities to advance radiomic studies.
Collapse
Affiliation(s)
- M Zhou
- From the Stanford Center for Biomedical Informatic Research (M.Z., O.G.)
| | - J Scott
- Department of Radiology (J.S., B.C., S.N., R. Gillies, R. Gatenby), Moffitt Cancer Research Center, Tampa, Florida
| | - B Chaudhury
- Department of Radiology (J.S., B.C., S.N., R. Gillies, R. Gatenby), Moffitt Cancer Research Center, Tampa, Florida
| | - L Hall
- Department of Computer Science and Engineering (L.H., D.G.), University of South Florida, Tampa, Florida
| | - D Goldgof
- Department of Computer Science and Engineering (L.H., D.G.), University of South Florida, Tampa, Florida
| | - K W Yeom
- Department of Radiology (K.W.Y., M.I.), Stanford University, Stanford, California
| | - M Iv
- Department of Radiology (K.W.Y., M.I.), Stanford University, Stanford, California
| | - Y Ou
- Department of Radiology (Y.O., J.K.-C.), Massachusetts General Hospital, Boston, Massachusetts
| | - J Kalpathy-Cramer
- Department of Radiology (Y.O., J.K.-C.), Massachusetts General Hospital, Boston, Massachusetts
| | - S Napel
- Department of Radiology (J.S., B.C., S.N., R. Gillies, R. Gatenby), Moffitt Cancer Research Center, Tampa, Florida
| | - R Gillies
- Department of Radiology (J.S., B.C., S.N., R. Gillies, R. Gatenby), Moffitt Cancer Research Center, Tampa, Florida
| | - O Gevaert
- From the Stanford Center for Biomedical Informatic Research (M.Z., O.G.)
| | - R Gatenby
- Department of Radiology (J.S., B.C., S.N., R. Gillies, R. Gatenby), Moffitt Cancer Research Center, Tampa, Florida
| |
Collapse
|
35
|
Abstract
Magnetic resonance imaging (MRI) is the cornerstone for evaluating patients with brain masses such as primary and metastatic tumors. Important challenges in effectively detecting and diagnosing brain metastases and in accurately characterizing their subsequent response to treatment remain. These difficulties include discriminating metastases from potential mimics such as primary brain tumors and infection, detecting small metastases, and differentiating treatment response from tumor recurrence and progression. Optimal patient management could be benefited by improved and well-validated prognostic and predictive imaging markers, as well as early response markers to identify successful treatment prior to changes in tumor size. To address these fundamental needs, newer MRI techniques including diffusion and perfusion imaging, MR spectroscopy, and positron emission tomography (PET) tracers beyond traditionally used 18-fluorodeoxyglucose are the subject of extensive ongoing investigations, with several promising avenues of added value already identified. These newer techniques provide a wealth of physiologic and metabolic information that may supplement standard MR evaluation, by providing the ability to monitor and characterize cellularity, angiogenesis, perfusion, pH, hypoxia, metabolite concentrations, and other critical features of malignancy. This chapter reviews standard and advanced imaging of brain metastases provided by computed tomography, MRI, and amino acid PET, focusing on potential biomarkers that can serve as problem-solving tools in the clinical management of patients with brain metastases.
Collapse
Affiliation(s)
- Whitney B Pope
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, United States.
| |
Collapse
|
36
|
O'Dell WG, Gormaley AK, Prida DA. Validation of the Gatortail method for accurate sizing of pulmonary vessels from 3D medical images. Med Phys 2017; 44:6314-6328. [PMID: 28905390 DOI: 10.1002/mp.12580] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2016] [Revised: 08/29/2017] [Accepted: 09/01/2017] [Indexed: 01/19/2023] Open
Abstract
PURPOSE Detailed characterization of changes in vessel size is crucial for the diagnosis and management of a variety of vascular diseases. Because clinical measurement of vessel size is typically dependent on the radiologist's subjective interpretation of the vessel borders, it is often prone to high inter- and intra-user variability. Automatic methods of vessel sizing have been developed for two-dimensional images but a fully three-dimensional (3D) method suitable for vessel sizing from volumetric X-ray computed tomography (CT) or magnetic resonance imaging has heretofore not been demonstrated and validated robustly. METHODS In this paper, we refined and objectively validated Gatortail, a method that creates a mathematical geometric 3D model of each branch in a vascular tree, simulates the appearance of the virtual vascular tree in a 3D CT image, and uses the similarity of the simulated image to a patient's CT scan to drive the optimization of the model parameters, including vessel size, to match that of the patient. The method was validated with a 2-dimensional virtual tree structure under deformation, and with a realistic 3D-printed vascular phantom in which the diameter of 64 branches were manually measured 3 times each. The phantom was then scanned on a conventional clinical CT imaging system and the images processed with the in-house software to automatically segment and mathematically model the vascular tree, label each branch, and perform the Gatortail optimization of branch size and trajectory. Previously proposed methods of vessel sizing using matched Gaussian filters and tubularity metrics were also tested. The Gatortail method was then demonstrated on the pulmonary arterial tree segmented from a human volunteer's CT scan. RESULTS The standard deviation of the difference between the manually measured and Gatortail-based radii in the 3D physical phantom was 0.074 mm (0.087 in-plane pixel units for image voxels of dimension 0.85 × 0.85 × 1.0 mm) over the 64 branches, representing vessel diameters ranging from 1.2 to 7 mm. The linear regression fit gave a slope of 1.056 and an R2 value of 0.989. These three metrics reflect superior agreement of the radii estimates relative to previously published results over all sizes tested. Sizing via matched Gaussian filters resulted in size underestimates of >33% over all three test vessels, while the tubularity-metric matching exhibited a sizing uncertainty of >50%. In the human chest CT data set, the vessel voxel intensity profiles with and without branch model optimization showed excellent agreement and improvement in the objective measure of image similarity. CONCLUSIONS Gatortail has been demonstrated to be an automated, objective, accurate and robust method for sizing of vessels in 3D non-invasively from chest CT scans. We anticipate that Gatortail, an image-based approach to automatically compute estimates of blood vessel radii and trajectories from 3D medical images, will facilitate future quantitative evaluation of vascular response to disease and environmental insult and improve understanding of the biological mechanisms underlying vascular disease processes.
Collapse
Affiliation(s)
- Walter G O'Dell
- Department of Radiation Oncology, University of Florida College of Medicine, Gainesville, FL, 32601, USA
| | - Anne K Gormaley
- Department of Radiation Oncology, University of Florida College of Medicine, Gainesville, FL, 32601, USA
| | - David A Prida
- Department of Radiation Oncology, University of Florida College of Medicine, Gainesville, FL, 32601, USA
| |
Collapse
|
37
|
Shearkhani O, Khademi A, Eilaghi A, Hojjat SP, Symons SP, Heyn C, Machnowska M, Chan A, Sahgal A, Maralani PJ. Detection of Volume-Changing Metastatic Brain Tumors on Longitudinal MRI Using a Semiautomated Algorithm Based on the Jacobian Operator Field. AJNR Am J Neuroradiol 2017; 38:2059-2066. [PMID: 28882862 DOI: 10.3174/ajnr.a5352] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2017] [Accepted: 06/15/2017] [Indexed: 11/07/2022]
Abstract
BACKGROUND AND PURPOSE Accurate follow-up of metastatic brain tumors has important implications for patient prognosis and management. The aim of this study was to develop and evaluate the accuracy of a semiautomated algorithm in detecting growing or shrinking metastatic brain tumors on longitudinal brain MRIs. MATERIALS AND METHODS We used 50 pairs of successive MR imaging datasets, 30 on 1.5T and 20 on 3T, containing contrast-enhanced 3D T1-weighted sequences. These yielded 150 growing or shrinking metastatic brain tumors. To detect them, we completed 2 major steps: 1) spatial normalization and calculation of the Jacobian operator field to quantify changes between scans, and 2) metastatic brain tumor candidate segmentation and detection of volume-changing metastatic brain tumors with the Jacobian operator field. Receiver operating characteristic analysis was used to assess the detection accuracy of the algorithm, and it was verified with jackknife resampling. The reference standard was based on detections by a neuroradiologist. RESULTS The areas under the receiver operating characteristic curves were 0.925 for 1.5T and 0.965 for 3T. Furthermore, at its optimal performance, the algorithm achieved a sensitivity of 85.1% and 92.1% and specificity of 86.7% and 91.3% for 1.5T and 3T, respectively. Vessels were responsible for most false-positives. Newly developed or resolved metastatic brain tumors were a major source of false-negatives. CONCLUSIONS The proposed algorithm could detect volume-changing metastatic brain tumors on longitudinal brain MRIs with statistically high accuracy, demonstrating its potential as a computer-aided change-detection tool for complementing the performance of radiologists, decreasing inter- and intraobserver variability, and improving efficacy.
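The core quantity behind Jacobian-based change detection of this kind, the voxel-wise Jacobian determinant of the deformation that maps one time point onto the other, can be computed with a short NumPy sketch. This is a generic illustration assuming unit voxel spacing and a channel-first displacement field; it is not the authors' algorithm.

```python
import numpy as np

def jacobian_determinant(disp):
    """Voxel-wise Jacobian determinant of a 3D displacement field.

    disp has shape (3, D, H, W); the deformation is phi(x) = x + disp(x), so
    determinant values > 1 indicate local expansion (growth) and < 1 shrinkage."""
    grads = [np.gradient(disp[i]) for i in range(3)]      # d(disp_i)/d(axis_j)
    J = np.zeros(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)
```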
Collapse
Affiliation(s)
- O Shearkhani
- From the Departments of Medical Imaging (O.S., S.-P.H., S.P.S., C.H., M.M., A.C., P.J.M.)
| | - A Khademi
- Department of Biomedical Engineering (A.K.), Ryerson University, Toronto, Ontario, Canada
| | - A Eilaghi
- Mechanical Engineering Department (A.E.), Australian College of Kuwait, Kuwait City, Kuwait
| | - S-P Hojjat
- From the Departments of Medical Imaging (O.S., S.-P.H., S.P.S., C.H., M.M., A.C., P.J.M.)
| | - S P Symons
- From the Departments of Medical Imaging (O.S., S.-P.H., S.P.S., C.H., M.M., A.C., P.J.M.)
| | - C Heyn
- From the Departments of Medical Imaging (O.S., S.-P.H., S.P.S., C.H., M.M., A.C., P.J.M.)
| | - M Machnowska
- From the Departments of Medical Imaging (O.S., S.-P.H., S.P.S., C.H., M.M., A.C., P.J.M.)
| | - A Chan
- From the Departments of Medical Imaging (O.S., S.-P.H., S.P.S., C.H., M.M., A.C., P.J.M.)
| | - A Sahgal
- Radiation Oncology (A.S.), University of Toronto, Toronto, Ontario, Canada
| | - P J Maralani
- From the Departments of Medical Imaging (O.S., S.-P.H., S.P.S., C.H., M.M., A.C., P.J.M.)
| |
Collapse
|
38
|
Sunwoo L, Kim YJ, Choi SH, Kim KG, Kang JH, Kang Y, Bae YJ, Yoo RE, Kim J, Lee KJ, Lee SH, Choi BS, Jung C, Sohn CH, Kim JH. Computer-aided detection of brain metastasis on 3D MR imaging: Observer performance study. PLoS One 2017; 12:e0178265. [PMID: 28594923 PMCID: PMC5464563 DOI: 10.1371/journal.pone.0178265] [Citation(s) in RCA: 37] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2017] [Accepted: 05/02/2017] [Indexed: 11/19/2022] Open
Abstract
PURPOSE To assess the effect of computer-aided detection (CAD) of brain metastasis (BM) on radiologists' diagnostic performance in interpreting three-dimensional brain magnetic resonance (MR) imaging using follow-up imaging and consensus as the reference standard. MATERIALS AND METHODS The institutional review board approved this retrospective study. The study cohort consisted of 110 consecutive patients with BM and 30 patients without BM. The training data set included MR images of 80 patients with 450 BM nodules. The test set included MR images of 30 patients with 134 BM nodules and 30 patients without BM. We developed a CAD system for BM detection using template-matching and K-means clustering algorithms for candidate detection and an artificial neural network for false-positive reduction. Four reviewers (two neuroradiologists and two radiology residents) interpreted the test set images before and after the use of CAD in a sequential manner. The sensitivity, the number of false positives (FPs) per case, and the reading time were analyzed. A jackknife free-response receiver operating characteristic (JAFROC) method was used to determine the improvement in diagnostic accuracy. RESULTS The sensitivity of CAD was 87.3% with 302.4 FPs per case. CAD significantly improved the diagnostic performance of the four reviewers, with a figure-of-merit (FOM) of 0.874 (without CAD) vs. 0.898 (with CAD) according to JAFROC analysis (p < 0.01). Statistically significant improvement was noted only for less-experienced reviewers (FOM without vs. with CAD, 0.834 vs. 0.877, p < 0.01). The additional time required to review the CAD results was approximately 72 sec (40% of the total review time). CONCLUSION CAD as a second reader helps radiologists improve their diagnostic performance in the detection of BM on MR imaging, particularly for less-experienced reviewers.
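The candidate-detection stage (template matching) can be sketched as below; this is our illustration only, the K-means clustering and neural-network false-positive reduction described above are not reproduced, and the template radius and score threshold are assumptions.

import numpy as np
from scipy.ndimage import maximum_filter
from scipy.signal import fftconvolve

def sphere_template(radius_vox):
    r = int(np.ceil(radius_vox))
    z, y, x = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    return ((z**2 + y**2 + x**2) <= radius_vox**2).astype(float)

def detect_candidates(volume, radius_vox=3, score_thresh=0.5):
    t = sphere_template(radius_vox)
    t = (t - t.mean()) / t.std()
    v = (volume - volume.mean()) / (volume.std() + 1e-12)
    # Globally standardized cross-correlation (a simplification of fully local normalization).
    score = fftconvolve(v, t[::-1, ::-1, ::-1], mode="same") / t.size
    peaks = (score == maximum_filter(score, size=2 * radius_vox + 1)) & (score > score_thresh)
    return np.argwhere(peaks)                                # candidate lesion locations (z, y, x)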
Collapse
Affiliation(s)
- Leonard Sunwoo
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Korea
| | - Young Jae Kim
- Department of Biomedical Engineering, Gachon University, Incheon, Korea
- Department of Plasma Bio Display, Kwangwoon University, Seoul, Korea
| | - Seung Hong Choi
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Department of Radiology, Seoul National University Hospital, Seoul, Korea
- * E-mail: (SHC); (K-GK)
| | - Kwang-Gi Kim
- Department of Biomedical Engineering, Gachon University, Incheon, Korea
- * E-mail: (SHC); (K-GK)
| | - Ji Hee Kang
- Department of Radiology, Seoul National University Hospital, Seoul, Korea
| | - Yeonah Kang
- Department of Radiology, Seoul Metropolitan Government - Seoul National University Boramae Medical Center, Seoul, Korea
| | - Yun Jung Bae
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Korea
| | - Roh-Eul Yoo
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Department of Radiology, Seoul National University Hospital, Seoul, Korea
| | - Jihang Kim
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Korea
| | - Kyong Joon Lee
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Korea
| | - Seung Hyun Lee
- Department of Plasma Bio Display, Kwangwoon University, Seoul, Korea
| | - Byung Se Choi
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Korea
| | - Cheolkyu Jung
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Korea
| | - Chul-Ho Sohn
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Department of Radiology, Seoul National University Hospital, Seoul, Korea
| | - Jae Hyoung Kim
- Department of Radiology, Seoul National University College of Medicine, Seoul, Korea
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, Korea
| |
Collapse
|
39
|
Rundo L, Stefano A, Militello C, Russo G, Sabini MG, D'Arrigo C, Marletta F, Ippolito M, Mauri G, Vitabile S, Gilardi MC. A fully automatic approach for multimodal PET and MR image segmentation in gamma knife treatment planning. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2017; 144:77-96. [PMID: 28495008 DOI: 10.1016/j.cmpb.2017.03.011] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/12/2016] [Revised: 12/28/2016] [Accepted: 03/14/2017] [Indexed: 06/07/2023]
Abstract
BACKGROUND AND OBJECTIVES Nowadays, clinical practice in Gamma Knife treatments is generally based on MRI anatomical information alone. However, the joint use of MRI and PET images can be useful for considering both anatomical and metabolic information about the lesion to be treated. In this paper, we present a co-segmentation method to integrate the segmented Biological Target Volume (BTV), using [11C]-Methionine-PET (MET-PET) images, and the segmented Gross Target Volume (GTV), on the respective co-registered MR images. The resulting volume gives enhanced brain tumor information to be used in stereotactic neuro-radiosurgery treatment planning. GTV often does not match entirely with BTV, which provides metabolic information about brain lesions. For this reason, PET imaging is valuable and it could be used to provide complementary information useful for treatment planning. In this way, BTV can be used to modify GTV, enhancing Clinical Target Volume (CTV) delineation. METHODS A novel fully automatic multimodal PET/MRI segmentation method for Leksell Gamma Knife® treatments is proposed. This approach improves and combines two computer-assisted and operator-independent single-modality methods, previously developed and validated, to segment BTV and GTV from PET and MR images, respectively. In addition, the GTV is utilized to combine the superior contrast of PET images with the higher spatial resolution of MRI, obtaining a new BTV, called BTVMRI. A total of 19 metastatic brain tumors treated with stereotactic neuro-radiosurgery were retrospectively analyzed. A framework for the evaluation of multimodal PET/MRI segmentation is also presented. Overlap-based and spatial distance-based metrics were considered to quantify the similarity between the PET and MRI segmentation approaches. Statistical analysis was also performed to measure the correlation among the different segmentation processes. Since it is not possible to define a gold-standard CTV according to both MRI and PET images without treatment response assessment, the feasibility and the clinical value of BTV integration in Gamma Knife treatment planning were considered. Therefore, a qualitative evaluation was carried out by three experienced clinicians. RESULTS The experimental results showed that GTV and BTV segmentations are statistically correlated (Spearman's rank correlation coefficient: 0.898) but have a low degree of similarity (average Dice Similarity Coefficient: 61.87 ± 14.64). Therefore, volume measurements as well as evaluation metric values demonstrated that MRI and PET convey different but complementary imaging information. GTV and BTV could be combined to enhance treatment planning. In more than 50% of cases, the CTV was strongly or moderately conditioned by metabolic imaging. In particular, BTVMRI enhanced the CTV more accurately than BTV in 25% of cases. CONCLUSIONS The proposed fully automatic multimodal PET/MRI segmentation method is a valid, operator-independent methodology that helps clinicians define a CTV that includes both metabolic and morphologic information. BTVMRI and GTV should be considered for comprehensive treatment planning.
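A minimal sketch of the overlap metric quoted above: the Dice Similarity Coefficient between a PET-derived BTV mask and an MRI-derived GTV mask, assuming binary arrays on the same co-registered grid; the variable names are ours, not the authors'.

import numpy as np

def dice(btv_mask, gtv_mask):
    a, b = btv_mask.astype(bool), gtv_mask.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0             # both masks empty -> perfect agreement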
Collapse
Affiliation(s)
- Leonardo Rundo
- Istituto di Bioimmagini e Fisiologia Molecolare - Consiglio Nazionale delle Ricerche (IBFM-CNR), Cefalù (PA), Italy; Dipartimento di Informatica, Sistemistica e Comunicazione (DISCo), Università degli Studi di Milano-Bicocca, Milano, Italy
| | - Alessandro Stefano
- Istituto di Bioimmagini e Fisiologia Molecolare - Consiglio Nazionale delle Ricerche (IBFM-CNR), Cefalù (PA), Italy; Dipartimento di Ingegneria Chimica, Gestionale, Informatica, Meccanica (DICGIM), Università degli Studi di Palermo, Palermo, Italy
| | - Carmelo Militello
- Istituto di Bioimmagini e Fisiologia Molecolare - Consiglio Nazionale delle Ricerche (IBFM-CNR), Cefalù (PA), Italy.
| | - Giorgio Russo
- Istituto di Bioimmagini e Fisiologia Molecolare - Consiglio Nazionale delle Ricerche (IBFM-CNR), Cefalù (PA), Italy; Azienda Ospedaliera per l'Emergenza Cannizzaro, Catania, Italy
| | | | | | | | | | - Giancarlo Mauri
- Dipartimento di Informatica, Sistemistica e Comunicazione (DISCo), Università degli Studi di Milano-Bicocca, Milano, Italy
| | - Salvatore Vitabile
- Dipartimento di Biopatologia e Biotecnologie Mediche (DIBIMED), Università degli Studi di Palermo, Palermo, Italy
| | - Maria Carla Gilardi
- Istituto di Bioimmagini e Fisiologia Molecolare - Consiglio Nazionale delle Ricerche (IBFM-CNR), Cefalù (PA), Italy
| |
Collapse
|
40
|
Koley S, Chakraborty C, Mainero C, Fischl B, Aganj I. A Fast Approach to Automatic Detection of Brain Lesions. BRAINLESION : GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES. BRAINLES (WORKSHOP) 2017; 10154:52-61. [PMID: 29082383 DOI: 10.1007/978-3-319-55524-9_6] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/04/2022]
Abstract
Template matching is a popular approach to computer-aided detection of brain lesions from magnetic resonance (MR) images. The outcomes are often sufficient for localizing lesions and assisting clinicians in diagnosis. However, processing large MR volumes with three-dimensional (3D) templates is demanding in terms of computational resources, hence the importance of reducing the computational complexity of template matching, particularly in situations in which time is crucial (e.g., emergent stroke). In view of this, we make use of 3D Gaussian templates with varying radii and propose a new method to compute the normalized cross-correlation coefficient as a similarity metric between the MR volume and the template to detect brain lesions. In contrast to the conventional fast Fourier transform (FFT)-based approach, whose runtime grows as O(N log N) with the number of voxels, the proposed method computes the cross-correlation in O(N). We show through our experiments that the proposed method outperforms the FFT approach in terms of computation time while retaining comparable accuracy.
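To illustrate the linear-time idea, the sketch below computes a per-voxel blob response for one Gaussian template scale using separable Gaussian/box filtering, whose cost grows linearly with the number of voxels for a fixed kernel truncation. This is our own illustration of the concept, not the authors' exact formulation; the window size and scale set are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def gaussian_blob_response(vol, sigma_vox, window=None):
    """Blob response for one Gaussian template scale via separable filtering."""
    vol = vol.astype(np.float64)
    window = window or (int(4 * sigma_vox) | 1)             # odd window covering the template support
    blob = gaussian_filter(vol, sigma_vox, mode="nearest")   # correlation with a unit-sum Gaussian
    mu = uniform_filter(vol, window, mode="nearest")         # local mean over the template support
    var = uniform_filter(vol**2, window, mode="nearest") - mu**2
    sd = np.sqrt(np.clip(var, 1e-12, None))
    return (blob - mu) / sd                                  # zero-mean template response / local spread

# Keep the strongest response over an assumed set of template scales:
# response = np.max([gaussian_blob_response(vol, s) for s in (1.0, 2.0, 3.0)], axis=0)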
Collapse
Affiliation(s)
- Subhranil Koley
- Athinoula A. Martinos Center for Biomedical Imaging, Radiology Department, Massachusetts General Hospital, Charlestown, MA, USA; School of Medical Science and Technology, Indian Institute of Technology Kharagpur, Kharagpur, WB, India 721302
| | - Chandan Chakraborty
- School of Medical Science and Technology, Indian Institute of Technology Kharagpur, Kharagpur, WB, India 721302
| | - Caterina Mainero
- Athinoula A. Martinos Center for Biomedical Imaging, Radiology Department, Massachusetts General Hospital, Charlestown, MA, USA; Radiology Department, Harvard Medical School, Boston, MA, USA
| | - Bruce Fischl
- Athinoula A. Martinos Center for Biomedical Imaging, Radiology Department, Massachusetts General Hospital, Charlestown, MA, USA; Radiology Department, Harvard Medical School, Boston, MA, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Iman Aganj
- Athinoula A. Martinos Center for Biomedical Imaging, Radiology Department, Massachusetts General Hospital, Charlestown, MA, USA; Radiology Department, Harvard Medical School, Boston, MA, USA
| |
Collapse
|
41
|
Semi-automatic Brain Lesion Segmentation in Gamma Knife Treatments Using an Unsupervised Fuzzy C-Means Clustering Technique. ACTA ACUST UNITED AC 2016. [DOI: 10.1007/978-3-319-33747-0_2] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/16/2023]
|
42
|
Pérez-Ramírez Ú, Arana E, Moratal D. Brain metastases detection on MR by means of three-dimensional tumor-appearance template matching. J Magn Reson Imaging 2016; 44:642-52. [DOI: 10.1002/jmri.25207] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2015] [Accepted: 02/09/2016] [Indexed: 12/21/2022] Open
Affiliation(s)
- Úrsula Pérez-Ramírez
- Center for Biomaterials and Tissue Engineering; Universitat Politècnica de València; Valencia Spain
| | - Estanislao Arana
- Department of Radiology; Fundación Instituto Valenciano de Oncología; Valencia Spain
| | - David Moratal
- Center for Biomaterials and Tissue Engineering; Universitat Politècnica de València; Valencia Spain
| |
Collapse
|
43
|
Perez-Ramirez U, Arana E, Moratal D. Computer-aided detection of brain metastases using a three-dimensional template-based matching algorithm. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2015; 2014:2384-7. [PMID: 25570469 DOI: 10.1109/embc.2014.6944101] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
The purpose of this work was to develop an algorithm for detecting brain metastases in magnetic resonance imaging (MRI), emphasizing the reduction of false positives. First, three-dimensional templates were cross-correlated with the brain volume. Afterwards, each lesion candidate was segmented in the three orthogonal views as a preliminary step to remove elongated structures such as blood vessels. In a database containing 19 patients and 62 brain metastases, the detection algorithm showed a sensitivity of 93.55%. After applying the false-positive reduction method, encouraging results were obtained: the false-positive rate per slice decreased from 0.64 to 0.15 and only one metastasis was removed, leading to a sensitivity of 91.94%.
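A minimal sketch of the false-positive reduction idea described above: discard candidate lesions that look elongated (vessel-like) by comparing the principal-axis spreads of each segmented candidate. The elongation threshold is an assumption, not the published value.

import numpy as np

def is_elongated(candidate_mask, max_axis_ratio=3.0):
    coords = np.argwhere(candidate_mask)                     # voxel coordinates of the candidate
    if len(coords) < 4:
        return False
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(coords.T)))[::-1]   # principal-axis variances
    ratio = np.sqrt(eigvals[0] / max(eigvals[-1], 1e-12))
    return ratio > max_axis_ratio                            # vessel-like if too elongated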
Collapse
|
44
|
Nowinski WL, Qian G, Hanley DF. A CAD System for Hemorrhagic Stroke. Neuroradiol J 2014; 27:409-16. [PMID: 25196612 DOI: 10.15274/nrj-2014-10080] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2014] [Accepted: 07/14/2014] [Indexed: 11/12/2022] Open
Abstract
Computer-aided detection/diagnosis (CAD) is a key component of routine clinical practice, increasingly used for detection, interpretation, quantification and decision support. Despite a critical need, there is no clinically accepted CAD system for stroke yet. Here we introduce a CAD system for hemorrhagic stroke. This CAD system segments, quantifies, and displays hematoma in 2D/3D, and supports evacuation of hemorrhage by thrombolytic treatment by monitoring progression and quantifying clot removal. It supports a seven-step workflow: select a patient, add a new study, process the patient's scans, show segmentation results, plot hematoma volumes, show 3D synchronized time-series hematomas, and generate a report. The system architecture contains four components: a library, tools, an application with a user interface, and the hematoma segmentation algorithm. The tools include a contour editor, 3D surface modeler, 3D volume measure, histogramming, hematoma volume plot, and 3D synchronized time-series hematoma display. The CAD system has been designed and implemented in C++. It has also been employed in the CLEAR and MISTIE phase III multicenter clinical trials. This stroke CAD system is potentially useful in research and clinical applications, particularly for clinical trials.
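As an illustration of one workflow step above (plotting hematoma volumes over serial scans), the sketch below converts a binary hematoma segmentation and voxel spacing into a volume in millilitres; it is a simple sketch in Python, not the described C++ system, and the spacing values in the usage line are invented.

import numpy as np

def hematoma_volume_ml(mask, spacing_mm):
    """mask: binary 3D segmentation; spacing_mm: (dz, dy, dx) voxel size in millimetres."""
    voxel_volume_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_volume_mm3 / 1000.0      # mm^3 -> mL

# e.g. volumes = [hematoma_volume_ml(m, (5.0, 0.45, 0.45)) for m in serial_masks]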
Collapse
Affiliation(s)
- Wieslaw L Nowinski
- Biomedical Imaging Laboratory, Agency for Science Technology and Research; Singapore, Singapore -
| | - Guoyu Qian
- Biomedical Imaging Laboratory, Agency for Science Technology and Research; Singapore, Singapore
| | | |
Collapse
|
45
|
Kwon H, Mo Jung Y, Park J, Keun Seo J. A new computer-aided method for detecting brain metastases on contrast-enhanced MR images. ACTA ACUST UNITED AC 2014. [DOI: 10.3934/ipi.2014.8.491] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
46
|
Seminati E, Nardello F, Zamparo P, Ardigò LP, Faccioli N, Minetti AE. Anatomically asymmetrical runners move more asymmetrically at the same metabolic cost. PLoS One 2013; 8:e74134. [PMID: 24086316 PMCID: PMC3782489 DOI: 10.1371/journal.pone.0074134] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2013] [Accepted: 07/27/2013] [Indexed: 11/30/2022] Open
Abstract
We hypothesized that, as occurs in cars, body structural asymmetries could generate asymmetry in the kinematics/dynamics of locomotion, resulting in a higher metabolic cost of transport, i.e., more ‘fuel’ needed to travel a given distance. Previous studies found that asymmetries in horses’ bodies are negatively correlated with galloping performance. In this investigation, we analyzed anatomical differences between the left and right lower limbs as a whole by performing 3D cross-correlation of Magnetic Resonance Images of 19 male runners, clustered as Untrained Runners, Occasional Runners and Skilled Runners. Running kinematics of their body centre of mass were obtained from the body segment coordinates measured by a 3D motion capture system at incremental running velocities on a treadmill. A recent mathematical procedure quantified the asymmetry of the body centre of mass trajectory between the left and right steps. During the same sessions, the runners’ metabolic consumption was measured and the cost of transport was calculated. No correlations were found between anatomical/kinematic variables and the metabolic cost of transport, regardless of training experience. However, anatomical symmetry was significantly correlated with kinematic symmetry, and the most trained subjects showed the highest level of kinematic symmetry during running. Results suggest that despite the significant effects of anatomical asymmetry on kinematics, either those changes are too small to affect economy or some plastic compensation in the locomotor system mitigates the hypothesized change in energy expenditure of running.
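A minimal sketch of the anatomical-symmetry idea described above: mirror the right-limb MRI volume and compute its normalized cross-correlation with the left-limb volume as a symmetry index (1 = identical). Prior alignment of both volumes to a common grid is assumed, and the flip axis depends on the volume orientation.

import numpy as np

def limb_symmetry_index(left_vol, right_vol):
    mirrored = right_vol[:, :, ::-1]        # flip along the left-right axis (orientation-dependent)
    a = (left_vol - left_vol.mean()) / (left_vol.std() + 1e-12)
    b = (mirrored - mirrored.mean()) / (mirrored.std() + 1e-12)
    return float((a * b).mean())            # Pearson-style correlation: 1 = perfectly symmetric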
Collapse
Affiliation(s)
- Elena Seminati
- Department of Pathophysiology and Transplantation, Faculty of Medicine, University of Milan, Milan, Italy
- * E-mail:
| | - Francesca Nardello
- Department of Neurological and Movement Sciences, School of Exercise and Sport Sciences, University of Verona, Verona, Italy
| | - Paola Zamparo
- Department of Neurological and Movement Sciences, School of Exercise and Sport Sciences, University of Verona, Verona, Italy
| | - Luca P. Ardigò
- Department of Neurological and Movement Sciences, School of Exercise and Sport Sciences, University of Verona, Verona, Italy
| | - Niccolò Faccioli
- Department of Pathology and Diagnostics, Section of Radiology, University of Verona, Verona, Italy
| | - Alberto E. Minetti
- Department of Pathophysiology and Transplantation, Faculty of Medicine, University of Milan, Milan, Italy
| |
Collapse
|
47
|
Bauer S, Wiest R, Nolte LP, Reyes M. A survey of MRI-based medical image analysis for brain tumor studies. Phys Med Biol 2013; 58:R97-129. [PMID: 23743802 DOI: 10.1088/0031-9155/58/13/r97] [Citation(s) in RCA: 306] [Impact Index Per Article: 27.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
MRI-based medical image analysis for brain tumor studies has been gaining attention in recent years due to an increased need for efficient and objective evaluation of large amounts of data. While the pioneering approaches applying automated methods for the analysis of brain tumor images date back almost two decades, the current methods are becoming more mature and coming closer to routine clinical application. This review aims to provide a comprehensive overview, beginning with a brief introduction to brain tumors and imaging of brain tumors. Then, we review the state of the art in segmentation, registration and modeling related to tumor-bearing brain images, with a focus on gliomas. The objective of segmentation is to outline the tumor, including its sub-compartments and surrounding tissues, while the main challenge in registration and modeling is the handling of morphological changes caused by the tumor. The qualities of different approaches are discussed with a focus on methods that can be applied to standard clinical imaging protocols. Finally, a critical assessment of the current state is performed and future developments and trends are addressed, giving special attention to recent developments in radiological tumor assessment guidelines.
Collapse
Affiliation(s)
- Stefan Bauer
- Institute for Surgical Technology and Biomechanics, University of Bern, Switzerland.
| | | | | | | |
Collapse
|
48
|
Takenaga T, Uchiyama Y, Hirai T, Nakamura H, Kai Y, Katsuragawa S, Shiraishi J. [Computer-aided detection of metastatic brain tumors in magnetic resonance images]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2013; 69:632-640. [PMID: 23782775 DOI: 10.6009/jjrt.2013_jsrt_69.6.632] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Accurate detection of metastatic brain tumors is important for deciding on a patient's course of treatment; this prompted us to develop a computer-aided diagnostic scheme for detecting metastatic brain tumors. In this paper, we first describe how we extracted the cerebral parenchyma region using a standard deviation filter. Second, initial tumor candidates were selected based on their sphericity and their cross-correlation value with a simulated ring template. Third, we built true-positive and false-positive templates from actual clinical images and applied the template-matching technique to them. Finally, we detected metastatic tumors using these two characteristics. Our improved method was applied to 13 cases with 97 brain metastases. The sensitivity of detection of metastatic brain tumors was 80.4%, with 5.6 false positives per patient. Our proposed method has potential for detection of metastatic brain tumors in brain magnetic resonance (MR) images.
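A minimal sketch of the first step described above: a local standard-deviation filter, which responds strongly at tissue boundaries and can be thresholded to help extract the cerebral parenchyma. The window size is an assumption, not the published setting.

import numpy as np
from scipy.ndimage import uniform_filter

def std_filter(img, size=5):
    img = img.astype(np.float64)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img**2, size)
    return np.sqrt(np.clip(mean_sq - mean**2, 0.0, None))    # local standard deviation map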
Collapse
|
49
|
Computer-Aided Detection of Metastatic Brain Tumors Using Magnetic Resonance Black-Blood Imaging. Invest Radiol 2013; 48:113-9. [DOI: 10.1097/rli.0b013e318277f078] [Citation(s) in RCA: 36] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
|
50
|
Farjam R, Parmar HA, Noll DC, Tsien CI, Cao Y. An approach for computer-aided detection of brain metastases in post-Gd T1-W MRI. Magn Reson Imaging 2012; 30:824-36. [PMID: 22521993 DOI: 10.1016/j.mri.2012.02.024] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2011] [Revised: 01/20/2012] [Accepted: 02/17/2012] [Indexed: 11/25/2022]
Abstract
PURPOSE To develop an approach for computer-aided detection (CAD) of small brain metastases in post-Gd T1-weighted magnetic resonance imaging (MRI). METHOD A set of unevenly spaced 3D spherical shell templates was optimized to localize brain metastatic lesions by cross-correlation analysis with MRI. Theoretical and simulation analyses of the effects of lesion size and shape heterogeneity were performed to optimize the number and size of the templates and the cross-correlation thresholds. The effects of image noise and intensity variation on the performance of the CAD system were also investigated. A nodule-enhancement strategy to improve the sensitivity of the system and a set of criteria based upon the size, shape and brightness of lesions were used to reduce false positives. An optimal set of parameters from the FROC curves was selected from a training dataset, and then the system was evaluated on a testing dataset including 186 lesions from 2753 MRI slices. Reading results from two radiologists are also included. RESULTS Overall, a 93.5% sensitivity with an intra-cranial false-positive rate (IC-FPR) of 0.024 was achieved in the testing dataset. Our investigation indicated that nodule enhancement was very effective in improving both sensitivity and specificity. The size and shape criteria reduced the IC-FPR from 0.075 to 0.021, and the brightness criterion decreased the extra-cranial FPR from 0.477 to 0.083 in the training dataset. Readings from the two radiologists had sensitivities of 60% and 67% in the training dataset and 70% and 80% in the testing dataset for metastatic lesions <5 mm in diameter. CONCLUSION Our proposed CAD system has high sensitivity and a fairly low FPR for detection of small brain metastatic lesions in MRI compared with previous work and the readings of neuroradiologists. The potential of this method for assisting clinical decision-making warrants further evaluation and improvements.
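A minimal sketch of the template construction described above: a 3D spherical shell of given inner and outer radius (in voxels), which can be cross-correlated with post-Gd T1-weighted volumes to highlight small enhancing lesions. The radii are left as parameters because the paper's optimized, unevenly spaced set is not reproduced here.

import numpy as np

def spherical_shell_template(inner_r, outer_r):
    r = int(np.ceil(outer_r))
    z, y, x = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    d2 = z**2 + y**2 + x**2
    shell = ((d2 >= inner_r**2) & (d2 <= outer_r**2)).astype(np.float64)
    return shell - shell.mean()                              # zero-mean for cross-correlation analysis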
Collapse
Affiliation(s)
- Reza Farjam
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI 48109-2099, USA
| | | | | | | | | |
Collapse
|