1
Xu C, Liu X, Bao B, Liu C, Li R, Yang T, Wu Y, Zhang Y, Tang J. Two-Stage Deep Learning Model for Diagnosis of Lumbar Spondylolisthesis Based on Lateral X-Ray Images. World Neurosurg 2024; 186:e652-e661. [PMID: 38608811 DOI: 10.1016/j.wneu.2024.04.025]
Abstract
BACKGROUND Diagnosing early lumbar spondylolisthesis is challenging for many doctors because of the lack of obvious symptoms. Using deep learning (DL) models to improve the accuracy of X-ray diagnosis can effectively reduce missed diagnoses and misdiagnoses in clinical practice. This study aimed to use a two-stage deep learning model, combining the Res-SE-Net model with the YOLOv8 algorithm, to facilitate efficient and reliable diagnosis of early lumbar spondylolisthesis from lateral X-ray images. METHODS A total of 2424 lumbar lateral radiographs of patients treated at Beijing Tongren Hospital between January 2021 and September 2023 were obtained. The data were labeled and cross-checked by 3 orthopedic surgeons, reshuffled in random order, and divided into training, validation, and test sets in a ratio of 7:2:1. We trained 2 models for automatic detection of spondylolisthesis: the YOLOv8 model detected the position of candidate lumbar spondylolisthesis, and the Res-SE-Net classifier determined whether the clipped region showed spondylolisthesis. Model performance was evaluated on the test set and on an external dataset from Beijing Haidian Hospital. Finally, we compared the model's results with professional clinicians' evaluations. RESULTS The model achieved promising results, with a diagnostic accuracy of 92.3%, precision of 93.5%, and recall of 93.1% for spondylolisthesis detection on the test set; the area under the curve (AUC) was 0.934. CONCLUSIONS Our two-stage deep learning model provides doctors with a reference for better diagnosis and treatment of early lumbar spondylolisthesis.
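The 7:2:1 split described in the abstract can be sketched as follows. This is an illustrative sketch under my own naming assumptions (the `split_dataset` helper and fixed seed are not from the paper):

```python
import random

def split_dataset(items, ratios=(0.7, 0.2, 0.1), seed=42):
    """Shuffle items in random order, then split into train/validation/test sets."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * ratios[0])
    n_val = int(len(items) * ratios[1])
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

# 2424 radiographs, as in the study; indices stand in for image files.
train, val, test = split_dataset(range(2424))
```

With 2424 items this yields 1696/484/244 images for training, validation, and testing.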
Affiliation(s)
- Chunyang Xu
- Department of Orthopedics, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Xingyu Liu
- School of Life Sciences, Tsinghua University, Beijing, China; Institute of Biomedical and Health Engineering (iBHE), Tsinghua Shenzhen International Graduate School, Shenzhen, China; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China; Longwood Valley Medical Technology Co Ltd, Beijing, China
- Beixi Bao
- Department of Orthopedics, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Chang Liu
- Department of Minimally Invasive Spine Surgery, Beijing Haidian Hospital, Peking University, China
- Runchao Li
- Longwood Valley Medical Technology Co Ltd, Beijing, China
- Tianci Yang
- Department of Orthopedics, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Yukan Wu
- Department of Orthopedics, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Yiling Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China; Longwood Valley Medical Technology Co Ltd, Beijing, China
- Jiaguang Tang
- Department of Orthopedics, Beijing Tongren Hospital, Capital Medical University, Beijing, China
2
Kim M, Wang JY, Lu W, Jiang H, Stojadinovic S, Wardak Z, Dan T, Timmerman R, Wang L, Chuang C, Szalkowski G, Liu L, Pollom E, Rahimy E, Soltys S, Chen M, Gu X. Where Does Auto-Segmentation for Brain Metastases Radiosurgery Stand Today? Bioengineering (Basel) 2024; 11:454. [PMID: 38790322 PMCID: PMC11117895 DOI: 10.3390/bioengineering11050454]
Abstract
Detection and segmentation of brain metastases (BMs) play a pivotal role in diagnosis, treatment planning, and follow-up evaluations for effective BM management. Given the rising prevalence of BMs and their predominantly multiple presentation, automated segmentation is becoming necessary in stereotactic radiosurgery. It not only alleviates clinicians' manual workload and improves clinical workflow efficiency but also ensures treatment safety, ultimately improving patient care. Recent strides in machine learning, particularly in deep learning (DL), have revolutionized medical image segmentation, achieving state-of-the-art results. This review aims to analyze auto-segmentation strategies, characterize the utilized data, and assess the performance of cutting-edge BM segmentation methodologies. Additionally, we delve into the challenges confronting BM segmentation and share insights gleaned from our algorithmic and clinical implementation experiences.
Affiliation(s)
- Matthew Kim
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Jen-Yeu Wang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Weiguo Lu
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Hao Jiang
- NeuralRad LLC, Madison, WI 53717, USA
- Zabi Wardak
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Tu Dan
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Robert Timmerman
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Lei Wang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Cynthia Chuang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Gregory Szalkowski
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Lianli Liu
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Erqi Pollom
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Elham Rahimy
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Scott Soltys
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Mingli Chen
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Xuejun Gu
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
3
Gao Z, Guo Y, Wang G, Chen X, Cao X, Zhang C, An S, Xu F. Robust deep learning from incomplete annotation for accurate lung nodule detection. Comput Biol Med 2024; 173:108361. [PMID: 38569236 DOI: 10.1016/j.compbiomed.2024.108361]
Abstract
Deep learning plays a significant role in the detection of pulmonary nodules in low-dose computed tomography (LDCT) scans, contributing to the diagnosis and treatment of lung cancer. Nevertheless, its effectiveness often relies on the availability of extensive, meticulously annotated datasets. In this paper, we explore the utilization of an incompletely annotated dataset for pulmonary nodule detection and introduce the FULFIL (Forecasting Uncompleted Labels For Inexpensive Lung nodule detection) algorithm as an innovative approach. By instructing annotators to label only the nodules they are most confident about, without requiring complete coverage, we can substantially reduce annotation costs. However, this approach yields an incompletely annotated dataset, which presents challenges when training deep learning models. Within the FULFIL algorithm, we employ a Graph Convolution Network (GCN) to discover the relationships between annotated and unannotated nodules and self-adaptively complete the annotation. Meanwhile, a teacher-student framework is employed for self-adaptive learning on the completed annotation dataset. Furthermore, we designed a Dual-Views loss to leverage different data perspectives, helping the model acquire robust features and enhance generalization. We carried out experiments using the LUng Nodule Analysis (LUNA) dataset, achieving a sensitivity of 0.574 at 0.125 false positives per scan (FPs/scan) with only 10% instance-level annotations for nodules, outperforming comparative methods by 7.00%. Experimental comparisons between our model and human experts on the test dataset demonstrate that our model can achieve a level of performance comparable to that of human experts.
The comprehensive experimental results demonstrate that FULFIL can effectively leverage an incomplete pulmonary nodule dataset to develop a robust deep learning model, making it a promising tool for assisting in lung nodule detection.
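The annotation-completion idea behind FULFIL (assign pseudo-labels to unannotated candidates, then train on the completed set) can be illustrated with a toy stand-in. The scalar "nodule scores", the threshold teacher, and the function name here are assumptions for illustration only, not the authors' GCN-based method:

```python
def complete_annotations(labeled, unlabeled, teacher):
    """Complete an incompletely annotated set by pseudo-labeling the skipped candidates."""
    pseudo = [(candidate, teacher(candidate)) for candidate in unlabeled]
    return labeled + pseudo

# Toy stand-in: one scalar "nodule score" per candidate.
labeled = [(0.9, 1), (0.1, 0)]          # (score, label) pairs annotated with high confidence
unlabeled = [0.8, 0.2]                  # candidates the annotators skipped
teacher = lambda score: int(score > 0.5)
completed = complete_annotations(labeled, unlabeled, teacher)
```

In the paper the teacher is a trained network and the relationships between annotated and unannotated nodules come from a GCN; the pattern of growing the training set with inferred labels is the same.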
Affiliation(s)
- Zebin Gao
- School of Information Science and Technology, Fudan University, Shanghai 200438, China
- Yuchen Guo
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
- Guoxin Wang
- JD Health International Inc, Beijing 100176, China
- Xiangru Chen
- Hangzhou Zhuoxi Institute of Brain and Intelligence, Hangzhou 311100, China
- Xuyang Cao
- JD Health International Inc, Beijing 100176, China
- Chao Zhang
- JD Health International Inc, Beijing 100176, China
- Shan An
- JD Health International Inc, Beijing 100176, China
- Feng Xu
- School of Software, Tsinghua University, Beijing 100084, China
4
Liu L, Zhang H, Zhang W, Mei W, Huang R. Sacroiliitis diagnosis based on interpretable features and multi-task learning. Phys Med Biol 2024; 69:045034. [PMID: 38237177 DOI: 10.1088/1361-6560/ad2010]
Abstract
Objective. Sacroiliitis is an early pathological manifestation of ankylosing spondylitis (AS), and a positive sacroiliitis finding on imaging may help clinical practitioners diagnose AS early. Deep learning based automatic diagnosis algorithms can deliver grading findings for sacroiliitis; however, they require a large amount of precisely labeled data to train and lack visualization of grading features. In this paper, we propose a radiomics and deep learning based diagnosis algorithm with deep feature visualization for sacroiliitis on CT scans. Visualization of grading features enhances clinical interpretability, assisting doctors in diagnosis and treatment more effectively. Approach. The region of interest (ROI) is identified by segmenting the sacroiliac joint (SIJ) in 3D CT images using a combination of the U-net model and statistical approaches. Then, in addition to extracting spatial and frequency domain features from the ROI according to the radiographic manifestations of sacroiliitis, the radiomics features are integrated into the proposed encoder module to obtain a powerful encoder and extract features effectively. Finally, a multi-task learning technique and five-class labels are utilized to perform positive tests and reduce discrepancies in the evaluations of several radiologists. Main results. On our private dataset, the proposed methods obtained an accuracy of 87.3%, which is 9.8% higher than the baseline and consistent with assessments made by qualified medical professionals. Significance. The results of the ablation experiment and interpretability analysis demonstrate that the proposed methods can be applied to automatic CT scan sacroiliitis diagnosis owing to their interpretability and portability.
Affiliation(s)
- Lei Liu
- Medical College, Shantou University, Shantou, Guangdong, 515041, People's Republic of China
- Haoyu Zhang
- College of Engineering, Shantou University, Shantou, Guangdong, 515063, People's Republic of China
- Weifeng Zhang
- College of Engineering, Shantou University, Shantou, Guangdong, 515063, People's Republic of China
- Wei Mei
- Department of Radiology, The First Affiliated Hospital of Shantou University Medical College, Shantou, Guangdong, 515041, People's Republic of China
- Ruibin Huang
- Department of Radiology, The First Affiliated Hospital of Shantou University Medical College, Shantou, Guangdong, 515041, People's Republic of China
5
Eidex Z, Ding Y, Wang J, Abouei E, Qiu RLJ, Liu T, Wang T, Yang X. Deep learning in MRI-guided radiation therapy: A systematic review. J Appl Clin Med Phys 2024; 25:e14155. [PMID: 37712893 PMCID: PMC10860468 DOI: 10.1002/acm2.14155]
Abstract
Recent advances in MRI-guided radiation therapy (MRgRT) and deep learning techniques encourage fully adaptive radiation therapy (ART), real-time MRI monitoring, and the MRI-only treatment planning workflow. Given the rapid growth and emergence of new state-of-the-art methods in these fields, we systematically review 197 studies published on or before December 31, 2022, and categorize the studies into the areas of image segmentation, image synthesis, radiomics, and real-time MRI. Building from the underlying deep learning methods, we discuss their clinical importance and current challenges in facilitating small tumor segmentation, accurate X-ray attenuation information from MRI, tumor characterization and prognosis, and tumor motion tracking. In particular, we highlight the recent trends in deep learning such as the emergence of multi-modal, visual transformer, and diffusion models.
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Yifu Ding
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jing Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Richard L. J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
6
Wang JY, Qu V, Hui C, Sandhu N, Mendoza MG, Panjwani N, Chang YC, Liang CH, Lu JT, Wang L, Kovalchuk N, Gensheimer MF, Soltys SG, Pollom EL. Stratified assessment of an FDA-cleared deep learning algorithm for automated detection and contouring of metastatic brain tumors in stereotactic radiosurgery. Radiat Oncol 2023; 18:61. [PMID: 37016416 PMCID: PMC10074777 DOI: 10.1186/s13014-023-02246-z]
Abstract
PURPOSE Artificial intelligence-based tools can be leveraged to improve detection and segmentation of brain metastases for stereotactic radiosurgery (SRS). VBrain by Vysioneer Inc. is a deep learning algorithm with recent FDA clearance to assist in brain tumor contouring. We aimed to assess the performance of this tool across demographic and clinical characteristics among patients with brain metastases treated with SRS. MATERIALS AND METHODS We randomly selected 100 patients with brain metastases who underwent initial SRS on the CyberKnife from 2017 to 2020 at a single institution. Cases with resection cavities were excluded from the analysis. Computed tomography (CT) and axial T1-weighted post-contrast magnetic resonance (MR) image data were extracted for each patient and uploaded to VBrain. A brain metastasis was considered "detected" when the VBrain-predicted contours overlapped with the corresponding physician contours ("ground-truth" contours). We evaluated performance of VBrain against ground-truth contours using the following metrics: lesion-wise Dice similarity coefficient (DSC), lesion-wise average Hausdorff distance (AVD), false positive count (FP), and lesion-wise sensitivity (%). Kruskal-Wallis tests were performed to assess the relationships between patient characteristics (sex, race, primary histology, age, and size and number of brain metastases) and performance metrics (DSC, AVD, FP, and sensitivity). RESULTS We analyzed 100 patients with 435 intact brain metastases treated with SRS. Our cohort had a median of 2 brain metastases per patient (range: 1 to 52), a median age of 69 (range: 19 to 91), and an even split of male and female patients.
The primary site breakdown was 56% lung, 10% melanoma, 9% breast, 8% gynecological, 5% renal, 4% gastrointestinal, 2% sarcoma, and 6% other, while the race breakdown was 60% White, 18% Asian, 3% Black/African American, 2% Native Hawaiian or other Pacific Islander, and 17% other/unknown/not reported. The median tumor size was 0.112 c.c. (range: 0.010-26.475 c.c.). We found a mean lesion-wise DSC of 0.723, a mean lesion-wise AVD of 7.34% of lesion size (0.704 mm), a mean FP count of 0.72 tumors per case, and a lesion-wise sensitivity of 89.30% for all lesions. Moreover, mean sensitivity was 99.07%, 97.59%, and 96.23% for lesions with diameter equal to or greater than 10 mm, 7.5 mm, and 5 mm, respectively. No significant differences in performance metrics were observed across demographic or clinical characteristic groups. CONCLUSION In this study, a commercial deep learning algorithm showed promising results in segmenting brain metastases, with 96.23% sensitivity for metastases with diameters of 5 mm or greater. As the software is an assistive AI, future integration of VBrain into the clinical workflow can provide further clinical and research insights.
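The lesion-wise detection rule (any overlap between predicted and ground-truth contours) and the Dice similarity coefficient used above can be written out explicitly. A minimal sketch over voxel-index sets; the function names are my own, not VBrain's API:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two masks given as voxel-index sets."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(a & b) / (len(a) + len(b))

def is_detected(pred_mask, truth_mask):
    """A lesion counts as detected when predicted and ground-truth contours overlap at all."""
    return len(set(pred_mask) & set(truth_mask)) > 0
```

For example, masks {1, 2, 3} and {2, 3, 4} share 2 of 6 total voxels, giving a DSC of 2/3, and the lesion counts as detected.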
Affiliation(s)
- Jen-Yeu Wang
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA, 94305, USA
- Vera Qu
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA, 94305, USA
- Caressa Hui
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA, 94305, USA
- Navjot Sandhu
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA, 94305, USA
- Maria G Mendoza
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA, 94305, USA
- Neil Panjwani
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA, 94305, USA
- Lei Wang
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA, 94305, USA
- Nataliya Kovalchuk
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA, 94305, USA
- Michael F Gensheimer
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA, 94305, USA
- Scott G Soltys
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA, 94305, USA
- Erqi L Pollom
- Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Stanford, CA, 94305, USA
7
Eidex Z, Ding Y, Wang J, Abouei E, Qiu RL, Liu T, Wang T, Yang X. Deep Learning in MRI-guided Radiation Therapy: A Systematic Review. arXiv 2023:arXiv:2303.11378v2. [PMID: 36994167 PMCID: PMC10055493]
Abstract
MRI-guided radiation therapy (MRgRT) offers a precise, adaptive approach to treatment planning. Deep learning applications which augment the capabilities of MRgRT are systematically reviewed, with emphasis placed on underlying methods. Studies are further categorized into the areas of segmentation, synthesis, radiomics, and real-time MRI. Finally, clinical implications, current challenges, and future directions are discussed.
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Yifu Ding
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Jing Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Richard L.J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
8
Ozkara BB, Federau C, Dagher SA, Pattnaik D, Ucisik FE, Chen MM, Wintermark M. Correlating volumetric and linear measurements of brain metastases on MRI scans using intelligent automation software: a preliminary study. J Neurooncol 2023; 162:363-371. [PMID: 36988746 DOI: 10.1007/s11060-023-04297-4]
Abstract
PURPOSE The Response Assessment in Neuro-Oncology Brain Metastases (RANO-BM) working group proposed a guide for assessing treatment response in BMs using the longest diameter; however, despite recognizing that many patients with BMs have sub-centimeter lesions, the group deemed these lesions unmeasurable due to issues with repeatability and interpretation. In light of the RANO-BM recommendations, we aimed to correlate linear and volumetric measurements in sub-centimeter BMs on contrast-enhanced MRI using intelligent automation software. METHODS In this retrospective study, patients with BMs scanned with MRI between January 1, 2018, and December 31, 2021, were screened. Inclusion criteria were: (1) at least one sub-centimeter BM with a longest diameter reported in integer millimeters in the MRI report; (2) age of at least 18 years; (3) an available pre-treatment three-dimensional T1-weighted spoiled gradient-echo MRI scan. Screening was terminated when there were 20 lesions in each group. Lesion volumes were measured by two readers with the intelligent automation software Jazz (AI Medical, Zollikon, Switzerland). The Kruskal-Wallis test was used to compare volumetric differences. RESULTS Our study included 180 patients. Agreement between the two readers' volumetric measurements was excellent. The volumes of the following diameter groups were not significantly different: 1-2 mm, 1-3 mm, 1-4 mm, 2-3 mm, 2-4 mm, 3-4 mm, 3-5 mm, 4-5 mm, 5-6 mm, 5-7 mm, 6-7 mm, 6-8 mm, 6-9 mm, 7-8 mm, 7-9 mm, 8-9 mm. CONCLUSION Our findings indicate that the longest diameter of a lesion may not accurately represent its volume. Additional research is required to determine which method is superior for measuring radiologic response to therapy and which parameter correlates best with clinical improvement or deterioration.
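The mismatch between a linear diameter and a volume can be made concrete under a spherical-lesion assumption (real lesions are irregular, so this is only an illustrative model, not the study's measurement method): with V = πd³/6, doubling the diameter multiplies the volume eightfold.

```python
import math

def sphere_volume_cc(diameter_mm):
    """Volume in cubic centimeters of a sphere with the given diameter in millimeters."""
    return math.pi * diameter_mm ** 3 / 6.0 / 1000.0  # 1 c.c. = 1000 mm^3

v10 = sphere_volume_cc(10.0)  # a 10 mm lesion: about 0.52 c.c.
v5 = sphere_volume_cc(5.0)    # a 5 mm lesion: about 0.065 c.c.
```

This cubic scaling is one reason a one-dimensional measurement is a coarse proxy for volumetric change.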
Affiliation(s)
- Burak B Ozkara
- Department of Neuroradiology, MD Anderson Cancer Center, 1400 Pressler Street, Houston, TX, 77030, USA
- Christian Federau
- Faculty of Medicine, University of Zurich, Pestalozzistrasse 3, Zurich, CH-8032, Switzerland
- Samir A Dagher
- Department of Neuroradiology, MD Anderson Cancer Center, 1400 Pressler Street, Houston, TX, 77030, USA
- Debajani Pattnaik
- Department of Neuroradiology, MD Anderson Cancer Center, 1400 Pressler Street, Houston, TX, 77030, USA
- F Eymen Ucisik
- Department of Neuroradiology, MD Anderson Cancer Center, 1400 Pressler Street, Houston, TX, 77030, USA
- Melissa M Chen
- Department of Neuroradiology, MD Anderson Cancer Center, 1400 Pressler Street, Houston, TX, 77030, USA
- Max Wintermark
- Department of Neuroradiology, MD Anderson Cancer Center, 1400 Pressler Street, Houston, TX, 77030, USA
9
Yu H, Zhang Z, Xia W, Liu Y, Liu L, Luo W, Zhou J, Zhang Y. DeSeg: auto detector-based segmentation for brain metastases. Phys Med Biol 2023; 68. [PMID: 36535028 DOI: 10.1088/1361-6560/acace7]
Abstract
Delineation of brain metastases (BMs) is a paramount step in stereotactic radiosurgery treatment. Clinical practice places specific expectations on BM auto-delineation: the method should avoid missing small lesions and yield accurate contours for large lesions. In this study, we propose a novel coarse-to-fine framework, named detector-based segmentation (DeSeg), that incorporates object-level detection into pixel-wise segmentation to meet this clinical demand. DeSeg consists of three components: a center-point-guided single-shot detector to localize potential lesion regions, a multi-head U-Net segmentation model to refine contours, and a data cascade unit to connect both tasks smoothly. Performance on tiny lesions is measured by object-based sensitivity and positive predictive value (PPV), while that on large lesions is quantified by the Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and 95% Hausdorff distance (HD95). Computational complexity is also considered to study the method's potential for real-time processing. This study retrospectively collected 240 BM patients with gadolinium contrast-enhanced T1-weighted magnetic resonance imaging (T1c-MRI), randomly split into training, validation, and testing datasets (192, 24, and 24 scans, respectively). Lesions in the testing dataset were further divided into two groups by volume (small, S: ≤1.5 cc, N = 88; large, L: >1.5 cc, N = 15). On average, DeSeg yielded a sensitivity of 0.91 and a PPV of 0.77 on the S group, and a DSC of 0.86, an ASSD of 0.76 mm, and an HD95 of 2.31 mm on the L group. The results indicate that DeSeg achieves leading sensitivity and PPV for tiny lesions as well as strong segmentation metrics for large ones. In our clinical validation, DeSeg showed competitive segmentation performance while maintaining faster processing speed compared with existing 3D models.
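The object-based sensitivity and PPV used for tiny lesions reduce to simple count ratios over matched detections. A minimal sketch; the function name and the example counts below are hypothetical, not the paper's data pipeline:

```python
def detection_metrics(n_truth, n_pred, n_matched):
    """Object-based sensitivity (matched / ground-truth lesions) and
    positive predictive value (matched / predicted lesions)."""
    sensitivity = n_matched / n_truth if n_truth else 0.0
    ppv = n_matched / n_pred if n_pred else 0.0
    return sensitivity, ppv

# Hypothetical counts for a small-lesion group.
sens, ppv = detection_metrics(n_truth=88, n_pred=104, n_matched=80)
```

Sensitivity penalizes missed lesions; PPV penalizes spurious detections, which is why both are reported together for tiny lesions.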
Affiliation(s)
- Hui Yu
- College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
- Zhongzhou Zhang
- College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
- Wenjun Xia
- College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
- Yan Liu
- College of Electrical Engineering, Sichuan University, Chengdu, 610065, People's Republic of China
- Lunxin Liu
- Department of Neurosurgery, West China Hospital of Sichuan University, Chengdu, 610044, People's Republic of China
- Wuman Luo
- School of Applied Sciences, Macao Polytechnic University, Macao, 999078, People's Republic of China
- Jiliu Zhou
- College of Computer Science, Sichuan University, Chengdu, 610065, People's Republic of China
- Yi Zhang
- School of Cyber Science and Engineering, Sichuan University, Chengdu, 610065, People's Republic of China
10
Yang Z, Chen M, Kazemimoghadam M, Ma L, Stojadinovic S, Wardak Z, Timmerman R, Dan T, Lu W, Gu X. Ensemble learning for glioma patients overall survival prediction using pre-operative MRIs. Phys Med Biol 2022; 67. [PMID: 36384039 PMCID: PMC9990877 DOI: 10.1088/1361-6560/aca375]
Abstract
Objective: Gliomas are the most common primary brain tumors. Approximately 70% of glioma patients diagnosed with glioblastoma have an average overall survival (OS) of only ∼16 months. Early survival prediction is essential for treatment decision-making in glioma patients. Here we propose an ensemble learning approach to predict the post-operative OS of glioma patients using only pre-operative MRIs. Approach: Our dataset was from the Medical Image Computing and Computer Assisted Intervention Brain Tumor Segmentation challenge 2020, which consists of multimodal pre-operative MRI scans of 235 glioma patients with survival days recorded. The backbone of our approach was a Siamese network consisting of twinned ResNet-based feature extractors followed by a 3-layer classifier. During training, the feature extractors explored intra- and inter-class traits by minimizing the contrastive loss of randomly paired 2D pre-operative MRIs, and the classifier utilized the extracted features to generate labels with cost defined by cross-entropy loss. During testing, the extracted features were also used to define the distance between each test sample and a reference composed of the training data, generating an additional predictor via K-NN classification. The final label was the ensemble classification from both the Siamese model and the K-NN model. Main results: Our approach classifies glioma patients into 3 OS classes: long-survivors (>15 months), mid-survivors (between 10 and 15 months), and short-survivors (<10 months). Performance is assessed by the accuracy (ACC) and the area under the curve (AUC) of the 3-class classification. The final result achieved an ACC of 65.22% and an AUC of 0.81. Significance: Our Siamese network based ensemble learning approach demonstrated a promising ability to mine discriminative features with minimal manual processing and generalization requirements. This prediction strategy can potentially be applied to assist timely clinical decision-making.
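The K-NN arm of the ensemble (a majority vote among the nearest training samples in the learned feature space) can be sketched in plain Python. The toy 2D features and labels below stand in for the Siamese embeddings and are assumptions for illustration, not the authors' implementation:

```python
from collections import Counter

def knn_predict(train_feats, train_labels, query, k=3):
    """Majority vote among the k training samples nearest to the query (squared Euclidean)."""
    ranked = sorted(
        (sum((a - b) ** 2 for a, b in zip(feat, query)), label)
        for feat, label in zip(train_feats, train_labels)
    )
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Toy feature space standing in for the learned Siamese embeddings.
feats = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (6.0, 5.0)]
labels = ["short", "short", "long", "long"]
prediction = knn_predict(feats, labels, (0.2, 0.5), k=3)
```

In the paper this vote is then combined with the Siamese classifier's output to form the final ensemble label.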
Affiliation(s)
- Zi Yang
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Mingli Chen
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Mahdieh Kazemimoghadam
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Lin Ma
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Strahinja Stojadinovic
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Zabi Wardak
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Robert Timmerman
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Tu Dan
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Weiguo Lu
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Xuejun Gu
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Department of Radiation Oncology, Stanford University, Palo Alto, CA 94305, USA
11
Mohammadi A, Mirza-Aghazadeh-Attari M, Faeghi F, Homayoun H, Abolghasemi J, Vogl TJ, Bureau NJ, Bakhshandeh M, Acharya RU, Abbasian Ardakani A. Tumor Microenvironment, Radiology, and Artificial Intelligence: Should We Consider Tumor Periphery? J Ultrasound Med 2022; 41:3079-3090. [PMID: 36000351 DOI: 10.1002/jum.16086] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/12/2022] [Revised: 08/02/2022] [Accepted: 08/05/2022] [Indexed: 06/15/2023]
Abstract
OBJECTIVES The tumor microenvironment (TME) consists of cellular and noncellular components that enable the tumor to interact with its surroundings; it plays an important role in tumor progression and in how the immune system reacts to the malignancy. In the present study, we investigated the diagnostic potential of the TME in differentiating benign and malignant lesions using image quantification and machine learning. METHODS A total of 229 breast lesions and 220 cervical lymph nodes were included in the study. A group of expert radiologists first performed medical imaging and segmented the lesions, after which a rectangular mask encompassing the entire contour was drawn. The mask was extended along each axis by up to 50%, and 29 radiomics features were extracted from each mask. Radiomics features that showed a significant difference for each contour were used to develop a support vector machine (SVM) classifier for benign and malignant lesions in breast and lymph node images separately. RESULTS Single radiomics features extracted from the extended contours outperformed those from the radiologists' contours for both breast and lymph node lesions. Furthermore, when fed into the SVM model, the extended models also outperformed the radiologists' contours, achieving areas under the receiver operating characteristic curve of 0.887 and 0.970 in differentiating breast and lymph node lesions, respectively. CONCLUSIONS Our results provide convincing evidence of the importance of the tumor periphery and the TME in medical imaging diagnosis. We propose that the immediate tumor periphery be considered when differentiating benign and malignant lesions in image quantification studies.
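The mask-extension step — growing the lesion's bounding box along each axis so the radiomics features also capture the tumor periphery — can be sketched in a few lines. This is a hedged illustration, not the study's code: the function name is hypothetical, and splitting the 50% growth evenly between the two sides of each axis is an assumption (the abstract only states the total extension per axis).

```python
import numpy as np

def extended_mask(seg, extend=0.5):
    """Bounding box of a binary segmentation, grown by `extend`
    (here 50%) along each axis and clipped to the image grid.
    Returns (row_min, row_max, col_min, col_max), inclusive."""
    rows = np.any(seg, axis=1)
    cols = np.any(seg, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    dh = int(round((r1 - r0 + 1) * extend / 2))  # half the growth per side
    dw = int(round((c1 - c0 + 1) * extend / 2))
    H, W = seg.shape
    return (int(max(r0 - dh, 0)), int(min(r1 + dh, H - 1)),
            int(max(c0 - dw, 0)), int(min(c1 + dw, W - 1)))

seg = np.zeros((100, 100), dtype=bool)
seg[40:60, 30:50] = True           # a 20x20 lesion
print(extended_mask(seg))          # → (35, 64, 25, 54): box grown by 50%
```

Radiomics features would then be computed over the extended window rather than the radiologist's contour alone, which is the comparison the study reports.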
Affiliation(s)
- Afshin Mohammadi
- Department of Radiology, Faculty of Medicine, Urmia University of Medical Science, Urmia, Iran
- Fariborz Faeghi
- Department of Radiology Technology, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Hasan Homayoun
- Urology Research Center, Tehran University of Medical Sciences, Tehran, Iran
- Jamileh Abolghasemi
- Department of Biostatistics, School of Public Health, Iran University of Medical Sciences, Tehran, Iran
- Thomas J Vogl
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Nathalie J Bureau
- Department of Radiology, Centre Hospitalier de l'Université de Montréal, Montreal, Canada
- Mohsen Bakhshandeh
- Department of Radiology Technology, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Rajendra U Acharya
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore
- Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
- Ali Abbasian Ardakani
- Department of Radiology Technology, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran