1. Cornelissen S, Schouten SM, Langenhuizen PPJH, Lie ST, Kunst HPM, de With PHN, Verheul JB. Defining tumor growth in vestibular schwannomas: a volumetric inter-observer variability study in contrast-enhanced T1-weighted MRI. Neuroradiology 2024; 66:2033-2042. [PMID: 38980343] [DOI: 10.1007/s00234-024-03416-w]
Abstract
PURPOSE For patients with vestibular schwannomas (VS), a conservative observational approach is increasingly used, making accurate and reliable volumetric tumor monitoring important. Currently, a volumetric cutoff of a 20% increase in tumor volume is widely used to define tumor growth in VS. This study investigates how the limits of agreement (LoA) for volumetric measurements of VS depend on tumor volume, by means of an inter-observer study. METHODS This retrospective study included 100 VS patients who underwent contrast-enhanced T1-weighted MRI. Five observers volumetrically annotated the images. Observer agreement and reliability were measured using the LoA, estimated using the limits of agreement with the mean (LOAM) method, and the intraclass correlation coefficient (ICC). RESULTS The 100 patients had a median average tumor volume of 903 mm³ (IQR: 193-3101). Patients were divided into four volumetric size categories based on tumor volume quartiles. The smallest tumor volume quartile showed a LOAM relative to the mean of 26.8% (95% CI: 23.7-33.6), whereas for the largest tumor volume quartile this figure was 7.3% (95% CI: 6.5-9.7), and 4.8% (95% CI: 4.2-6.2) when peritumoral cysts were excluded. CONCLUSION Agreement limits in the volumetric annotation of VS are affected by tumor volume, since the LoA improves with increasing tumor volume. As a result, for tumors larger than 200 mm³, growth can reliably be detected at an earlier stage than the currently widely used 20% cutoff allows. For very small tumors, however, growth should be assessed with wider agreement limits than previously thought.
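The volume-dependent agreement the authors report can be illustrated with a simplified limits-of-agreement computation. The sketch below is a minimal approximation assuming a one-way model with no systematic observer bias (the published LOAM estimator is more involved); the data are synthetic:

```python
import numpy as np

def loam_percent(volumes):
    """Half-width of the 95% limits of agreement with the mean,
    as a percent of the grand mean volume.

    volumes: (n_subjects, n_observers) array of volumetric
    measurements. Simplified one-way model: pooled within-subject
    variance of a single observer, ignoring systematic observer bias.
    """
    x = np.asarray(volumes, dtype=float)
    n, m = x.shape
    dev = x - x.mean(axis=1, keepdims=True)    # deviation from subject mean
    sigma2 = (dev ** 2).sum() / (n * (m - 1))  # unbiased pooled variance
    return 100.0 * 1.96 * np.sqrt(sigma2) / x.mean()

# Synthetic illustration: small tumors show wider relative limits
rng = np.random.default_rng(0)
small = 150 + rng.normal(0, 25, size=(25, 5))    # ~150 mm^3 tumors
large = 5000 + rng.normal(0, 150, size=(25, 5))  # ~5000 mm^3 tumors
print(f"small: ±{loam_percent(small):.1f}%, large: ±{loam_percent(large):.1f}%")
```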
Affiliation(s)
- Stefan Cornelissen: Gamma Knife Center, Department of Neurosurgery, Elisabeth-TweeSteden Hospital, Tilburg, The Netherlands; Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Sammy M Schouten: Gamma Knife Center, Department of Neurosurgery, Elisabeth-TweeSteden Hospital, Tilburg, The Netherlands; Department of Otolaryngology, Radboud University Medical Center, Nijmegen, The Netherlands; Department of Otolaryngology, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Patrick P J H Langenhuizen: Gamma Knife Center, Department of Neurosurgery, Elisabeth-TweeSteden Hospital, Tilburg, The Netherlands; Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Suan Te Lie: Gamma Knife Center, Department of Neurosurgery, Elisabeth-TweeSteden Hospital, Tilburg, The Netherlands
- Henricus P M Kunst: Department of Otolaryngology, Radboud University Medical Center, Nijmegen, The Netherlands; Department of Otolaryngology, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Peter H N de With: Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Jeroen B Verheul: Gamma Knife Center, Department of Neurosurgery, Elisabeth-TweeSteden Hospital, Tilburg, The Netherlands
2. Spinos D, Martinos A, Petsiou DP, Mistry N, Garas G. Artificial Intelligence in Temporal Bone Imaging: A Systematic Review. Laryngoscope 2024. [PMID: 39352072] [DOI: 10.1002/lary.31809]
Abstract
OBJECTIVE The human temporal bone comprises more than 30 identifiable anatomical components. With the demand for precise image interpretation in this complex region, the utilization of artificial intelligence (AI) applications is steadily increasing. This systematic review aims to highlight the current role of AI in temporal bone imaging. DATA SOURCES A systematic review of English-language publications, searching MEDLINE (PubMed), Cochrane Library, and EMBASE. REVIEW METHODS The search algorithm employed key terms such as 'artificial intelligence,' 'machine learning,' 'deep learning,' 'neural network,' 'temporal bone,' and 'vestibular schwannoma.' Additionally, manual retrieval was conducted to capture any studies potentially missed in the initial search. All abstracts and full texts were screened based on the inclusion and exclusion criteria. RESULTS A total of 72 studies were included; 95.8% were retrospective and 88.9% were based on internal databases. Approximately two-thirds involved an AI-to-human comparison. Computed tomography (CT) was the imaging modality in 54.2% of the studies, with vestibular schwannoma (VS) being the most frequently studied condition (37.5%). Fifty-eight of the 72 articles employed neural networks, with 72.2% using various types of convolutional neural network models. Quality assessment of the included publications yielded a mean score of 13.6 ± 2.5 on a 20-point scale based on the CONSORT-AI extension. CONCLUSION Current research data highlight AI's potential to enhance diagnostic accuracy with faster results and fewer performance errors compared with clinicians, thus improving patient care. However, the shortcomings of the existing research, often marked by heterogeneity and variable quality, underscore the need for more standardized methodological approaches to ensure the consistency and reliability of future data. LEVEL OF EVIDENCE NA.
Affiliation(s)
- Dimitrios Spinos: South Warwickshire NHS Foundation Trust, Warwick, UK; University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Anastasios Martinos: National and Kapodistrian University of Athens School of Medicine, Athens, Greece
- Nina Mistry: Gloucestershire Hospitals NHS Foundation Trust, ENT, Head and Neck Surgery, Gloucester, UK
- George Garas: Surgical Innovation Centre, Department of Surgery and Cancer, Imperial College London, St. Mary's Hospital, London, UK; Athens Medical Center, Marousi & Psychiko Clinic, Athens, Greece
3. Łajczak P, Matyja J, Jóźwik K, Nawrat Z. Accuracy of vestibular schwannoma segmentation using deep learning models - a systematic review & meta-analysis. Neuroradiology 2024. [PMID: 39179652] [DOI: 10.1007/s00234-024-03449-1]
Abstract
Vestibular schwannoma (VS) is a rare tumor with varied incidence rates, predominantly affecting the 60-69 age group. In the era of artificial intelligence (AI), deep learning (DL) algorithms show promise in automating diagnosis. However, a knowledge gap exists in the automated segmentation of VS using DL. To address this gap, this meta-analysis aims to provide insights into the current state of DL algorithms applied to MR images of VS. METHODOLOGY Following the 2020 PRISMA guidelines, a search across four databases was conducted. Inclusion criteria focused on articles using DL for VS MR image segmentation. The primary metric was the Dice score, supplemented by relative volume error (RVE) and average symmetric surface distance (ASSD). RESULTS The search process identified 752 articles, leading to 11 studies for meta-analysis. A QUADAS-2 analysis revealed varying biases. The overall Dice score for 56 models was 0.89 (CI: 0.88-0.90), with high heterogeneity (I² = 95.9%). Subgroup analyses based on DL architecture, MRI inputs, and testing set sizes revealed performance variations: 2.5D DL networks demonstrated efficacy comparable to 3D networks, and analyses of imaging inputs highlighted the superiority of contrast-enhanced T1-weighted imaging and mixed MRI inputs. DISCUSSION This study fills a gap in the systematic review of automated VS segmentation using DL techniques. Despite promising results, limitations include publication bias and high heterogeneity. Future research should focus on standardized designs, larger testing sets, and addressing biases to obtain more reliable results. DL shows promising efficacy in VS segmentation, but further validation and standardization are needed. CONCLUSION In conclusion, this meta-analysis provides a comprehensive review of the current landscape of automated VS segmentation using DL. The high Dice score indicates promising agreement in segmentation, yet challenges such as bias and heterogeneity must be addressed in future research.
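For readers less familiar with the pooled metric, the Dice similarity coefficient measures voxel overlap between a predicted and a reference mask. A minimal sketch (toy masks; not tied to any study in the meta-analysis):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient, 2|A∩B| / (|A|+|B|), for binary masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# Toy 2D masks: an offset square overlapping a reference square
truth = np.zeros((64, 64), bool); truth[20:40, 20:40] = True
pred = np.zeros_like(truth); pred[22:42, 20:40] = True
print(f"Dice = {dice_score(pred, truth):.3f}")  # 0.900 for this overlap
```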
Affiliation(s)
- Paweł Łajczak: Department of Biophysics, Faculty of Medical Sciences in Zabrze, Medical University of Silesia in Katowice, Jordana 18, 40-043 Zabrze, Poland
- Jakub Matyja: TU Delft, Mekelweg 5, 2628 CD Delft, Netherlands
- Kamil Jóźwik: Department of Biophysics, Faculty of Medical Sciences in Zabrze, Medical University of Silesia in Katowice, Jordana 18, 40-043 Zabrze, Poland
- Zbigniew Nawrat: Department of Biophysics, Faculty of Medical Sciences in Zabrze, Medical University of Silesia in Katowice, Jordana 18, 40-043 Zabrze, Poland; Foundation of Cardiac Surgery Development, 41-808 Zabrze, Poland
4. Nernekli K, Persad AR, Hori YS, Yener U, Celtikci E, Sahin MC, Sozer A, Sozer B, Park DJ, Chang SD. Automatic Segmentation of Vestibular Schwannomas: A Systematic Review. World Neurosurg 2024; 188:35-44. [PMID: 38685346] [DOI: 10.1016/j.wneu.2024.04.145]
Abstract
BACKGROUND Vestibular schwannomas (VSs) are benign tumors often monitored over time, with measurement techniques for assessing growth rates subject to significant interobserver variability. Automatic segmentation of these tumors could provide a more reliable and efficient means of tracking their progression, especially given the irregular shape and growth patterns of VS. METHODS Various studies and segmentation techniques employing different convolutional neural network architectures and models, such as U-Net and convolutional-attention transformer segmentation, were analyzed. Models were evaluated based on their performance across diverse datasets, and challenges, including domain shift and data sharing, were scrutinized. RESULTS Automatic segmentation methods offer a promising alternative to conventional measurement techniques, with potential benefits in precision and efficiency. However, these methods are not without challenges, notably the "domain shift" that occurs when models trained on specific datasets underperform when applied to different datasets. Techniques such as domain adaptation, domain generalization, and data diversity were discussed as potential solutions. CONCLUSIONS Accurate measurement of VS growth is a complex process, with volumetric analysis currently appearing more reliable than linear measurements. Automatic segmentation, despite its challenges, offers a promising avenue for future investigation. Robust, well-generalized models could improve the efficiency of tracking tumor growth, thereby augmenting clinical decision-making. Further work is needed to develop more robust models, address the domain shift, and enable secure data sharing for wider applicability.
Affiliation(s)
- Kerem Nernekli: Department of Radiology, Stanford University School of Medicine, Stanford, California, USA
- Amit R Persad: Department of Neurosurgery, Stanford University School of Medicine, Stanford, California, USA
- Yusuke S Hori: Department of Neurosurgery, Stanford University School of Medicine, Stanford, California, USA
- Ulas Yener: Department of Neurosurgery, Stanford University School of Medicine, Stanford, California, USA
- Emrah Celtikci: Department of Neurosurgery, Gazi University, Ankara, Turkey
- Alperen Sozer: Department of Neurosurgery, Gazi University, Ankara, Turkey
- Batuhan Sozer: Department of Neurosurgery, Gazi University, Ankara, Turkey
- David J Park: Department of Neurosurgery, Stanford University School of Medicine, Stanford, California, USA
- Steven D Chang: Department of Neurosurgery, Stanford University School of Medicine, Stanford, California, USA
5. Alsaleh H. The impact of artificial intelligence in the diagnosis and management of acoustic neuroma: A systematic review. Technol Health Care 2024. [PMID: 39093085] [DOI: 10.3233/thc-232043]
Abstract
BACKGROUND Schwann cell sheaths are the source of benign, slowly expanding tumours known as acoustic neuromas (AN). The diagnostic and treatment approaches for AN must be patient-centered, taking into account unique factors and preferences. OBJECTIVE The purpose of this study is to investigate how machine learning and artificial intelligence (AI) can revolutionise AN management and diagnostic procedures. METHODS A thorough systematic review of peer-reviewed material from public databases was carried out. Publications on AN, AI, and deep learning up until December 2023 were included in the review. RESULTS Based on our analysis, AI models have been developed successfully for volume estimation, segmentation, tumour type differentiation, and separation from healthy tissues. Developments in computational biology imply that AI can be used effectively in a variety of fields, including quality-of-life evaluations, monitoring, robotic-assisted surgery, feature extraction, radiomics, image analysis, clinical decision support systems, and treatment planning. CONCLUSION For better AN diagnosis and treatment, the variety of imaging modalities requires the development of robust, flexible AI models that can handle heterogeneous imaging data. Subsequent investigations ought to concentrate on reproducing findings in order to standardise AI approaches, which could transform their use in medical environments.
6. de Araújo AS, Pinho MS, Marques da Silva AM, Fiorentini LF, Becker J. A 2.5D Self-Training Strategy for Carotid Artery Segmentation in T1-Weighted Brain Magnetic Resonance Images. J Imaging 2024; 10:161. [PMID: 39057732] [PMCID: PMC11278143] [DOI: 10.3390/jimaging10070161]
Abstract
Precise annotations for large medical image datasets can be time-consuming. Additionally, when dealing with volumetric regions of interest, it is typical to apply segmentation techniques on 2D slices, compromising important information for accurately segmenting 3D structures. This study presents a deep learning pipeline that simultaneously tackles both challenges. Firstly, to streamline the annotation process, we employ a semi-automatic segmentation approach using bounding boxes as masks, which is less time-consuming than pixel-level delineation. Subsequently, recursive self-training is utilized to enhance annotation quality. Finally, a 2.5D segmentation technique is adopted, wherein a slice of a volumetric image is segmented using a pseudo-RGB image. The pipeline was applied to segment the carotid artery tree in T1-weighted brain magnetic resonance images. Utilizing 42 volumetric non-contrast T1-weighted brain scans from four datasets, we delineated bounding boxes around the carotid arteries in the axial slices. Pseudo-RGB images were generated from these slices, and recursive segmentation was conducted using a Res-Unet-based neural network architecture. The model's performance was tested on a separate dataset, with ground truth annotations provided by a radiologist. After recursive training, we achieved an Intersection over Union (IoU) score of (0.68 ± 0.08) on the unseen dataset, demonstrating commendable qualitative results.
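The 2.5D strategy described above gives a 2D network some through-plane context by stacking each slice with its neighbors into a pseudo-RGB image. A minimal sketch of that construction, assuming a (depth, height, width) volume (array names and shapes are illustrative):

```python
import numpy as np

def to_pseudo_rgb(volume, k):
    """Stack axial slice k with its immediate neighbors into a
    3-channel image, clamping at the volume boundaries, so a 2D
    network receives some through-plane context."""
    lo, hi = max(k - 1, 0), min(k + 1, volume.shape[0] - 1)
    return np.stack([volume[lo], volume[k], volume[hi]], axis=-1)

volume = np.random.rand(42, 256, 256).astype(np.float32)  # toy (D, H, W) scan
x = to_pseudo_rgb(volume, 10)  # (256, 256, 3), ready for a 2D CNN
```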
Affiliation(s)
- Adriel Silva de Araújo: School of Technology, Pontifícia Universidade Católica do Rio Grande do Sul, Porto Alegre 90619-900, Brazil
- Márcio Sarroglia Pinho: School of Technology, Pontifícia Universidade Católica do Rio Grande do Sul, Porto Alegre 90619-900, Brazil
- Luis Felipe Fiorentini: Centro de Diagnóstico por Imagem, Santa Casa de Misericórdia de Porto Alegre, Porto Alegre 90020-090, Brazil; Grupo Hospitalar Conceição, Porto Alegre 91350-200, Brazil
- Jefferson Becker: Hospital São Lucas, Pontifícia Universidade Católica do Rio Grande do Sul, Porto Alegre 90610-000, Brazil; Brain Institute, Pontifícia Universidade Católica do Rio Grande do Sul, Porto Alegre 90619-900, Brazil
7. Heman-Ackah SM, Blue R, Quimby AE, Abdallah H, Sweeney EM, Chauhan D, Hwa T, Brant J, Ruckenstein MJ, Bigelow DC, Jackson C, Zenonos G, Gardner P, Briggs SE, Cohen Y, Lee JYK. A multi-institutional machine learning algorithm for prognosticating facial nerve injury following microsurgical resection of vestibular schwannoma. Sci Rep 2024; 14:12963. [PMID: 38839778] [PMCID: PMC11153496] [DOI: 10.1038/s41598-024-63161-1]
Abstract
Vestibular schwannomas (VS) are the most common tumor of the skull base with available treatment options that carry a risk of iatrogenic injury to the facial nerve, which can significantly impact patients' quality of life. As facial nerve outcomes remain challenging to prognosticate, we endeavored to utilize machine learning to decipher predictive factors relevant to facial nerve outcomes following microsurgical resection of VS. A database of patient-, tumor- and surgery-specific features was constructed via retrospective chart review of 242 consecutive patients who underwent microsurgical resection of VS over a 7-year study period. This database was then used to train non-linear supervised machine learning classifiers to predict facial nerve preservation, defined as House-Brackmann (HB) I vs. facial nerve injury, defined as HB II-VI, as determined at 6-month outpatient follow-up. A random forest algorithm demonstrated 90.5% accuracy, 90% sensitivity and 90% specificity in facial nerve injury prognostication. A random variable (rv) was generated by randomly sampling a Gaussian distribution and used as a benchmark to compare the predictiveness of other features. This analysis revealed age, body mass index (BMI), case length and the tumor dimension representing tumor growth towards the brainstem as prognosticators of facial nerve injury. When validated via prospective assessment of facial nerve injury risk, this model demonstrated 84% accuracy. Here, we describe the development of a machine learning algorithm to predict the likelihood of facial nerve injury following microsurgical resection of VS. In addition to serving as a clinically applicable tool, this highlights the potential of machine learning to reveal non-linear relationships between variables which may have clinical value in prognostication of outcomes for high-risk surgical procedures.
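The random-variable benchmark the authors describe is straightforward to reproduce: append a pure-noise feature and keep only predictors whose importance exceeds it. A hedged scikit-learn sketch with synthetic data (the feature names echo the abstract, but the values and labels are fabricated for illustration only):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 242  # cohort size from the abstract; the data below are synthetic
X = np.column_stack([
    rng.normal(55, 15, n),   # age (years)
    rng.normal(28, 5, n),    # BMI
    rng.normal(420, 90, n),  # case length (minutes)
    rng.normal(18, 8, n),    # tumor dimension toward brainstem (mm)
    rng.normal(0, 1, n),     # rv: pure-noise benchmark feature
])
y = rng.integers(0, 2, n)    # HB I vs. HB II-VI at 6 months (synthetic)

model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
names = ["age", "bmi", "case_length", "brainstem_dim", "rv"]
rv_imp = model.feature_importances_[-1]
for name, imp in zip(names, model.feature_importances_):
    print(f"{name:14s} importance={imp:.3f} {'> rv' if imp > rv_imp else '<= rv'}")
```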
Affiliation(s)
- Sabrina M Heman-Ackah: Department of Neurosurgery, Perelman Center for Advanced Medicine, University of Pennsylvania, 3400 Civic Center Boulevard, 15th Floor, Philadelphia, PA, 19104, USA; Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Rachel Blue: Department of Neurosurgery, Perelman Center for Advanced Medicine, University of Pennsylvania, 3400 Civic Center Boulevard, 15th Floor, Philadelphia, PA, 19104, USA
- Alexandra E Quimby: Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA; Department of Otolaryngology and Communication Sciences, SUNY Upstate Medical University Hospital, Syracuse, NY, USA
- Hussein Abdallah: School of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
- Elizabeth M Sweeney: Department of Biostatistics, Epidemiology and Informatics, University of Pennsylvania, Philadelphia, PA, USA
- Daksh Chauhan: University of Pennsylvania, Perelman School of Medicine, Philadelphia, PA, USA
- Tiffany Hwa: Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Jason Brant: Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA; Corporal Michael J. Crescenz VAMC, Philadelphia, PA, USA
- Michael J Ruckenstein: Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Douglas C Bigelow: Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Christina Jackson: Department of Neurosurgery, Perelman Center for Advanced Medicine, University of Pennsylvania, 3400 Civic Center Boulevard, 15th Floor, Philadelphia, PA, 19104, USA
- Georgios Zenonos: Center for Cranial Base Surgery, University of Pittsburgh, Pittsburgh, PA, USA
- Paul Gardner: Center for Cranial Base Surgery, University of Pittsburgh, Pittsburgh, PA, USA
- Selena E Briggs: Department of Otolaryngology, MedStar Washington Hospital Center, Washington, DC, USA; Department of Otolaryngology, Georgetown University, Washington, DC, USA
- Yale Cohen: Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA; Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA; University of Pennsylvania, Perelman School of Medicine, Philadelphia, PA, USA
- John Y K Lee: Department of Neurosurgery, Perelman Center for Advanced Medicine, University of Pennsylvania, 3400 Civic Center Boulevard, 15th Floor, Philadelphia, PA, 19104, USA; Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
8. Xu X, Du L, Yin D. Dual-branch feature fusion S3D V-Net network for lung nodules segmentation. J Appl Clin Med Phys 2024; 25:e14331. [PMID: 38478388] [PMCID: PMC11163502] [DOI: 10.1002/acm2.14331]
Abstract
BACKGROUND Accurate segmentation of lung nodules can help doctors obtain more accurate results and protocols in early lung cancer diagnosis and treatment planning, so that patients can be detected and treated at an early stage and the mortality rate of lung cancer can be reduced. PURPOSE Improvements in lung nodule segmentation accuracy have been limited by the heterogeneous appearance of nodules in the lungs, the imbalance between segmentation targets and background pixels, and other factors. We propose a new 2.5D network model for lung nodule segmentation. This model improves the extraction of edge information of lung nodules and fuses intra-slice and inter-slice features, making good use of the three-dimensional structural information of lung nodules and improving segmentation accuracy more effectively. METHODS Our approach builds on a typical encoding-decoding network structure. The improved model captures the features of nodules in both 3D and 2D CT images, complements the feature information of the segmentation target and enhances the texture features at the edges of the pulmonary nodules through the dual-branch feature fusion module (DFFM) and the reverse attention context module (RACM), and employs central pooling instead of the maximal pooling operation to preserve the features around the target and eliminate edge-irrelevant features, further improving segmentation performance. RESULTS We evaluated this method on 1186 nodules from the LUNA16 dataset; averaging the results of ten-fold cross-validation, the proposed method achieved a mean dice similarity coefficient (mDSC) of 84.57% and a mean overlapping error (mOE) of 18.73%, with an average processing time of about 2.07 s per case. Moreover, our results were compared with inter-radiologist agreement on the LUNA16 dataset, and the average difference was 0.74%. CONCLUSION The experimental results show that our method improves the accuracy of pulmonary nodule segmentation and also takes less time than 3D segmentation methods.
Affiliation(s)
- Xiaoru Xu: School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong, People's Republic of China; Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science & Engineering, Zigong, People's Republic of China
- Lingyan Du: School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong, People's Republic of China; Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science & Engineering, Zigong, People's Republic of China
- Dongsheng Yin: School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong, People's Republic of China; Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science & Engineering, Zigong, People's Republic of China
9. Kujawa A, Dorent R, Connor S, Thomson S, Ivory M, Vahedi A, Guilhem E, Wijethilake N, Bradford R, Kitchen N, Bisdas S, Ourselin S, Vercauteren T, Shapey J. Deep learning for automatic segmentation of vestibular schwannoma: a retrospective study from multi-center routine MRI. Front Comput Neurosci 2024; 18:1365727. [PMID: 38784680] [PMCID: PMC11111906] [DOI: 10.3389/fncom.2024.1365727]
Abstract
Automatic segmentation of vestibular schwannoma (VS) from routine clinical MRI has potential to improve clinical workflow, facilitate treatment decisions, and assist patient management. Previous work demonstrated reliable automatic segmentation performance on datasets of standardized MRI images acquired for stereotactic surgery planning. However, diagnostic clinical datasets are generally more diverse and pose a larger challenge to automatic segmentation algorithms, especially when post-operative images are included. In this work, we show for the first time that automatic segmentation of VS on routine MRI datasets is also possible with high accuracy. We acquired and publicly release a curated multi-center routine clinical (MC-RC) dataset of 160 patients with a single sporadic VS. For each patient, up to three longitudinal MRI exams with contrast-enhanced T1-weighted (ceT1w) (n = 124) and T2-weighted (T2w) (n = 363) images were included, and the VS was manually annotated. Segmentations were produced and verified in an iterative process: (1) initial segmentations by a specialized company; (2) review by one of three trained radiologists; and (3) validation by an expert team. Inter- and intra-observer reliability experiments were performed on a subset of the dataset. A state-of-the-art deep learning framework was used to train segmentation models for VS. Model performance was evaluated on an MC-RC hold-out testing set, another public VS dataset, and a partially public dataset. The generalizability and robustness of the VS deep learning segmentation models increased significantly when trained on the MC-RC dataset. Dice similarity coefficients (DSC) achieved by our model are comparable to those achieved by trained radiologists in the inter-observer experiment. On the MC-RC testing set, median DSCs were 86.2 (9.5) for ceT1w, 89.4 (7.0) for T2w, and 86.4 (8.6) for combined ceT1w+T2w input images. On another public dataset acquired for Gamma Knife stereotactic radiosurgery, our model achieved median DSCs of 95.3 (2.9), 92.8 (3.8), and 95.5 (3.3), respectively. In contrast, models trained on the Gamma Knife dataset did not generalize well, as illustrated by significant underperformance on the MC-RC routine MRI dataset, highlighting the importance of data variability in the development of robust VS segmentation models. The MC-RC dataset and all trained deep learning models were made available online.
Affiliation(s)
- Aaron Kujawa: School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Reuben Dorent: School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Steve Connor: School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom; Department of Neuroradiology, King's College Hospital, London, United Kingdom; Department of Radiology, Guy's and St Thomas' Hospital, London, United Kingdom
- Suki Thomson: Department of Neuroradiology, King's College Hospital, London, United Kingdom
- Marina Ivory: School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Ali Vahedi: Department of Neuroradiology, King's College Hospital, London, United Kingdom
- Emily Guilhem: Department of Neuroradiology, King's College Hospital, London, United Kingdom
- Navodini Wijethilake: School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Robert Bradford: Queen Square Radiosurgery Centre (Gamma Knife), National Hospital for Neurology and Neurosurgery, London, United Kingdom; Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, United Kingdom
- Neil Kitchen: Queen Square Radiosurgery Centre (Gamma Knife), National Hospital for Neurology and Neurosurgery, London, United Kingdom; Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, United Kingdom
- Sotirios Bisdas: Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, London, United Kingdom
- Sebastien Ourselin: School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Tom Vercauteren: School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Jonathan Shapey: School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom; Department of Neurosurgery, King's College Hospital, London, United Kingdom
10. Huang Y, Yang X, Liu L, Zhou H, Chang A, Zhou X, Chen R, Yu J, Chen J, Chen C, Liu S, Chi H, Hu X, Yue K, Li L, Grau V, Fan DP, Dong F, Ni D. Segment anything model for medical images? Med Image Anal 2024; 92:103061. [PMID: 38086235] [DOI: 10.1016/j.media.2023.103061]
Abstract
The Segment Anything Model (SAM) is the first foundation model for general image segmentation. It has achieved impressive results on various natural image segmentation tasks. However, medical image segmentation (MIS) is more challenging because of the complex modalities, fine anatomical structures, uncertain and complex object boundaries, and wide-ranging object scales. To fully validate SAM's performance on medical data, we collected and sorted 53 open-source datasets and built a large medical segmentation dataset with 18 modalities, 84 objects, 125 object-modality paired targets, 1050K 2D images, and 6033K masks. We comprehensively analyzed different models and strategies on the so-called COSMOS 1050K dataset. Our findings mainly include the following: (1) SAM showed remarkable performance on some specific objects but was unstable, imperfect, or even failed totally in other situations. (2) SAM with the large ViT-H showed better overall performance than SAM with the small ViT-B. (3) SAM performed better with manual hints, especially box prompts, than in the Everything mode. (4) SAM could help human annotation with high labeling quality and less time. (5) SAM was sensitive to randomness in the center-point and tight-box prompts, which may cause a serious performance drop. (6) SAM performed better than interactive methods with one or a few points, but is outpaced as the number of points increases. (7) SAM's performance correlated with different factors, including boundary complexity, intensity differences, etc. (8) Fine-tuning SAM on specific medical tasks could improve its average DICE performance by 4.39% and 6.68% for ViT-B and ViT-H, respectively. Codes and models are available at: https://github.com/yuhoo0302/Segment-Anything-Model-for-Medical-Images. We hope that this comprehensive report can help researchers explore the potential of SAM applications in MIS and guide how to appropriately use and develop SAM.
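Finding (3), that box prompts outperform the Everything mode, is easy to try with the public segment-anything API. A minimal sketch (the checkpoint path, placeholder image, and box coordinates are illustrative; a real MRI slice would be intensity-windowed to uint8 and replicated to three channels):

```python
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Checkpoint file is downloaded separately from the SAM repository
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# Placeholder "image" standing in for a preprocessed medical slice
image = np.zeros((256, 256, 3), dtype=np.uint8)
predictor.set_image(image)

masks, scores, _ = predictor.predict(
    box=np.array([60, 60, 180, 180]),  # XYXY box prompt around the target
    multimask_output=False,
)
print(masks.shape, scores)  # (1, 256, 256) boolean mask and its IoU estimate
```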
Affiliation(s)
- Yuhao Huang, Xin Yang, Lian Liu, Han Zhou, Ao Chang, Xinrui Zhou, Rusi Chen, Junxuan Yu, Jiongquan Chen, Chaoyu Chen, Sijing Liu, and Dong Ni: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Xindi Hu: Shenzhen RayShape Medical Technology Co., Ltd, Shenzhen, China
- Kejuan Yue: Hunan First Normal University, Changsha, China
- Lei Li: Department of Engineering Science, University of Oxford, Oxford, UK
- Vicente Grau: Department of Engineering Science, University of Oxford, Oxford, UK
- Deng-Ping Fan: Computer Vision Lab (CVL), ETH Zurich, Zurich, Switzerland
- Fajin Dong: Ultrasound Department, the Second Clinical Medical College, Jinan University, China; First Affiliated Hospital, Southern University of Science and Technology, Shenzhen People's Hospital, Shenzhen, China
11. Kawamura M, Kamomae T, Yanagawa M, Kamagata K, Fujita S, Ueda D, Matsui Y, Fushimi Y, Fujioka T, Nozaki T, Yamada A, Hirata K, Ito R, Fujima N, Tatsugami F, Nakaura T, Tsuboyama T, Naganawa S. Revolutionizing radiation therapy: the role of AI in clinical practice. J Radiat Res 2024; 65:1-9. [PMID: 37996085] [PMCID: PMC10803173] [DOI: 10.1093/jrr/rrad090]
Abstract
This review provides an overview of the application of artificial intelligence (AI) in radiation therapy (RT) from a radiation oncologist's perspective. Over the years, advances in diagnostic imaging have significantly improved the efficiency and effectiveness of radiotherapy. The introduction of AI has further optimized the segmentation of tumors and organs at risk, thereby saving considerable time for radiation oncologists. AI has also been utilized in treatment planning and optimization, reducing the planning time from several days to minutes or even seconds. Knowledge-based treatment planning and deep learning techniques have been employed to produce treatment plans comparable to those generated by humans. Additionally, AI has potential applications in quality control and assurance of treatment plans, optimization of image-guided RT and monitoring of mobile tumors during treatment. Prognostic evaluation and prediction using AI have been increasingly explored, with radiomics being a prominent area of research. The future of AI in radiation oncology offers the potential to establish treatment standardization by minimizing inter-observer differences in segmentation and improving dose adequacy evaluation. RT standardization through AI may have global implications, providing world-standard treatment even in resource-limited settings. However, there are challenges in accumulating big data, including patient background information and correlating treatment plans with disease outcomes. Although challenges remain, ongoing research and the integration of AI technology hold promise for further advancements in radiation oncology.
Affiliation(s)
- Mariko Kawamura: Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Takeshi Kamomae: Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Masahiro Yanagawa: Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, 565-0871, Japan
- Koji Kamagata: Department of Radiology, Juntendo University Graduate School of Medicine, 2-1-1 Hongo, Bunkyo-ku, Tokyo, 113-8421, Japan
- Shohei Fujita: Department of Radiology, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Daiju Ueda: Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3, Asahi-machi, Abeno-ku, Osaka, 545-8585, Japan
- Yusuke Matsui: Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, 2-5-1 Shikata-cho, Kitaku, Okayama, 700-8558, Japan
- Yasutaka Fushimi: Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawaharacho, Sakyo-ku, Kyoto, 606-8507, Japan
- Tomoyuki Fujioka: Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo, 113-8510, Japan
- Taiki Nozaki: Department of Radiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-8582, Japan
- Akira Yamada: Department of Radiology, Shinshu University School of Medicine, 3-1-1 Asahi, Matsumoto, Nagano, 390-8621, Japan
- Kenji Hirata: Department of Diagnostic Imaging, Faculty of Medicine, Hokkaido University, Kita15, Nishi7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Rintaro Ito: Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Noriyuki Fujima: Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Kita15, Nishi7, Kita-Ku, Sapporo, Hokkaido, 060-8638, Japan
- Fuminari Tatsugami: Department of Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, 734-8551, Japan
- Takeshi Nakaura: Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, 1-1-1 Honjo, Chuo-ku, Kumamoto, 860-8556, Japan
- Takahiro Tsuboyama: Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, 565-0871, Japan
- Shinji Naganawa: Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumaicho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
12. Yang H, Tan T, Tegzes P, Dong X, Tamada R, Ferenczi L, Avinash G. Light mixed-supervised segmentation for 3D medical image data. Med Phys 2024; 51:167-178. [PMID: 37909833] [DOI: 10.1002/mp.16816]
Abstract
BACKGROUND Accurate 3D semantic segmentation models are essential for many clinical applications. To train a model for 3D segmentation, voxel-level annotation is necessary, which is expensive to obtain due to laborious work and privacy protection. To annotate 3D medical data such as MRI accurately, a common practice is to annotate the volumetric data slice by slice along the principal axes. PURPOSE To reduce the annotation effort per slice, weakly supervised learning with a bounding box (Bbox) was proposed to leverage the discriminating information via a tightness prior assumption. Nevertheless, this method requires accurate and tight Bboxes, and performance drops significantly when tightness does not hold, that is, when a relaxed Bbox is applied. There is therefore a need to train a stable model based on relaxed Bbox annotation. METHODS This paper presents a mixed-supervised training strategy to reduce the annotation effort for 3D segmentation tasks. In the proposed approach, a fully annotated contour is required for only a single slice of the volume, while the remaining slices with targets are annotated with relaxed Bboxes. This mixed-supervised method combines fully supervised learning, a relaxed Bbox prior, and contrastive learning during training, which ensures the network properly exploits the discriminative information of the training volumes. The proposed method was evaluated on two public 3D medical imaging datasets (an MRI prostate dataset and a vestibular schwannoma [VS] dataset). RESULTS The proposed method obtained a high segmentation Dice score of 85.3% on the MRI prostate dataset and 83.3% on the VS dataset with relaxed Bbox annotation, close to a fully supervised model. Moreover, with the same relaxed Bbox annotations, the proposed method outperforms state-of-the-art methods. More importantly, model performance is stable when the accuracy of the Bbox annotation varies. CONCLUSIONS The presented study proposes a mixed-supervised learning method for 3D medical imaging. The benefit is stable segmentation of the target in 3D images with a low annotation-accuracy requirement, which enables easier model training on large-scale datasets.
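One common way to exploit a (possibly relaxed) box annotation is a background-exclusion term that penalizes predicted foreground outside the box while leaving the interior unconstrained. The sketch below shows that generic idea only; it is not the paper's specific combination of tightness prior and contrastive learning:

```python
import torch

def outside_box_loss(pred_prob, box_mask):
    """Mean predicted foreground probability outside the (relaxed)
    bounding box; drives the network toward zero outside the box
    while leaving the interior unconstrained by this term."""
    return (pred_prob * (1.0 - box_mask)).mean()

pred = torch.rand(1, 1, 64, 64)                             # sigmoid outputs
box = torch.zeros_like(pred); box[..., 16:48, 16:48] = 1.0  # relaxed Bbox
print(outside_box_loss(pred, box))
```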
Affiliation(s)
- Tao Tan: GE Healthcare, Eindhoven, The Netherlands
13. Andrade-Miranda G, Jaouen V, Tankyevych O, Cheze Le Rest C, Visvikis D, Conze PH. Multi-modal medical Transformers: A meta-analysis for medical image segmentation in oncology. Comput Med Imaging Graph 2023; 110:102308. [PMID: 37918328] [DOI: 10.1016/j.compmedimag.2023.102308]
Abstract
Multi-modal medical image segmentation is a crucial task in oncology that enables the precise localization and quantification of tumors. The aim of this work is to present a meta-analysis of the use of multi-modal medical Transformers for medical image segmentation in oncology, specifically focusing on multi-parametric MR brain tumor segmentation (BraTS2021), and head and neck tumor segmentation using PET-CT images (HECKTOR2021). The multi-modal medical Transformer architectures presented in this work exploit the idea of modality interaction schemes based on visio-linguistic representations: (i) single-stream, where modalities are jointly processed by one Transformer encoder, and (ii) multiple-stream, where the inputs are encoded separately before being jointly modeled. A total of fourteen multi-modal architectures are evaluated using different ranking strategies based on dice similarity coefficient (DSC) and average symmetric surface distance (ASSD) metrics. In addition, cost indicators such as the number of trainable parameters and the number of multiply-accumulate operations (MACs) are reported. The results demonstrate that multi-path hybrid CNN-Transformer-based models improve segmentation accuracy when compared to traditional methods, but come at the cost of increased computation time and potentially larger model size.
Affiliation(s)
- Vincent Jaouen: LaTIM UMR 1101, Inserm, Brest, France; IMT Atlantique, Brest, France
- Olena Tankyevych: LaTIM UMR 1101, Inserm, Brest, France; Nuclear Medicine, University Hospital of Poitiers, Poitiers, France
- Catherine Cheze Le Rest: LaTIM UMR 1101, Inserm, Brest, France; Nuclear Medicine, University Hospital of Poitiers, Poitiers, France
14. Wendler T, Kreissl MC, Schemmer B, Rogasch JMM, De Benetti F. Artificial Intelligence-powered automatic volume calculation in medical images - available tools, performance and challenges for nuclear medicine. Nuklearmedizin 2023; 62:343-353. [PMID: 37995707] [PMCID: PMC10667065] [DOI: 10.1055/a-2200-2145]
Abstract
Volumetry is crucial in oncology and endocrinology for diagnosis, treatment planning, and evaluating response to therapy for several diseases. The integration of artificial intelligence (AI) and deep learning (DL) has significantly accelerated the automation of volumetric calculations, enhancing accuracy and reducing variability and labor. In this review, we show that a high correlation has been observed between machine learning (ML) methods and expert assessments in tumor volumetry; yet, it is recognized as more challenging than organ volumetry. Liver volumetry has shown progressive improvement in accuracy with decreasing error. If a relative error below 10% is acceptable, ML-based liver volumetry can be considered reliable for standardized imaging protocols when used in patients without major anomalies. Similarly, ML-supported automatic kidney volumetry has shown consistency and reliability in volumetric calculations. In contrast, AI-supported thyroid volumetry has not been extensively developed, despite initial work in 3D ultrasound showing promising results in terms of accuracy and reproducibility. Despite the advancements presented in the reviewed literature, the lack of standardization limits the generalizability of ML methods across diverse scenarios. The domain gap, i.e., the difference in probability distribution between training and inference data, must be bridged before clinical deployment of AI to maintain accuracy and reliability in patient care. The increasing availability of improved segmentation tools is expected to further incorporate AI methods into routine workflows, where volumetry will play a more prominent role in radionuclide therapy planning and quantitative follow-up of disease evolution.
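The 10% relative-error criterion mentioned for liver volumetry is simple to operationalize once a segmentation is available. A minimal sketch assuming binary masks and a known voxel volume (names and the toy example are illustrative):

```python
import numpy as np

def relative_volume_error(pred_mask, ref_mask, voxel_volume_mm3):
    """Signed relative volume error of a segmentation, in percent."""
    v_pred = pred_mask.sum() * voxel_volume_mm3
    v_ref = ref_mask.sum() * voxel_volume_mm3
    return 100.0 * (v_pred - v_ref) / v_ref

# Toy example with 1 mm isotropic voxels and a slightly oversized prediction
ref = np.zeros((64, 64, 64), bool); ref[20:40, 20:40, 20:40] = True
pred = np.zeros_like(ref); pred[20:40, 20:40, 20:41] = True
err = relative_volume_error(pred, ref, voxel_volume_mm3=1.0)
print(f"RVE = {err:+.1f}% -> {'within' if abs(err) < 10 else 'beyond'} the 10% criterion")
```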
Affiliation(s)
- Thomas Wendler: Clinical Computational Medical Imaging Research, Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Augsburg, Germany; Institute of Digital Medicine, Universitätsklinikum Augsburg, Germany; Computer-Aided Medical Procedures and Augmented Reality, School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
- Julian Manuel Michael Rogasch: Department of Nuclear Medicine, Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Germany
- Francesca De Benetti: Computer-Aided Medical Procedures and Augmented Reality, School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
15. Neve OM, Romeijn SR, Chen Y, Nagtegaal L, Grootjans W, Jansen JC, Staring M, Verbist BM, Hensen EF. Automated 2-Dimensional Measurement of Vestibular Schwannoma: Validity and Accuracy of an Artificial Intelligence Algorithm. Otolaryngol Head Neck Surg 2023; 169:1582-1589. [PMID: 37555251] [DOI: 10.1002/ohn.470]
Abstract
OBJECTIVE Validation of automated 2-dimensional (2D) diameter measurements of vestibular schwannomas on magnetic resonance imaging (MRI). STUDY DESIGN Retrospective validation study using 2 datasets containing MRIs of vestibular schwannoma patients. SETTING University hospital in The Netherlands. METHODS Two datasets were used, 1 containing 1 scan per patient (n = 134) and the other containing at least 3 consecutive MRIs of 51 patients, all with contrast-enhanced T1 or high-resolution T2 sequences. 2D measurements of the maximal extrameatal diameters in the axial plane were automatically derived from a 3D convolutional neural network and compared with manual measurements by 2 human observers. Intra- and interobserver variabilities were calculated using the intraclass correlation coefficient (ICC), and agreement on tumor progression using Cohen's kappa. RESULTS The human intra- and interobserver variability showed a high correlation (ICC: 0.98-0.99) and limits of agreement of 1.7 to 2.1 mm. Comparing the automated to human measurements resulted in ICCs of 0.98 (95% confidence interval [CI]: 0.974; 0.987) and 0.97 (95% CI: 0.968; 0.984), with limits of agreement of 2.2 and 2.1 mm for diameters parallel and perpendicular to the posterior side of the temporal bone, respectively. There was satisfactory agreement on tumor progression between automated measurements and human observers (Cohen's κ = 0.77), better than the agreement between the human observers themselves (Cohen's κ = 0.74). CONCLUSION Automated 2D diameter measurements and growth detection of vestibular schwannomas are at least as accurate as human 2D measurements. In clinical practice, measurements of the maximal extrameatal tumor (2D) diameters of vestibular schwannomas provide important complementary information to total tumor volume (3D) measurements. Combining both in an automated measurement algorithm facilitates clinical adoption.
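Agreement on tumor progression between the algorithm and a human observer is a chance-corrected categorical comparison. A minimal sketch with scikit-learn (the labels are synthetic, not the study data):

```python
from sklearn.metrics import cohen_kappa_score

# Per-scan progression calls: 1 = progression, 0 = stable (synthetic labels)
human     = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0]
automated = [1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0]
print(f"Cohen's kappa = {cohen_kappa_score(human, automated):.2f}")
```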
Affiliation(s)
- Olaf M Neve: Department of Otorhinolaryngology-Head and Neck Surgery, Leiden University Medical Center, Leiden, The Netherlands
- Stephan R Romeijn: Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Yunjie Chen: Department of Radiology, Division of Image Processing, Leiden University Medical Center, Leiden, The Netherlands
- Larissa Nagtegaal: Department of Otorhinolaryngology-Head and Neck Surgery, Leiden University Medical Center, Leiden, The Netherlands; Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Willem Grootjans: Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Jeroen C Jansen: Department of Otorhinolaryngology-Head and Neck Surgery, Leiden University Medical Center, Leiden, The Netherlands
- Marius Staring: Department of Radiology, Division of Image Processing, Leiden University Medical Center, Leiden, The Netherlands
- Berit M Verbist: Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Erik F Hensen: Department of Otorhinolaryngology-Head and Neck Surgery, Leiden University Medical Center, Leiden, The Netherlands
16. Dayawansa S, Abbas SO, Mantziaris G, Dumot C, Donahue JH, Sheehan JP. Volumetric Assessment of Nonfunctional Pituitary Adenoma Treated With Stereotactic Radiosurgery: An Assessment of Long-Term Response. Neurosurgery 2023; 93:1339-1345. [PMID: 37437306] [DOI: 10.1227/neu.0000000000002594]
Abstract
BACKGROUND AND OBJECTIVES Stereotactic radiosurgery (SRS) is widely used to manage recurrent or residual nonfunctioning pituitary adenomas (NFPAs). Studies on the long-term volumetric response of NFPAs to SRS are lacking. Such a post-SRS volumetric study would allow appropriate radiographic follow-up protocols to be set up and tumor volumetric response to be predicted. METHODS Two providers independently performed volumetric analyses on 54 patients who underwent single-session SRS for a recurrent/residual NFPA. In the case of discrepancy between their results, the final volume was confirmed by an independent third provider. Volumetry was performed on the 1-, 3-, 5-, 7-, and 10-year follow-up neuroimaging studies. RESULTS Most patients showed a favorable volumetric response, with 87% (47/54) showing tumor regression and 13% (7/54) showing tumor stability at 10 years. Year 3 post-SRS volumetric results correlated (R² = 0.82, 0.63, and 0.56) with 5-, 7-, and 10-year outcomes, respectively. The mean interval volumetric reduction was 17% at year 1; further interval volumetric reductions were 17%, 9%, 4%, and 9% at years 3, 5, 7, and 10, respectively. CONCLUSION The year 3 post-SRS volumetric response of patients with residual or recurrent NFPAs is predictive of their 7-10-year follow-up response. For patients demonstrating NFPA regression in the first 1-3 years, interval follow-up MRIs can likely be performed at 2-year intervals unless otherwise clinically indicated. Further studies are needed to better define the volumetric response of adenomas more than a decade after SRS.
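The reported correlation between early and late response reduces to a linear regression of late on early percent volume change. A minimal sketch, with invented percent-change values standing in for the study's measurements:

import numpy as np
from scipy import stats

# Hypothetical per-patient percent volume change from baseline at
# year 3 and year 10 after SRS (negative = regression).
change_y3 = np.array([-30.0, -25.0, -10.0, -40.0, -5.0, -20.0])
change_y10 = np.array([-45.0, -38.0, -12.0, -55.0, -8.0, -33.0])

res = stats.linregress(change_y3, change_y10)
print(f"R^2 = {res.rvalue ** 2:.2f}")  # coefficient of determination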
Collapse
Affiliation(s)
- Sam Dayawansa
- Department of Neurological Surgery, University of Virginia, Charlottesville , Virginia , USA
| | - Salma O Abbas
- Department of Radiology, University of Virginia, Charlottesville , Virginia , USA
| | - Georgios Mantziaris
- Department of Neurological Surgery, University of Virginia, Charlottesville , Virginia , USA
| | - Chloe Dumot
- Department of Neurological Surgery, University of Virginia, Charlottesville , Virginia , USA
| | - Joseph H Donahue
- Department of Radiology, University of Virginia, Charlottesville , Virginia , USA
| | - Jason P Sheehan
- Department of Neurological Surgery, University of Virginia, Charlottesville , Virginia , USA
| |
Collapse
|
17
|
Neves CA, Liu GS, El Chemaly T, Bernstein IA, Fu F, Blevins NH. Automated Radiomic Analysis of Vestibular Schwannomas and Inner Ears Using Contrast-Enhanced T1-Weighted and T2-Weighted Magnetic Resonance Imaging Sequences and Artificial Intelligence. Otol Neurotol 2023; 44:e602-e609. [PMID: 37464458 DOI: 10.1097/mao.0000000000003959] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/20/2023]
Abstract
OBJECTIVE To objectively evaluate vestibular schwannomas (VSs) and their spatial relationships with the ipsilateral inner ear (IE) in magnetic resonance imaging (MRI) using deep learning. STUDY DESIGN Cross-sectional study. PATIENTS A total of 490 adults with VS, high-resolution MRI scans, and no previous neurotologic surgery. INTERVENTIONS MRI studies of VS patients were split into training (390 patients) and test (100 patients) sets. A three-dimensional convolutional neural network model was trained to segment VS and IE structures using contrast-enhanced T1-weighted and T2-weighted sequences, respectively. Manual segmentations were used as ground truths. Model performance was evaluated on the test set and on an external set of 100 VS patients from a public data set (Vestibular-Schwannoma-SEG). MAIN OUTCOME MEASURES Dice score, relative volume error, average symmetric surface distance, 95th-percentile Hausdorff distance, and centroid locations. RESULTS Dice scores for VS and IE volume segmentations were 0.91 and 0.90, respectively. On the public data set, the model segmented VS tumors with a Dice score of 0.89 ± 0.06 (mean ± standard deviation), relative volume error of 9.8 ± 9.6%, average symmetric surface distance of 0.31 ± 0.22 mm, and 95th-percentile Hausdorff distance of 1.26 ± 0.76 mm. Predicted VS segmentations overlapped with ground truth segmentations in all test subjects. Mean errors of predicted VS volume, VS centroid location, and IE centroid location were 0.05 cm³, 0.52 mm, and 0.85 mm, respectively. CONCLUSIONS A deep learning system can segment VS and IE structures in high-resolution MRI scans with excellent accuracy. This technology offers promise to improve the clinical workflow for assessing VS radiomics and enhance the management of VS patients.
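The volume-overlap metrics reported here can be illustrated on synthetic data. A minimal sketch, assuming binary masks on a 0.5 mm isotropic grid (the shapes and spacing are invented for illustration):

import numpy as np
from scipy import ndimage

# Synthetic predicted and ground-truth masks.
pred = np.zeros((64, 64, 64), dtype=bool)
truth = np.zeros((64, 64, 64), dtype=bool)
pred[20:40, 20:40, 20:40] = True
truth[21:43, 21:41, 20:40] = True
spacing_mm = 0.5  # assumed isotropic voxel spacing

dice = 2 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())
rel_vol_err = (pred.sum() - truth.sum()) / truth.sum()
centroid_err_mm = np.linalg.norm(
    (np.array(ndimage.center_of_mass(pred))
     - np.array(ndimage.center_of_mass(truth))) * spacing_mm)

print(f"Dice = {dice:.3f}, relative volume error = {rel_vol_err:+.1%}, "
      f"centroid error = {centroid_err_mm:.2f} mm")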
Collapse
Affiliation(s)
| | - George S Liu
- Department of Otolaryngology-Head and Neck Surgery, Stanford University
| | | | - Isaac A Bernstein
- Department of Otolaryngology-Head and Neck Surgery, Stanford University
| | - Fanrui Fu
- Department of Otolaryngology-Head and Neck Surgery, Stanford University
| | - Nikolas H Blevins
- Department of Otolaryngology-Head and Neck Surgery, Stanford University
| |
Collapse
|
18
|
Balossier A, Delsanti C, Troude L, Thomassin JM, Roche PH, Régis J. Assessing Tumor Volume for Sporadic Vestibular Schwannomas: A Comparison of Methods of Volumetry. Stereotact Funct Neurosurg 2023; 101:265-276. [PMID: 37531945 DOI: 10.1159/000531337] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2022] [Accepted: 05/16/2023] [Indexed: 08/04/2023]
Abstract
INTRODUCTION The size of vestibular schwannomas (VS) is a major factor guiding the initial treatment decision and the definition of tumor control or failure. Accurate measurement and a standardized definition are mandatory; yet no standard exists. Various approximation methods using linear measurements or segmental volumetry have been reported. We reviewed different methods of volumetry and evaluated their correlation and agreement using our own historical cohort. METHODS We selected patients treated for sporadic VS by Gamma Knife radiosurgery (GKRS) in our department. Using the stereotactic 3D T1 enhancing MRI on the day of GKRS, 4 methods of volumetry using linear measurements (5-axis, 3-axis, 3-axis-averaged, and 1-axis) and segmental volumetry were compared to each other. The degree of correlation was evaluated using an intraclass correlation test (ICC(3,1)). The agreement between the different methods was evaluated using Bland-Altman diagrams. RESULTS A total of 2,188 patients were included. We observed an excellent ICC between segmental volumetry and 5-axis volumetry (0.98), 3-axis volumetry (0.96), and 3-axis-averaged volumetry (0.96), respectively, irrespective of the Koos grade or Ohata classification. The ICC for 1-axis volumetry was lower (0.72) and varied depending on the Koos and Ohata subgroups. None of these methods were interchangeable. CONCLUSION Although segmental volumetry is deemed the most accurate method, it takes more effort and requires more sophisticated computation systems than methods of volumetry using linear measurements. 5-axis volumetry shows the best agreement with segmental volumetry among all methods under assessment, irrespective of the shape of the tumor. 1-axis volumetry should not be used.
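Apart from the 5-axis method, the linear-measurement approximations reduce to closed-form spheroid formulas, which makes this comparison easy to reproduce. A minimal sketch under the usual conventions (the authors' exact definitions may differ):

import numpy as np

def volume_1axis(d_mm):
    # Sphere from the single maximal diameter: V = (pi/6) * d^3.
    return np.pi / 6 * d_mm ** 3

def volume_3axis(d1_mm, d2_mm, d3_mm):
    # Ellipsoid from three orthogonal diameters: V = (pi/6) * d1 * d2 * d3.
    return np.pi / 6 * d1_mm * d2_mm * d3_mm

def volume_segmental(mask, voxel_mm3):
    # Voxel count of a binary segmentation times the voxel volume.
    return float(mask.sum()) * voxel_mm3

# Hypothetical tumor with orthogonal diameters of 15, 12, and 10 mm.
print(f"1-axis: {volume_1axis(15.0):.0f} mm^3")   # treats it as a sphere
print(f"3-axis: {volume_3axis(15.0, 12.0, 10.0):.0f} mm^3")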
Collapse
Affiliation(s)
- Anne Balossier
- Functional and Stereotactic Neurosurgery, AP-HM, Timone Hospital, Marseille, France
- INSERM, INS, Inst Neurosci Syst, Aix Marseille University, Marseille, France
| | - Christine Delsanti
- Functional and Stereotactic Neurosurgery, AP-HM, Timone Hospital, Marseille, France
| | - Lucas Troude
- Department of Neurosurgery, AP-HM, North University Hospital, Marseille, France
| | - Jean-Marc Thomassin
- Department of Head and Neck Surgery, AP-HM, Timone Hospital, Marseille, France
| | - Pierre-Hugues Roche
- Department of Neurosurgery, AP-HM, North University Hospital, Marseille, France
| | - Jean Régis
- Functional and Stereotactic Neurosurgery, AP-HM, Timone Hospital, Marseille, France
- INSERM, INS, Inst Neurosci Syst, Aix Marseille University, Marseille, France
| |
Collapse
|
19
|
Wu J, Guo D, Wang L, Yang S, Zheng Y, Shapey J, Vercauteren T, Bisdas S, Bradford R, Saeed S, Kitchen N, Ourselin S, Zhang S, Wang G. TISS-net: Brain tumor image synthesis and segmentation using cascaded dual-task networks and error-prediction consistency. Neurocomputing 2023; 544:None. [PMID: 37528990 PMCID: PMC10243514 DOI: 10.1016/j.neucom.2023.126295] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2022] [Revised: 03/15/2023] [Accepted: 04/30/2023] [Indexed: 08/03/2023]
Abstract
Accurate segmentation of brain tumors from medical images is important for diagnosis and treatment planning, and it often requires multi-modal or contrast-enhanced images. However, in practice some modalities of a patient may be absent. Synthesizing the missing modality has the potential to fill this gap and achieve high segmentation performance. Existing methods often treat the synthesis and segmentation tasks separately or consider them jointly but without effective regularization of the complex joint model, leading to limited performance. We propose a novel brain Tumor Image Synthesis and Segmentation network (TISS-Net) that obtains the synthesized target modality and segmentation of brain tumors end-to-end with high performance. First, we propose a dual-task-regularized generator that simultaneously obtains a synthesized target modality and a coarse segmentation, which leverages a tumor-aware synthesis loss with perceptibility regularization to minimize the high-level semantic domain gap between synthesized and real target modalities. Based on the synthesized image and the coarse segmentation, we further propose a dual-task segmentor that simultaneously predicts a refined segmentation and the error in the coarse segmentation, where consistency between these two predictions is introduced for regularization. Our TISS-Net was validated with two applications: synthesizing FLAIR images for whole glioma segmentation, and synthesizing contrast-enhanced T1 images for vestibular schwannoma segmentation. Experimental results showed that our TISS-Net largely improved the segmentation accuracy compared with direct segmentation from the available modalities, and it outperformed state-of-the-art image synthesis-based segmentation methods.
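At training time, a dual-task design of this kind boils down to one weighted objective combining synthesis, segmentation, and a consistency term. The sketch below is schematic and in that spirit only; the loss terms, weights, and tensor shapes are placeholders, not the TISS-Net implementation.

import torch
import torch.nn.functional as F

def dual_task_loss(synth, target_img, coarse_logits, refined_logits,
                   labels, w_synth=1.0, w_seg=1.0, w_cons=0.1):
    # Image synthesis term (L1 between synthesized and real target).
    l_synth = F.l1_loss(synth, target_img)
    # Segmentation terms for both the coarse and the refined prediction.
    l_seg = (F.cross_entropy(coarse_logits, labels)
             + F.cross_entropy(refined_logits, labels))
    # Consistency: the two segmentation predictions should agree.
    l_cons = F.mse_loss(coarse_logits.softmax(1), refined_logits.softmax(1))
    return w_synth * l_synth + w_seg * l_seg + w_cons * l_cons

# Toy shapes: batch of 2, 2 classes, 32x32 slices.
synth = torch.rand(2, 1, 32, 32)
target = torch.rand(2, 1, 32, 32)
coarse = torch.randn(2, 2, 32, 32)
refined = torch.randn(2, 2, 32, 32)
labels = torch.randint(0, 2, (2, 32, 32))
print(dual_task_loss(synth, target, coarse, refined, labels).item())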
Collapse
Affiliation(s)
- Jianghao Wu
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Dong Guo
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Lu Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Shuojue Yang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Yuanjie Zheng
- School of Information Science and Engineering, Shandong Normal University, Jinan, China
| | - Jonathan Shapey
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
| | - Tom Vercauteren
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London, UK
| | - Sotirios Bisdas
- Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, London, UK
| | - Robert Bradford
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
| | - Shakeel Saeed
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
| | - Neil Kitchen
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
| | - Sebastien Ourselin
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London, UK
| | - Shaoting Zhang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- SenseTime Research, Shanghai, China
| | - Guotai Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Shanghai Artificial Intelligence Laboratory, Shanghai, China
| |
Collapse
|
20
|
Hu H, Xu W, Jiang T, Cheng Y, Tao X, Liu W, Jian M, Li K, Wang G. Expert-Level Immunofixation Electrophoresis Image Recognition based on Explainable and Generalizable Deep Learning. Clin Chem 2023; 69:130-139. [PMID: 36544350 DOI: 10.1093/clinchem/hvac190] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Accepted: 10/03/2022] [Indexed: 12/24/2022]
Abstract
BACKGROUND Immunofixation electrophoresis (IFE) is important for the diagnosis of plasma cell disorders (PCDs). Manual analysis of IFE images is time-consuming and potentially subjective. An artificial intelligence (AI) system for automatic and accurate IFE image recognition is desirable. METHODS In total, 12,703 expert-annotated IFE images (9182 from a new IFE imaging system and 3521 from an old one) were used to develop and test an AI system that was an ensemble of 3 deep neural networks. The model takes an IFE image as input and predicts the presence of 8 basic patterns (IgA-κ, IgA-λ, IgG-κ, IgG-λ, IgM-κ, IgM-λ, light chain κ and λ) and their combinations. Score-based class activation maps (Score-CAMs) were used for visual explanation of the model's predictions. RESULTS The AI model achieved an average accuracy, sensitivity, and specificity of 99.82%, 93.17%, and 99.93%, respectively, for detection of the 8 basic patterns, which outperformed 4 junior experts with 1 year of experience and was comparable to a senior expert with 5 years of experience. The Score-CAMs gave a reasonable visual explanation of the predictions by highlighting the target-aligned regions in the bands and indicating potentially unreliable predictions. When trained with only the new-system images, the model's performance was still higher than that of the junior experts on both the new and old IFE systems, with average accuracies of 99.91% and 99.81%, respectively. CONCLUSIONS Our AI system achieved human-level performance in automatic recognition of IFE images, with high explainability and generalizability. It has the potential to improve the efficiency and reliability of diagnosis of PCDs.
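An ensemble of the kind described is often realized by averaging per-pattern probabilities across the member networks; the fusion rule below and all values are assumptions for illustration, not the paper's method.

import numpy as np

# Hypothetical per-pattern probabilities from 3 networks for one image;
# the 8 columns correspond to the 8 basic IFE patterns.
probs = np.array([
    [0.91, 0.02, 0.05, 0.01, 0.03, 0.02, 0.04, 0.01],
    [0.88, 0.05, 0.07, 0.02, 0.02, 0.01, 0.06, 0.02],
    [0.93, 0.03, 0.04, 0.02, 0.04, 0.03, 0.05, 0.01],
])
ensemble = probs.mean(axis=0)   # average the member models
detected = ensemble >= 0.5      # assumed decision threshold
print(ensemble.round(3))
print(detected)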
Collapse
Affiliation(s)
- Honghua Hu
- Department of Laboratory Medicine and Sichuan Provincial Key Laboratory for Human Disease Gene Study, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 610072, China
| | - Wei Xu
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu 610041, China
| | - Ting Jiang
- Department of Laboratory Medicine, Tianfu New Area People's Hospital, Chengdu 610213, China
| | - Yuheng Cheng
- Department of Laboratory Medicine and Sichuan Provincial Key Laboratory for Human Disease Gene Study, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 610072, China
| | - Xiaoyan Tao
- Department of Laboratory Medicine and Sichuan Provincial Key Laboratory for Human Disease Gene Study, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 610072, China
| | - Wenna Liu
- Department of Laboratory Medicine and Sichuan Provincial Key Laboratory for Human Disease Gene Study, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 610072, China
| | - Meiling Jian
- Department of Laboratory Medicine and Sichuan Provincial Key Laboratory for Human Disease Gene Study, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 610072, China
| | - Kang Li
- West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu 610041, China
| | - Guotai Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
| |
Collapse
|
21
|
Application of artificial intelligence to stereotactic radiosurgery for intracranial lesions: detection, segmentation, and outcome prediction. J Neurooncol 2023; 161:441-450. [PMID: 36635582 DOI: 10.1007/s11060-022-04234-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2022] [Accepted: 12/30/2022] [Indexed: 01/14/2023]
Abstract
BACKGROUND The rapid evolution of artificial intelligence (AI) has prompted its wide application in healthcare systems. Stereotactic radiosurgery has served as a good candidate for AI model development and has achieved encouraging results in recent years. This article aims to demonstrate current AI applications in radiosurgery. METHODS Literature published in PubMed during 2010-2022 discussing AI applications in stereotactic radiosurgery was reviewed. RESULTS AI algorithms, especially machine learning/deep learning models, have been applied to different aspects of stereotactic radiosurgery. Spontaneous tumor detection and automated lesion delineation or segmentation were two of the promising applications, which could be further extended to longitudinal treatment follow-up. Outcome prediction utilizing machine learning algorithms with radiomics-based analysis was another well-established application. CONCLUSIONS Stereotactic radiosurgery has taken a lead role in AI development. Current achievements, limitations, and directions for further investigation are summarized in this article.
Collapse
|
22
|
Lee WK, Yang HC, Lee CC, Lu CF, Wu CC, Chung WY, Wu HM, Guo WY, Wu YT. Lesion delineation framework for vestibular schwannoma, meningioma and brain metastasis for gamma knife radiosurgery using stereotactic magnetic resonance images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 229:107311. [PMID: 36577161 DOI: 10.1016/j.cmpb.2022.107311] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/11/2022] [Revised: 12/06/2022] [Accepted: 12/13/2022] [Indexed: 06/17/2023]
Abstract
BACKGROUND AND OBJECTIVE Gamma Knife radiosurgery (GKRS) is an effective treatment for smaller intracranial tumors with a high control rate and low risk of complications. Target delineation in medical MR images is essential in the planning of GKRS and follow-up. A deep learning-based algorithm can effectively segment the targets from medical images and has been widely explored. However, state-of-the-art deep learning-based target delineation uses fixed input sizes, and the isotropic voxel size may not be suitable for stereotactic MR images, which use different anisotropic voxel sizes and numbers of slices according to the lesion size and location for clinical GKRS planning. This study developed an automatic deep learning-based segmentation scheme for stereotactic MR images. METHODS We retrospectively collected stereotactic MR images from 506 patients with VS, 1,069 patients with meningioma and 574 patients with brain metastasis (BM) who had been treated using GKRS; the lesion contours and individual T1W+C and T2W MR images were extracted from the GammaPlan system. The three-dimensional patch-based training strategy and dual-pathway architecture were used to manage inconsistent FOVs and anisotropic voxel sizes. Furthermore, we used two-parametric MR images as training input to segment the regions with different image characteristics (e.g., cystic lesions) effectively. RESULTS Our results for VS and BM demonstrated that the model trained using two-parametric MR images significantly outperformed the model trained using single-parametric images, with median Dice coefficients of 0.91 ± 0.05 versus 0.90 ± 0.06, and 0.82 ± 0.23 versus 0.78 ± 0.34, respectively, whereas predicted delineations in meningiomas using the dual-pathway model were dominated by single-parametric images (median Dice coefficients 0.83 ± 0.17 versus 0.84 ± 0.22). Finally, we combined the three data sets to train the models, achieving comparable or even higher testing median Dice (VS: 0.91 ± 0.07; meningioma: 0.83 ± 0.22; BM: 0.84 ± 0.23) across the three diseases while using two-parametric images as input. CONCLUSIONS Our proposed deep learning-based tumor segmentation scheme was successfully applied to multiple types of intracranial tumors (VS, meningioma and BM) undergoing GKRS, segmenting the tumor effectively from stereotactic MR image volumes for use in GKRS planning.
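The "two-parametric" input is simply a pair of co-registered sequences stacked along the channel axis. A minimal sketch; the patch shape and the z-score normalization are assumptions for illustration, not the paper's stated preprocessing.

import numpy as np

# Hypothetical co-registered, anisotropic T1W+C and T2W patches.
t1c = np.random.rand(16, 64, 64).astype(np.float32)
t2w = np.random.rand(16, 64, 64).astype(np.float32)

def zscore(vol):
    # Per-volume z-score normalization, a common preprocessing choice.
    return (vol - vol.mean()) / (vol.std() + 1e-8)

two_param = np.stack([zscore(t1c), zscore(t2w)], axis=0)
print(two_param.shape)  # (2, 16, 64, 64): channels, slices, height, width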
Collapse
Affiliation(s)
- Wei-Kai Lee
- Institute of Biophotonics, National Yang Ming Chiao Tung University, 155, Sec. 2, Li-Nong St. Beitou Dist., Taipei 112304, Taiwan
| | - Huai-Che Yang
- School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
| | - Cheng-Chia Lee
- School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
| | - Chia-Feng Lu
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan
| | - Chih-Chun Wu
- Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
| | - Wen-Yuh Chung
- School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan; Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan
| | - Hsiu-Mei Wu
- Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
| | - Wan-Yuo Guo
- Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
| | - Yu-Te Wu
- Institute of Biophotonics, National Yang Ming Chiao Tung University, 155, Sec. 2, Li-Nong St. Beitou Dist., Taipei 112304, Taiwan; Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, Taiwan; Brain Research Center, National Yang Ming Chiao Tung University, Taipei, Taiwan.
| |
Collapse
|
23
|
Dorent R, Kujawa A, Ivory M, Bakas S, Rieke N, Joutard S, Glocker B, Cardoso J, Modat M, Batmanghelich K, Belkov A, Calisto MB, Choi JW, Dawant BM, Dong H, Escalera S, Fan Y, Hansen L, Heinrich MP, Joshi S, Kashtanova V, Kim HG, Kondo S, Kruse CN, Lai-Yuen SK, Li H, Liu H, Ly B, Oguz I, Shin H, Shirokikh B, Su Z, Wang G, Wu J, Xu Y, Yao K, Zhang L, Ourselin S, Shapey J, Vercauteren T. CrossMoDA 2021 challenge: Benchmark of cross-modality domain adaptation techniques for vestibular schwannoma and cochlea segmentation. Med Image Anal 2023; 83:102628. [PMID: 36283200 PMCID: PMC10186181 DOI: 10.1016/j.media.2022.102628] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2021] [Revised: 06/17/2022] [Accepted: 09/10/2022] [Indexed: 02/04/2023]
Abstract
Domain Adaptation (DA) has recently been of strong interest in the medical imaging community. While a large variety of DA techniques have been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly addressed single-class problems. To tackle these limitations, the Cross-Modality Domain Adaptation (crossMoDA) challenge was organised in conjunction with the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2021). CrossMoDA is the first large and multi-class benchmark for unsupervised cross-modality Domain Adaptation. The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the VS and the cochleas. Currently, the diagnosis and surveillance in patients with VS are commonly performed using contrast-enhanced T1 (ceT1) MR imaging. However, there is growing interest in using non-contrast imaging sequences such as high-resolution T2 (hrT2) imaging. For this reason, we established an unsupervised cross-modality segmentation benchmark. The training dataset provides annotated ceT1 scans (N=105) and unpaired non-annotated hrT2 scans (N=105). The aim was to automatically perform unilateral VS and bilateral cochlea segmentation on hrT2 scans as provided in the testing set (N=137). This problem is particularly challenging given the large intensity distribution gap across the modalities and the small volume of the structures. A total of 55 teams from 16 countries submitted predictions to the validation leaderboard. Among them, 16 teams from 9 different countries submitted their algorithm for the evaluation phase. The level of performance reached by the top-performing teams is strikingly high (best median Dice score - VS: 88.4%; Cochleas: 85.7%) and close to full supervision (median Dice score - VS: 92.5%; Cochleas: 87.7%). All top-performing methods made use of an image-to-image translation approach to transform the source-domain images into pseudo-target-domain images. A segmentation network was then trained using these generated images and the manual annotations provided for the source image.
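The recipe shared by the top-performing teams can be summarized in a few lines: translate the annotated ceT1 scans into pseudo-hrT2 images, then train a segmentation network on the translations using the source labels. The sketch below shows only this data flow, with placeholder modules (an identity "translator" and a 1x1x1-convolution "segmentor"); it is not any team's implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

translator = nn.Identity()                   # stand-in for a trained generator
segmentor = nn.Conv3d(1, 3, kernel_size=1)   # stand-in 3-class network

ce_t1 = torch.rand(1, 1, 32, 64, 64)             # annotated source scan
labels = torch.randint(0, 3, (1, 32, 64, 64))    # source annotations

pseudo_hr_t2 = translator(ce_t1)    # source image rendered in target style
logits = segmentor(pseudo_hr_t2)    # segment the pseudo-target image
loss = F.cross_entropy(logits, labels)  # supervised by source labels
loss.backward()
print(loss.item())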
Collapse
Affiliation(s)
- Reuben Dorent
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom.
| | - Aaron Kujawa
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Marina Ivory
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, USA; Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | | | - Samuel Joutard
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Ben Glocker
- Department of Computing, Imperial College London, London, United Kingdom
| | - Jorge Cardoso
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Marc Modat
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | | | - Arseniy Belkov
- Moscow Institute of Physics and Technology, Moscow, Russia
| | | | - Jae Won Choi
- Department of Radiology, Armed Forces Yangju Hospital, Yangju, Republic of Korea
| | | | - Hexin Dong
- Center for Data Science, Peking University, Beijing, China
| | - Sergio Escalera
- Artificial Intelligence in Medicine Lab (BCN-AIM) and Human Behavior Analysis Lab (HuPBA), Universitat de Barcelona, Barcelona, Spain
| | - Yubo Fan
- Vanderbilt University, Nashville, USA
| | - Lasse Hansen
- Institute of Medical Informatics, Universität zu Lübeck, Germany
| | | | - Smriti Joshi
- Artificial Intelligence in Medicine Lab (BCN-AIM) and Human Behavior Analysis Lab (HuPBA), Universitat de Barcelona, Barcelona, Spain
| | | | - Hyeon Gyu Kim
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
| | | | | | | | - Hao Li
- Vanderbilt University, Nashville, USA
| | - Han Liu
- Vanderbilt University, Nashville, USA
| | - Buntheng Ly
- Inria, Université Côte d'Azur, Sophia Antipolis, France
| | - Ipek Oguz
- Vanderbilt University, Nashville, USA
| | - Hyungseob Shin
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
| | - Boris Shirokikh
- Skolkovo Institute of Science and Technology, Moscow, Russia; Artificial Intelligence Research Institute (AIRI), Moscow, Russia
| | - Zixian Su
- University of Liverpool, Liverpool, United Kingdom; School of Advanced Technology, Xi'an Jiaotong-Liverpool University, Suzhou, China
| | - Guotai Wang
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Jianghao Wu
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Yanwu Xu
- Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, USA
| | - Kai Yao
- University of Liverpool, Liverpool, United Kingdom; School of Advanced Technology, Xi'an Jiaotong-Liverpool University, Suzhou, China
| | - Li Zhang
- Center for Data Science, Peking University, Beijing, China
| | - Sébastien Ourselin
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| | - Jonathan Shapey
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom; Department of Neurosurgery, King's College Hospital, London, United Kingdom
| | - Tom Vercauteren
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom
| |
Collapse
|
24
|
Wilson DU, Bailey MQ, Craig J. The role of artificial intelligence in clinical imaging and workflows. Vet Radiol Ultrasound 2022; 63 Suppl 1:897-902. [PMID: 36514227 DOI: 10.1111/vru.13157] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2021] [Revised: 12/03/2021] [Accepted: 01/09/2022] [Indexed: 12/15/2022] Open
Abstract
Evidence-based medicine, outcomes management, and multidisciplinary systems are laying the foundation for radiology on the cusp of a new day. Environmental and operational forces coupled with technological advancements are redefining the veterinary radiologist of tomorrow. In the past several years, veterinary image volumes have exploded, and the scale of hardware and software required to support them seems boundless. The most dynamic trend within veterinary radiology is the implementation of digital information systems such as PACS, RIS, PIMS, and voice recognition systems. While the digitization of radiographic imaging has significantly improved the workflow of the veterinary radiology assistant and radiologist, tedious, redundant tasks remain abundant and mind-numbing, and they can lead to errors with a significant impact on patient care. Today, these boring and repetitious tasks continue to bog down patient throughput and workflow. Artificial intelligence, particularly machine learning, shows much promise to propel workflow and veterinary clinical imaging into a new day, in which AI management of mundane tasks brings efficiency so that the radiologist can better concentrate on the quality of patient care. In this article, we briefly discuss the major subsets of artificial intelligence (AI) workflow for the radiologist and veterinary radiology assistant, including image acquisition, segmentation and mensuration, rotation and hanging protocols, detection and prioritization, monitoring and registration of lesions, implementation of these subsets, and the ethics of utilizing AI in veterinary medicine.
Collapse
Affiliation(s)
- Diane U Wilson
- Antech Imaging Services, Fountain Valley, California, USA
| | | | - John Craig
- EponaTech LLC, dba MetronMind, Paso Robles, California, USA
| |
Collapse
|
25
|
Machine Learning in the Management of Lateral Skull Base Tumors: A Systematic Review. JOURNAL OF OTORHINOLARYNGOLOGY, HEARING AND BALANCE MEDICINE 2022. [DOI: 10.3390/ohbm3040007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
The application of machine learning (ML) techniques to otolaryngology remains a topic of interest and prevalence in the literature, though no previous articles have summarized the current state of ML application to the management and diagnosis of lateral skull base (LSB) tumors. We therefore present a systematic overview of previous applications of ML techniques to the management of LSB tumors. Independent searches were conducted on PubMed and Web of Science between August 2020 and February 2021 to identify literature on the use of ML techniques in LSB tumor surgery written in the English language. All articles were assessed with regard to their application task, ML methodology, and outcomes. A total of 32 articles were examined. The number of articles involving applications of ML techniques to LSB tumor surgery has increased significantly since the first article relevant to this field was published in 1994. The most commonly employed ML category was tree-based algorithms. Most articles fell into the category of surgical management (13; 40.6%), followed by disease classification (8; 25%). Overall, the application of ML techniques to the management of LSB tumors has evolved rapidly over the past two decades, and the anticipated growth in the future could significantly augment surgical outcomes and the management of LSB tumors.
Collapse
|
26
|
Liu H, Zhuang Y, Song E, Xu X, Hung CC. A bidirectional multilayer contrastive adaptation network with anatomical structure preservation for unpaired cross-modality medical image segmentation. Comput Biol Med 2022; 149:105964. [PMID: 36007288 DOI: 10.1016/j.compbiomed.2022.105964] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2022] [Revised: 07/16/2022] [Accepted: 08/13/2022] [Indexed: 11/03/2022]
Abstract
Multi-modal medical image segmentation has achieved great success through supervised deep learning networks. However, because of domain shift and limited annotation information, unpaired cross-modality segmentation tasks are still challenging. Unsupervised domain adaptation (UDA) methods can alleviate the performance degradation of cross-modality segmentation through knowledge transfer between different domains, but current methods still suffer from the problems of model collapse, adversarial training instability, and mismatch of anatomical structures. To tackle these issues, we propose a bidirectional multilayer contrastive adaptation network (BMCAN) for unpaired cross-modality segmentation. The shared encoder is first adopted for learning modality-invariant encoding representations in image synthesis and segmentation simultaneously. Secondly, to retain anatomical structure consistency in cross-modality image synthesis, we present a structure-constrained cross-modality image translation approach for image alignment. Thirdly, we construct a bidirectional multilayer contrastive learning approach to preserve anatomical structures and enhance encoding representations, which utilizes two groups of domain-specific multilayer perceptron (MLP) networks to learn modality-specific features. Finally, a semantic information adversarial learning approach is designed to learn structural similarities of semantic outputs for output space alignment. Our proposed method was tested on three different cross-modality segmentation tasks: brain tissue, brain tumor, and cardiac substructure segmentation. Compared with other UDA methods, experimental results show that our proposed BMCAN achieves state-of-the-art segmentation performance on the above three tasks, and it has fewer training components and better feature representations for overcoming overfitting and domain shift problems. Our proposed method can efficiently reduce the annotation burden of radiologists in cross-modality image analysis.
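Contrastive objectives of this family are usually instances of the InfoNCE loss, which pulls paired features together and pushes all other pairs apart. A minimal sketch of a standard InfoNCE term (placeholder features; not the BMCAN implementation):

import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.1):
    # anchor, positive: (N, D) L2-normalized feature batches; the
    # positive of anchor i is positive[i], all other rows are negatives.
    logits = anchor @ positive.t() / temperature  # (N, N) similarities
    targets = torch.arange(anchor.size(0))        # matching indices
    return F.cross_entropy(logits, targets)

a = F.normalize(torch.randn(8, 128), dim=1)
p = F.normalize(a + 0.05 * torch.randn(8, 128), dim=1)
print(info_nce(a, p).item())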
Collapse
Affiliation(s)
- Hong Liu
- Center for Biomedical Imaging and Bioinformatics, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China.
| | - Yuzhou Zhuang
- Institute of Artificial Intelligence, Huazhong University of Science and Technology, Wuhan, 430074, China.
| | - Enmin Song
- Center for Biomedical Imaging and Bioinformatics, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China.
| | - Xiangyang Xu
- Center for Biomedical Imaging and Bioinformatics, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China.
| | - Chih-Cheng Hung
- Center for Machine Vision and Security Research, Kennesaw State University, Marietta, GA 30060, USA.
| |
Collapse
|
27
|
Ann CN, Luo N, Pandit AS. Letter: Image Segmentation in Neurosurgery: An Undervalued Skill Set? Neurosurgery 2022; 91:e31-e32. [PMID: 35471495 DOI: 10.1227/neu.0000000000002018] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Accepted: 03/13/2022] [Indexed: 11/19/2022] Open
Affiliation(s)
- Chu Ning Ann
- University College London, Institute of Cognitive Neuroscience, London, UK
| | - Nianhe Luo
- University College London Medical School, Bloomsbury, London, UK
| | - Anand S Pandit
- Victor Horsley Department of Neurosurgery, The National Hospital for Neurology and Neurosurgery, Queen Square, London, UK
| |
Collapse
|
28
|
Neve OM, Chen Y, Tao Q, Romeijn SR, de Boer NP, Grootjans W, Kruit MC, Lelieveldt BPF, Jansen JC, Hensen EF, Verbist BM, Staring M. Fully Automated 3D Vestibular Schwannoma Segmentation with and without Gadolinium-based Contrast Material: A Multicenter, Multivendor Study. Radiol Artif Intell 2022; 4:e210300. [PMID: 35923375 PMCID: PMC9344213 DOI: 10.1148/ryai.210300] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2021] [Revised: 05/26/2022] [Accepted: 06/03/2022] [Indexed: 05/25/2023]
Abstract
PURPOSE To develop automated vestibular schwannoma measurements on contrast-enhanced T1- and T2-weighted MRI scans. MATERIALS AND METHODS MRI data from 214 patients in 37 different centers were retrospectively analyzed between 2020 and 2021. Patients with hearing loss (134 positive for vestibular schwannoma [mean age ± SD, 54 years ± 12; 64 men] and 80 negative for vestibular schwannoma) were randomly assigned to a training and validation set and to an independent test set. A convolutional neural network (CNN) was trained using fivefold cross-validation for two models (T1 and T2). Quantitative analysis, including Dice index, Hausdorff distance, surface-to-surface distance (S2S), and relative volume error, was used to compare the computer and the human delineations. An observer study was performed in which two experienced physicians evaluated both delineations. RESULTS The T1-weighted model showed state-of-the-art performance, with a mean S2S distance of less than 0.6 mm for the whole tumor and the intrameatal and extrameatal tumor parts. The whole tumor Dice index and Hausdorff distance were 0.92 and 2.1 mm in the independent test set, respectively. T2-weighted images had a mean S2S distance less than 0.6 mm for the whole tumor and the intrameatal and extrameatal tumor parts. The whole tumor Dice index and Hausdorff distance were 0.87 and 1.5 mm in the independent test set. The observer study indicated that the tool was similar to human delineations in 85%-92% of cases. CONCLUSION The CNN model detected and delineated vestibular schwannomas accurately on contrast-enhanced T1- and T2-weighted MRI scans and distinguished the clinically relevant difference between intrameatal and extrameatal tumor parts. Keywords: MRI, Ear, Nose, and Throat, Skull Base, Segmentation, Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms. Supplemental material is available for this article. © RSNA, 2022.
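Surface metrics such as the S2S distance are typically computed from distance transforms of the mask boundaries. A minimal sketch with synthetic masks and an assumed voxel spacing (published definitions vary slightly in how the surface is extracted):

import numpy as np
from scipy import ndimage

def surface(mask):
    # Boundary voxels: the mask minus its binary erosion.
    return mask & ~ndimage.binary_erosion(mask)

def surface_distances(a, b, spacing):
    # Distances (mm) from each surface voxel of `a` to the surface of `b`.
    dt_b = ndimage.distance_transform_edt(~surface(b), sampling=spacing)
    return dt_b[surface(a)]

a = np.zeros((48, 48, 48), dtype=bool); a[10:30, 10:30, 10:30] = True
b = np.zeros((48, 48, 48), dtype=bool); b[12:32, 10:30, 10:30] = True
spacing = (1.0, 0.5, 0.5)  # assumed voxel spacing in mm

d_ab = surface_distances(a, b, spacing)
d_ba = surface_distances(b, a, spacing)
mean_s2s = (d_ab.sum() + d_ba.sum()) / (len(d_ab) + len(d_ba))
hd95 = max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
print(f"mean S2S = {mean_s2s:.2f} mm, HD95 = {hd95:.2f} mm")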
Collapse
|
29
|
Machine Learning for the Detection and Segmentation of Benign Tumors of the Central Nervous System: A Systematic Review. Cancers (Basel) 2022; 14:cancers14112676. [PMID: 35681655 PMCID: PMC9179850 DOI: 10.3390/cancers14112676] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2022] [Revised: 05/18/2022] [Accepted: 05/26/2022] [Indexed: 11/20/2022] Open
Abstract
Simple Summary Machine learning in radiology of the central nervous system has seen many interesting publications in the past few years. Since the focus has largely been on malignant tumors such as brain metastases and high-grade gliomas, we conducted a systematic review on benign tumors to summarize what has been published and where there might be gaps in the research. We found several studies that report good results, but the descriptions of methodologies could be improved to enable better comparisons and assessment of biases. Abstract Objectives: To summarize the available literature on using machine learning (ML) for the detection and segmentation of benign tumors of the central nervous system (CNS) and to assess the adherence of published ML/diagnostic accuracy studies to best practice. Methods: The MEDLINE database was searched for the use of ML in patients with any benign tumor of the CNS, and the records were screened according to PRISMA guidelines. Results: Eleven retrospective studies focusing on meningioma (n = 4), vestibular schwannoma (n = 4), pituitary adenoma (n = 2) and spinal schwannoma (n = 1) were included. The majority of studies attempted segmentation. Links to repositories containing code were provided in two manuscripts, and no manuscripts shared imaging data. Only one study used an external test set, which raises the question as to whether some of the good performances that have been reported were caused by overfitting and may not generalize to data from other institutions. Conclusions: Using ML for detecting and segmenting benign brain tumors is still in its infancy. Stronger adherence to ML best practices could facilitate easier comparisons between studies and contribute to the development of models that are more likely to one day be used in clinical practice.
Collapse
|
30
|
Convolutional Neural Networks to Detect Vestibular Schwannomas on Single MRI Slices: A Feasibility Study. Cancers (Basel) 2022; 14:cancers14092069. [PMID: 35565199 PMCID: PMC9104481 DOI: 10.3390/cancers14092069] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Revised: 03/30/2022] [Accepted: 04/19/2022] [Indexed: 02/04/2023] Open
Abstract
Simple Summary Because they take inter-slice information into account, 3D- and 2.5D-convolutional neural networks (CNNs) potentially perform better in tumor detection tasks than 2D-CNNs. However, this potential benefit comes at the expense of increased computational power and the need for segmentations as an input. Therefore, in this study we aimed to detect vestibular schwannomas (VSs) in individual magnetic resonance imaging (MRI) slices by using a 2D-CNN. We retrained (539 patients) and internally validated (94 patients) a pretrained CNN using contrast-enhanced MRI slices from one institution. Furthermore, we externally validated the CNN using contrast-enhanced MRI slices from another institution. This resulted in an accuracy of 0.949 (95% CI 0.935–0.963) and 0.912 (95% CI 0.866–0.958) for the internal and external validation, respectively. Our findings indicate that 2D-CNNs might be a promising alternative to 2.5-/3D-CNNs for certain tasks thanks to the decreased requirement for computational power and the fact that there is no need for segmentations. Abstract In this study, we aimed to detect vestibular schwannomas (VSs) in individual magnetic resonance imaging (MRI) slices by using a 2D-CNN. A pretrained CNN (ResNet-34) was retrained and internally validated using contrast-enhanced T1-weighted (T1c) MRI slices from one institution. In a second step, the model was externally validated using T1c- and T1-weighted (T1) slices from a different institution. As a substitute, bisected slices were used with and without tumors originating from whole transversal slices that contained part of the unilateral VS. The model predictions were assessed based on the categorical accuracy and confusion matrices. A total of 539, 94, and 74 patients were included for training, internal validation, and external T1c validation, respectively. This resulted in an accuracy of 0.949 (95% CI 0.935–0.963) for the internal validation and 0.912 (95% CI 0.866–0.958) for the external T1c validation. We suggest that 2D-CNNs might be a promising alternative to 2.5-/3D-CNNs for certain tasks thanks to the decreased demand for computational power and the fact that there is no need for segmentations. However, further research is needed on the difference between 2D-CNNs and more complex architectures.
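Retraining a pretrained 2D classifier of this kind requires only replacing the classification head and running a standard training loop. A minimal sketch with torchvision; the hyperparameters and the 3-channel slice handling are illustrative assumptions, not the study's setup.

import torch
import torch.nn as nn
from torchvision import models

model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # tumor vs. no tumor

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Toy batch: grayscale MRI slices replicated to 3 channels.
x = torch.rand(4, 3, 224, 224)
y = torch.randint(0, 2, (4,))

loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(loss.item())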
Collapse
|
31
|
Kujawa A, Dorent R, Connor S, Oviedova A, Okasha M, Grishchuk D, Ourselin S, Paddick I, Kitchen N, Vercauteren T, Shapey J. Automated Koos Classification of Vestibular Schwannoma. FRONTIERS IN RADIOLOGY 2022; 2:837191. [PMID: 37492670 PMCID: PMC10365083 DOI: 10.3389/fradi.2022.837191] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Accepted: 02/11/2022] [Indexed: 07/27/2023]
Abstract
Objective The Koos grading scale is a frequently used classification system for vestibular schwannoma (VS) that accounts for extrameatal tumor dimension and compression of the brain stem. We propose an artificial intelligence (AI) pipeline to fully automate the segmentation and Koos classification of VS from MRI to improve clinical workflow and facilitate patient management. Methods We propose a method for Koos classification that does not only rely on available images but also on automatically generated segmentations. Artificial neural networks were trained and tested based on manual tumor segmentations and ground truth Koos grades of contrast-enhanced T1-weighted (ceT1) and high-resolution T2-weighted (hrT2) MR images from subjects with a single sporadic VS, acquired on a single scanner and with a standardized protocol. The first stage of the pipeline comprises a convolutional neural network (CNN) which can segment the VS and 7 adjacent structures. For the second stage, we propose two complementary approaches that are combined in an ensemble. The first approach applies a second CNN to the segmentation output to predict the Koos grade; the other approach extracts handcrafted features which are passed to a Random Forest classifier. The pipeline results were compared to those achieved by two neurosurgeons. Results Eligible patients (n = 308) were pseudo-randomly split into 5 groups to evaluate the model performance with 5-fold cross-validation. The weighted macro-averaged mean absolute error (MA-MAE), weighted macro-averaged F1 score (F1), and accuracy score of the ensemble model were assessed on the testing sets as follows: MA-MAE = 0.11 ± 0.05, F1 = 89.3 ± 3.0%, accuracy = 89.3 ± 2.9%, which was comparable to the average performance of two neurosurgeons: MA-MAE = 0.11 ± 0.08, F1 = 89.1 ± 5.2%, accuracy = 88.6 ± 5.8%. Inter-rater reliability was assessed by calculating Fleiss' generalized kappa (κ = 0.68) based on all 308 cases, and intra-rater reliabilities of annotator 1 (κ = 0.95) and annotator 2 (κ = 0.82) were calculated according to the weighted kappa metric with quadratic (Fleiss-Cohen) weights based on 15 randomly selected cases. Conclusions We developed the first AI framework to automatically classify VS according to the Koos scale. The excellent results show that the accuracy of the framework is comparable to that of neurosurgeons and may therefore facilitate management of patients with VS. The models, code, and ground truth Koos grades for a subset of publicly available images (n = 188) will be released upon publication.
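The quadratically weighted (Fleiss-Cohen) kappa used for the intra-rater reliabilities is available directly in scikit-learn. A minimal sketch with invented Koos grades:

from sklearn.metrics import cohen_kappa_score

# Hypothetical Koos grades (1-4) from two annotation passes of one rater.
pass_1 = [1, 2, 2, 3, 4, 1, 3, 2, 4, 3]
pass_2 = [1, 2, 3, 3, 4, 1, 3, 2, 3, 3]

print(cohen_kappa_score(pass_1, pass_2, weights="quadratic"))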
Collapse
Affiliation(s)
- Aaron Kujawa
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
| | - Reuben Dorent
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
| | - Steve Connor
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Department of Neuroradiology, King's College Hospital, London, United Kingdom
- Department of Radiology, Guy's Hospital, London, United Kingdom
| | - Anna Oviedova
- Department of Neurosurgery, King's College Hospital, London, United Kingdom
| | - Mohamed Okasha
- Department of Neurosurgery, King's College Hospital, London, United Kingdom
| | - Diana Grishchuk
- Queen Square Radiosurgery Centre (Gamma Knife), National Hospital for Neurology and Neurosurgery, London, United Kingdom
| | - Sebastien Ourselin
- Wellcome/Engineering and Physical Sciences Research Council (EPSRC) Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
| | - Ian Paddick
- Queen Square Radiosurgery Centre (Gamma Knife), National Hospital for Neurology and Neurosurgery, London, United Kingdom
| | - Neil Kitchen
- Queen Square Radiosurgery Centre (Gamma Knife), National Hospital for Neurology and Neurosurgery, London, United Kingdom
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, United Kingdom
| | - Tom Vercauteren
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
| | - Jonathan Shapey
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Department of Neurosurgery, King's College Hospital, London, United Kingdom
| |
Collapse
|
32
|
Automated objective surgical planning for lateral skull base tumors. Int J Comput Assist Radiol Surg 2022; 17:427-436. [PMID: 35089486 DOI: 10.1007/s11548-022-02564-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2021] [Accepted: 01/10/2022] [Indexed: 11/05/2022]
Abstract
PURPOSE Surgical removal of pathology at the lateral skull base is challenging because of the proximity of critical anatomical structures, which can lead to significant morbidity when damaged or traversed. Pre-operative computed surgical approach planning has the potential to aid in the selection of the optimal approach to remove pathology and minimize complications. METHODS We propose an automated surgical approach planning algorithm to derive the optimal approach to vestibular schwannomas in the internal auditory canal for hearing preservation surgery. The algorithm selects between the middle cranial fossa and retrosigmoid approaches by utilizing a unique segmentation of each patient's anatomy and a cost function to minimize potential surgical morbidity. RESULTS Patients who underwent hearing preservation surgery for vestibular schwannoma resection (n = 9) were included in the cohort. Middle cranial fossa surgery was performed in 5 patients, and retrosigmoid surgery was performed in 4. The algorithm favored the performed surgical approach in 6 of 9 patients. CONCLUSION We developed a method for computing morbidity costs of surgical paths to objectively analyze surgical approaches at the lateral skull base. Computed pre-operative planning may assist in surgical decision making, trainee education, and improving clinical outcomes.
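In its simplest form, a morbidity cost function of this kind sums per-structure weights along each candidate corridor and picks the cheaper approach. The sketch below is purely illustrative: the structures, weights, and cost model are invented and do not reflect the authors' algorithm.

# Hypothetical structures traversed or retracted by each approach.
approach_structures = {
    "middle_cranial_fossa": ["temporal_lobe", "greater_petrosal_nerve"],
    "retrosigmoid": ["cerebellum", "sigmoid_sinus"],
}

# Hypothetical relative morbidity weights per structure.
morbidity_weight = {
    "temporal_lobe": 3.0, "greater_petrosal_nerve": 1.5,
    "cerebellum": 2.0, "sigmoid_sinus": 2.0,
}

costs = {name: sum(morbidity_weight[s] for s in structures)
         for name, structures in approach_structures.items()}
print(min(costs, key=costs.get), costs)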
Collapse
|
33
|
Shapey J, Kujawa A, Dorent R, Wang G, Dimitriadis A, Grishchuk D, Paddick I, Kitchen N, Bradford R, Saeed SR, Bisdas S, Ourselin S, Vercauteren T. Segmentation of vestibular schwannoma from MRI, an open annotated dataset and baseline algorithm. Sci Data 2021; 8:286. [PMID: 34711849 PMCID: PMC8553833 DOI: 10.1038/s41597-021-01064-w] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2021] [Accepted: 09/08/2021] [Indexed: 11/08/2022] Open
Abstract
Automatic segmentation of vestibular schwannomas (VS) from magnetic resonance imaging (MRI) could significantly improve clinical workflow and assist patient management. We have previously developed a novel artificial intelligence framework based on a 2.5D convolutional neural network achieving excellent results equivalent to those achieved by an independent human annotator. Here, we provide the first publicly-available annotated imaging dataset of VS by releasing the data and annotations used in our prior work. This collection contains a labelled dataset of 484 MR images collected on 242 consecutive patients with a VS undergoing Gamma Knife Stereotactic Radiosurgery at a single institution. Data includes all segmentations and contours used in treatment planning and details of the administered dose. Implementation of our automated segmentation algorithm uses MONAI, a freely-available open-source framework for deep learning in healthcare imaging. These data will facilitate the development and validation of automated segmentation frameworks for VS and may also be used to develop other multi-modal algorithmic models.
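Because the released baseline builds on MONAI, a 3D segmentation network suitable for experimenting with this dataset can be instantiated in a few lines. A minimal sketch; the architecture hyperparameters below are illustrative and not necessarily those of the published baseline.

import torch
from monai.networks.nets import UNet

model = UNet(
    spatial_dims=3,
    in_channels=1,                    # a single MR sequence
    out_channels=2,                   # background / tumor
    channels=(16, 32, 64, 128, 256),  # feature maps per level
    strides=(2, 2, 2, 2),
)

x = torch.rand(1, 1, 64, 64, 64)      # toy patch: batch, channel, D, H, W
print(model(x).shape)                 # (1, 2, 64, 64, 64)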
Collapse
Affiliation(s)
- Jonathan Shapey
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom.
- Department of Neurosurgery, King's College Hospital, London, United Kingdom.
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom.
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, United Kingdom.
| | - Aaron Kujawa
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
| | - Reuben Dorent
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
| | - Guotai Wang
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Alexis Dimitriadis
- Queen Square Radiosurgery Centre (Gamma Knife), National Hospital for Neurology and Neurosurgery, London, United Kingdom
| | - Diana Grishchuk
- Queen Square Radiosurgery Centre (Gamma Knife), National Hospital for Neurology and Neurosurgery, London, United Kingdom
| | - Ian Paddick
- Queen Square Radiosurgery Centre (Gamma Knife), National Hospital for Neurology and Neurosurgery, London, United Kingdom
| | - Neil Kitchen
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, United Kingdom
- Queen Square Radiosurgery Centre (Gamma Knife), National Hospital for Neurology and Neurosurgery, London, United Kingdom
| | - Robert Bradford
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, United Kingdom
- Queen Square Radiosurgery Centre (Gamma Knife), National Hospital for Neurology and Neurosurgery, London, United Kingdom
| | - Shakeel R Saeed
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, United Kingdom
- The Ear Institute, University College London, London, United Kingdom
- The Royal National Throat, Nose and Ear Hospital, London, United Kingdom
| | - Sotirios Bisdas
- Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, London, United Kingdom
| | - Sébastien Ourselin
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
| | - Tom Vercauteren
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
34
Sager P, Näf L, Vu E, Fischer T, Putora PM, Ehret F, Fürweger C, Schröder C, Förster R, Zwahlen DR, Muacevic A, Windisch P. Convolutional Neural Networks for Classifying Laterality of Vestibular Schwannomas on Single MRI Slices-A Feasibility Study. Diagnostics (Basel) 2021; 11:1676. [PMID: 34574017 PMCID: PMC8465488 DOI: 10.3390/diagnostics11091676] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2021] [Revised: 09/04/2021] [Accepted: 09/07/2021] [Indexed: 11/16/2022] Open
Abstract
Introduction: Many proposed algorithms for tumor detection rely on 2.5/3D convolutional neural networks (CNNs) and require segmentations as training input. The purpose of this study is therefore to assess the performance of tumor detection on single MRI slices containing vestibular schwannomas (VS) as a computationally inexpensive alternative that does not require the creation of segmentations. Methods: A total of 2992 T1-weighted contrast-enhanced axial slices containing VS from the MRIs of 633 patients were labeled according to tumor location; 2538 slices from 539 patients were used to train a CNN (ResNet-34) to classify them according to the side of the tumor as a surrogate for detection, and 454 slices from 94 patients were used for internal validation. The model was then externally validated on contrast-enhanced and non-contrast-enhanced slices from a different institution. Categorical accuracy was noted, and the results of the predictions for the validation set are provided with confusion matrices. Results: The model achieved an accuracy of 0.928 (95% CI: 0.869-0.987) on contrast-enhanced slices and 0.795 (95% CI: 0.702-0.888) on non-contrast-enhanced slices from the external validation cohorts. Gradient-weighted Class Activation Mapping (Grad-CAM) revealed that the focus of the model was not limited to the contrast-enhancing tumor but extended to a larger area of the cerebellum and the cerebellopontine angle. Conclusions: Single-slice predictions might constitute a computationally inexpensive alternative to training 2.5/3D CNNs for certain detection tasks in medical imaging, even without the use of segmentations. Head-to-head comparisons between 2D and more sophisticated architectures could help determine the difference in accuracy, especially for more difficult tasks.
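As a rough illustration of the classification setup described above, the following sketch adapts a standard ResNet-34 to single-channel MRI slices with a two-class (left/right) head. The preprocessing, weights, and hyperparameters are assumptions, not the study's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

# ResNet-34 repurposed as a laterality classifier (left vs right tumor side).
model = resnet34(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # grayscale input
model.fc = nn.Linear(model.fc.in_features, 2)                                   # two classes

criterion = nn.CrossEntropyLoss()
slices = torch.rand(8, 1, 224, 224)        # batch of 8 axial slices, stand-in data
labels = torch.randint(0, 2, (8,))         # 0 = left, 1 = right (assumed encoding)
loss = criterion(model(slices), labels)
loss.backward()
```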
Affiliation(s)
- Philipp Sager
- Department of Radiation Oncology, Kantonsspital Winterthur, 8400 Winterthur, Switzerland; (P.S.); (C.S.); (R.F.); (D.R.Z.)
| | - Lukas Näf
- Department of Radiology, Kantonsspital St. Gallen, 9007 St. Gallen, Switzerland; (L.N.); (T.F.)
| | - Erwin Vu
- Department of Radiation Oncology, Kantonsspital St. Gallen, 9007 St. Gallen, Switzerland; (E.V.); (P.M.P.)
| | - Tim Fischer
- Department of Radiology, Kantonsspital St. Gallen, 9007 St. Gallen, Switzerland; (L.N.); (T.F.)
| | - Paul M. Putora
- Department of Radiation Oncology, Kantonsspital St. Gallen, 9007 St. Gallen, Switzerland; (E.V.); (P.M.P.)
- Department of Radiation Oncology, University of Bern, 3010 Bern, Switzerland
| | - Felix Ehret
- Charité—Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Radiation Oncology, 13353 Berlin, Germany;
- European Cyberknife Center, 81377 Munich, Germany; (C.F.); (A.M.)
| | - Christoph Fürweger
- European Cyberknife Center, 81377 Munich, Germany; (C.F.); (A.M.)
- Department of Stereotaxy and Functional Neurosurgery, University of Cologne, Faculty of Medicine and University Hospital Cologne, 50937 Cologne, Germany
| | - Christina Schröder
- Department of Radiation Oncology, Kantonsspital Winterthur, 8400 Winterthur, Switzerland; (P.S.); (C.S.); (R.F.); (D.R.Z.)
| | - Robert Förster
- Department of Radiation Oncology, Kantonsspital Winterthur, 8400 Winterthur, Switzerland; (P.S.); (C.S.); (R.F.); (D.R.Z.)
- Faculty of Medicine, University of Zurich, 8006 Zurich, Switzerland
| | - Daniel R. Zwahlen
- Department of Radiation Oncology, Kantonsspital Winterthur, 8400 Winterthur, Switzerland; (P.S.); (C.S.); (R.F.); (D.R.Z.)
- Faculty of Medicine, University of Zurich, 8006 Zurich, Switzerland
| | | | - Paul Windisch
- Department of Radiation Oncology, Kantonsspital Winterthur, 8400 Winterthur, Switzerland; (P.S.); (C.S.); (R.F.); (D.R.Z.)
- European Cyberknife Center, 81377 Munich, Germany; (C.F.); (A.M.)
35
Detection of Vestibular Schwannoma on Triple-parametric Magnetic Resonance Images Using Convolutional Neural Networks. J Med Biol Eng 2021. [DOI: 10.1007/s40846-021-00638-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Purpose
The first step in typical treatment of vestibular schwannoma (VS) is to localize the tumor region, which is time-consuming and subjective because it relies on repeatedly reviewing different parametric magnetic resonance (MR) images. A reliable, automatic VS detection method can streamline the process.
Methods
A convolutional neural network architecture, YOLO-v2 with a residual network as its backbone, was used to detect VS tumors in MR images. To improve performance, T1-weighted contrast-enhanced, T2-weighted, and T1-weighted images were combined into triple-channel images for feature learning. The triple-channel images were cropped into three sizes to serve as input images for YOLO-v2. VS detection performance was evaluated for two backbone residual networks that downsampled the inputs by factors of 16 and 32.
Results
The results demonstrated the VS detection capability of YOLO-v2 with a residual-network backbone. The average precision was 0.7953 for a model with 416 × 416-pixel input images and a downsampling factor of 16, with both the confidence-score and intersection-over-union thresholds set to 0.5. In addition, under an appropriate confidence-score threshold, a higher average precision of 0.8171 was attained using a model with 448 × 448-pixel input images and a downsampling factor of 16.
Conclusion
We demonstrated successful VS tumor detection using YOLO-v2 with a residual-network backbone on resized triple-parametric MR images. The results indicated the influence of image size, downsampling strategy, and confidence-score threshold on VS tumor detection.
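The core data-preparation idea, fusing three co-registered sequences into one three-channel input, can be sketched as follows. The per-sequence min-max normalisation is an assumption; the paper's exact intensity handling is not reproduced here.

```python
import numpy as np

def to_triple_channel(t1c, t2, t1):
    """Stack three co-registered MR slices into one 3-channel image,
    min-max normalising each sequence independently (an assumption)."""
    channels = []
    for seq in (t1c, t2, t1):
        seq = seq.astype(np.float32)
        seq = (seq - seq.min()) / (seq.max() - seq.min() + 1e-8)
        channels.append(seq)
    return np.stack(channels, axis=-1)  # (H, W, 3), ready for a detector such as YOLO-v2

# Example with stand-in 448 x 448 slices:
img = to_triple_channel(np.random.rand(448, 448),
                        np.random.rand(448, 448),
                        np.random.rand(448, 448))
```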
36
Huang D, Bai H, Wang L, Hou Y, Li L, Xia Y, Yan Z, Chen W, Chang L, Li W. The Application and Development of Deep Learning in Radiotherapy: A Systematic Review. Technol Cancer Res Treat 2021; 20:15330338211016386. [PMID: 34142614 PMCID: PMC8216350 DOI: 10.1177/15330338211016386] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023] Open
Abstract
With the massive use of computers, the growth and explosion of data have greatly promoted the development of artificial intelligence (AI). The rise of deep learning (DL) algorithms, such as convolutional neural networks (CNNs), has provided radiation oncologists with many promising tools that can simplify the complex radiotherapy process in the clinical work of radiation oncology, improve the accuracy and objectivity of diagnosis, and reduce the workload, thus enabling clinicians to spend more time on advanced decision-making tasks. As the development of DL gets closer to clinical practice, radiation oncologists will need to be more familiar with its principles to properly evaluate and use this powerful tool. In this paper, we explain the development and basic concepts of AI and discuss its application in radiation oncology based on the different task categories of DL algorithms. This work clarifies the possibility of further development of DL in radiation oncology.
Affiliation(s)
- Danju Huang
- Department of Radiation Oncology, 531840The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
| | - Han Bai
- Department of Radiation Oncology, 531840The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
| | - Li Wang
- Department of Radiation Oncology, 531840The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
| | - Yu Hou
- Department of Radiation Oncology, 531840The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
| | - Lan Li
- Department of Radiation Oncology, 531840The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
| | - Yaoxiong Xia
- Department of Radiation Oncology, 531840The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
| | - Zhirui Yan
- Department of Radiation Oncology, 531840The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
| | - Wenrui Chen
- Department of Radiation Oncology, 531840The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
| | - Li Chang
- Department of Radiation Oncology, 531840The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
| | - Wenhui Li
- Department of Radiation Oncology, 531840The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Kunming, Yunnan, China
37
Lee CC, Lee WK, Wu CC, Lu CF, Yang HC, Chen YW, Chung WY, Hu YS, Wu HM, Wu YT, Guo WY. Applying artificial intelligence to longitudinal imaging analysis of vestibular schwannoma following radiosurgery. Sci Rep 2021; 11:3106. [PMID: 33542422 PMCID: PMC7862268 DOI: 10.1038/s41598-021-82665-8] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2020] [Accepted: 01/18/2021] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence (AI) has been applied with considerable success in the fields of radiology, pathology, and neurosurgery. It is expected that AI will soon be used to optimize strategies for the clinical management of patients based on intensive imaging follow-up. Our objective in this study was to establish an algorithm by which to automate the volumetric measurement of vestibular schwannoma (VS) using a series of parametric MR images following radiosurgery. Based on a sample of 861 consecutive patients who underwent Gamma Knife radiosurgery (GKRS) between 1993 and 2008, the proposed end-to-end deep-learning scheme with an automated pre-processing pipeline was applied to a series of 1290 MR examinations (T1W+C and T2W parametric MR images), all of which were performed under consistent imaging acquisition protocols. The relative volume differences (RVD) between AI-based volumetric measurements and clinical measurements performed by expert radiologists were +1.74%, -0.31%, -0.44%, -0.19%, -0.01%, and +0.26% at the successive follow-up time points, regardless of the state of the tumor (progressed, pseudo-progressed, or regressed). This study outlines an approach to the evaluation of treatment responses via a novel volumetric measurement algorithm that can be used longitudinally following GKRS for VS. The proposed deep-learning AI scheme is applicable to longitudinal follow-up assessments following a variety of therapeutic interventions.
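The reported RVD figures follow directly from the usual definition, sketched below; the sign convention (positive meaning the model over-estimates the reference) is an assumption.

```python
def relative_volume_difference(v_model, v_reference):
    """RVD (%) between an automated volume and the clinical reference volume.
    Positive values mean the model over-estimates the reference (assumed convention)."""
    return 100.0 * (v_model - v_reference) / v_reference

# e.g. a 1030 mm^3 automated measurement against a 1000 mm^3 reference:
print(relative_volume_difference(1030.0, 1000.0))  # +3.0 (%)
```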
Affiliation(s)
- Cheng-Chia Lee
- School of Medicine, National Yang-Ming University, Taipei, Taiwan
- Department of Neurosurgery, Neurological Institute, Taipei Veteran General Hospital, Taipei, Taiwan
- Brain Research Center, National Yang-Ming University, Taipei, Taiwan
| | - Wei-Kai Lee
- Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan
| | - Chih-Chun Wu
- Department of Radiology, Taipei Veteran General Hospital, Taipei, Taiwan
- School of Medicine, National Yang-Ming University, Taipei, Taiwan
| | - Chia-Feng Lu
- Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan
| | - Huai-Che Yang
- School of Medicine, National Yang-Ming University, Taipei, Taiwan
- Department of Neurosurgery, Neurological Institute, Taipei Veteran General Hospital, Taipei, Taiwan
| | - Yu-Wei Chen
- Department of Neurosurgery, Neurological Institute, Taipei Veteran General Hospital, Taipei, Taiwan
| | - Wen-Yuh Chung
- School of Medicine, National Yang-Ming University, Taipei, Taiwan
- Department of Neurosurgery, Neurological Institute, Taipei Veteran General Hospital, Taipei, Taiwan
| | - Yong-Sin Hu
- Department of Radiology, Taipei Veteran General Hospital, Taipei, Taiwan
- School of Medicine, National Yang-Ming University, Taipei, Taiwan
| | - Hsiu-Mei Wu
- Department of Radiology, Taipei Veteran General Hospital, Taipei, Taiwan
- School of Medicine, National Yang-Ming University, Taipei, Taiwan
| | - Yu-Te Wu
- Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan.
- Institute of Biophotonics, National Yang-Ming University, Taipei, Taiwan.
- Brain Research Center, National Yang-Ming University, Taipei, Taiwan.
| | - Wan-Yuo Guo
- Department of Radiology, Taipei Veteran General Hospital, Taipei, Taiwan.
- School of Medicine, National Yang-Ming University, Taipei, Taiwan.
38
Application of deep learning for automatic segmentation of brain tumors on magnetic resonance imaging: a heuristic approach in the clinical scenario. Neuroradiology 2021; 63:1253-1262. [PMID: 33501512 DOI: 10.1007/s00234-021-02649-3] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2020] [Accepted: 01/14/2021] [Indexed: 01/23/2023]
Abstract
PURPOSE Accurate brain tumor segmentation on magnetic resonance imaging (MRI) has wide-ranging applications such as radiosurgery planning. Advances in artificial intelligence, especially deep learning (DL), allow development of automatic segmentation that overcomes labor-intensive and operator-dependent manual segmentation. We aimed to evaluate the accuracy of the top-performing DL model from the 2018 Brain Tumor Segmentation (BraTS) challenge, the impact of missing MRI sequences, and whether a model trained on gliomas can accurately segment other brain tumor types. METHODS We trained the model using the Medical Decathlon dataset, applied it to the BraTS 2019 glioma dataset, and developed additional models using individual and multimodal MRI sequences. The Dice score was calculated to assess the model's accuracy against ground truth labels provided by neuroradiologists on the BraTS dataset. The model was then applied to a local dataset of 105 brain tumors, on which its performance was qualitatively evaluated. RESULTS The DL model using pre- and post-gadolinium-contrast T1 and T2 FLAIR sequences performed best, with Dice scores of 0.878 for whole tumor, 0.732 for tumor core, and 0.699 for active tumor. Lack of T1 or T2 sequences did not significantly degrade performance, but FLAIR and T1C were important contributors. All segmentations performed by the model on the local dataset, including non-glioma cases, were considered accurate by a pool of specialists. CONCLUSION The DL model could use available MRI sequences to optimize glioma segmentation and adopt transfer learning to segment non-glioma tumors, thereby serving as a useful tool to improve treatment planning and personalized surveillance of patients.
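For reference, the Dice score used to benchmark these segmentations is the standard overlap measure between two binary masks, as in this minimal sketch:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0  # both empty -> perfect agreement

# Example on stand-in 3D masks:
a = np.random.rand(64, 64, 32) > 0.5
b = np.random.rand(64, 64, 32) > 0.5
print(dice_score(a, b))
```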
39
George-Jones NA, Chkheidze R, Moore S, Wang J, Hunter JB. MRI Texture Features are Associated with Vestibular Schwannoma Histology. Laryngoscope 2020; 131:E2000-E2006. [PMID: 33300608 DOI: 10.1002/lary.29309] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2020] [Revised: 11/17/2020] [Accepted: 11/29/2020] [Indexed: 01/06/2023]
Abstract
OBJECTIVES/HYPOTHESIS To determine if commonly used radiomics features have an association with histological findings in vestibular schwannomas (VS). STUDY DESIGN Retrospective case series. METHODS Patients were selected from an internal database of those who had a gadolinium-enhanced T1-weighted MRI scan captured prior to surgical resection of VS. Texture features were extracted from the presurgical magnetic resonance image (MRI), and pathologists examined the resected tumors to assess the presence of mucin, lymphocytes, necrosis, and hemosiderin and used a validated computational tool to determine cellularity. Sensitivity, specificity, and positive likelihood ratios were also computed for selected features using the Youden index to determine the optimal cut-off value. RESULTS A total of 45 patients were included. We found significant associations between multiple MRI texture features and the presence of mucin, lymphocytes, hemosiderin, and cellularity. No significant associations between MRI texture features and necrosis were identified. We were able to identify significant positive likelihood ratios using Youden index cut-off values for mucin (2.3; 95% CI 1.2-4.3), hemosiderin (1.5; 95% CI 1.04-2.1), lymphocytes (3.8; 95% CI 1.2-11.7), and necrosis (1.5; 95% CI 1.1-2.2). CONCLUSIONS MRI texture features are associated with underlying histology in VS. LEVEL OF EVIDENCE 3 Laryngoscope, 131:E2000-E2006, 2021.
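A minimal sketch of the cut-off selection described above, assuming a per-tumor feature score and binary histology labels; ROC-based maximisation of Youden's J is one standard way to implement it.

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_cutoff_and_plr(scores, labels):
    """Pick the cut-off maximising Youden's J = sensitivity + specificity - 1,
    then report the positive likelihood ratio at that cut-off."""
    fpr, tpr, thresholds = roc_curve(labels, scores)
    j = tpr - fpr
    best = np.argmax(j)
    sens, spec = tpr[best], 1.0 - fpr[best]
    plr = sens / (1.0 - spec) if spec < 1.0 else np.inf  # +LR = sens / (1 - spec)
    return thresholds[best], sens, spec, plr

# Stand-in data: texture feature values and presence of mucin (1) or not (0).
scores = [0.2, 0.8, 0.5, 0.9, 0.3, 0.7, 0.1, 0.6]
labels = [0, 1, 0, 1, 0, 1, 0, 1]
print(youden_cutoff_and_plr(scores, labels))
```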
Affiliation(s)
- Nicholas A George-Jones
- Department of Otolaryngology-Head and Neck Surgery, University of Texas Southwestern Medical Center, Dallas, Texas, U.S.A
| | - Rati Chkheidze
- Department of Pathology, University of Texas Southwestern Medical Center, Dallas, Texas, U.S.A
| | - Samantha Moore
- Department of Pathology, University of Texas Southwestern Medical Center, Dallas, Texas, U.S.A
| | - Jing Wang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, U.S.A
| | - Jacob B Hunter
- Department of Otolaryngology-Head and Neck Surgery, University of Texas Southwestern Medical Center, Dallas, Texas, U.S.A
40
Ren Y, Tawfik KO, Mastrodimos BJ, Cueva RA. Preoperative Radiographic Predictors of Hearing Preservation After Retrosigmoid Resection of Vestibular Schwannomas. Otolaryngol Head Neck Surg 2020; 165:344-353. [PMID: 33290167 DOI: 10.1177/0194599820978246] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
OBJECTIVE To identify preoperative radiographic predictors of hearing preservation (HP) after retrosigmoid resection of vestibular schwannomas (VSs). STUDY DESIGN Retrospective case series with chart review. SETTING Tertiary skull base referral center. METHODS Adult patients with VSs <3 cm and word recognition scores (WRSs) ≥50% who underwent retrosigmoid resection and attempted HP between February 2008 and December 2018 were identified. Pure tone average (PTA), WRS, and magnetic resonance imaging radiographic data, including tumor diameter and dimensional extension relative to the internal auditory canal (IAC), were examined. RESULTS A total of 151 patients were included. The average tumor size was 13.8 mm (range, 3-28). Hearing was preserved in 41.7% (n = 63). HP rates were higher for intracanalicular tumors than tumors with cerebellopontine angle (CPA) components (57.6% vs 29.4%, P = .03). On multivariate analysis, maximal tumor diameter (odds ratio [OR], 0.892; P < .001) and preoperative PTA (OR, 0.974; P = .026) predicted HP, while mediolateral tumor diameter predicted postoperative PTA (OR, 1.21; P = .005) and WRS (OR, -1.89; P < .001). For tumors extending into the CPA, younger age (OR, 0.913; P = .012), better preoperative PTA (OR, 0.935; P = .049), smaller posterior tumor extension (OR, 0.862; P = .001), and smaller caudal extension relative to the IAC (OR, 0.844; P = .001) all predicted HP. CONCLUSION Rates of HP are highest in patients with small intracanalicular VSs and good preoperative hearing. For tumors extending into the CPA, greater posterior and caudal tumor extension relative to the IAC may portend worse hearing outcomes.
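The odds ratios above come from multivariate logistic regression; a minimal sketch with stand-in data (the predictor values and outcome are randomly generated, not study data) might look like this:

```python
import numpy as np
import statsmodels.api as sm

# Stand-in data: 151 patients, two of the study's predictors
# (maximal tumor diameter in mm, preoperative PTA in dB); values are random.
rng = np.random.default_rng(0)
diameter = rng.uniform(3, 28, 151)
pta = rng.uniform(0, 60, 151)
hearing_preserved = rng.integers(0, 2, 151)

X = sm.add_constant(np.column_stack([diameter, pta]))
fit = sm.Logit(hearing_preserved, X).fit(disp=0)
print(np.exp(fit.params[1:]))  # odds ratios per unit increase in each predictor
```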
Affiliation(s)
- Yin Ren
- Division of Otolaryngology-Head and Neck Surgery, Department of Surgery, School of Medicine, University of California-San Diego, San Diego, California, USA
| | - Kareem O Tawfik
- Division of Otolaryngology-Head and Neck Surgery, Department of Surgery, School of Medicine, University of California-San Diego, San Diego, California, USA.,Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
| | - Bill J Mastrodimos
- Department of Neurosurgery, Kaiser Permanente Southern California Group, San Diego, California, USA
| | - Roberto A Cueva
- Department of Head and Neck Surgery, Kaiser Permanente Southern California Group, San Diego, California, USA
41
Wan Y, Rahmat R, Price SJ. Deep learning for glioblastoma segmentation using preoperative magnetic resonance imaging identifies volumetric features associated with survival. Acta Neurochir (Wien) 2020; 162:3067-3080. [PMID: 32662042 PMCID: PMC7593295 DOI: 10.1007/s00701-020-04483-7] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2020] [Accepted: 07/02/2020] [Indexed: 12/21/2022]
Abstract
BACKGROUND Measurement of volumetric features is challenging in glioblastoma. We investigate whether volumetric features derived from preoperative MRI using a convolutional neural network-assisted segmentation are correlated with survival. METHODS Preoperative MRI of 120 patients were scored using Visually Accessible Rembrandt Images (VASARI) features. We trained and tested a multilayer, multi-scale convolutional neural network on multimodal brain tumour segmentation challenge (BRATS) data, prior to testing on our dataset. The automated labels were manually edited to generate ground truth segmentations. Network performance for our data and BRATS data was compared. Multivariable Cox regression analysis, corrected for multiple testing using the false discovery rate, was performed to correlate clinical and imaging variables with overall survival. RESULTS Median Dice coefficients in our sample were (1) whole tumour 0.94 (IQR, 0.82-0.98) compared to 0.91 (IQR, 0.83-0.94; p = 0.012), (2) FLAIR region 0.84 (IQR, 0.63-0.95) compared to 0.81 (IQR, 0.69-0.8; p = 0.170), (3) contrast-enhancing region 0.91 (IQR, 0.74-0.98) compared to 0.83 (IQR, 0.78-0.89; p = 0.003) and (4) necrosis region 0.82 (IQR, 0.47-0.97) compared to 0.67 (IQR, 0.42-0.81; p = 0.005). Contrast-enhancing region/tumour core ratio (HR 4.73 [95% CI, 1.67-13.40], corrected p = 0.017) and necrotic core/tumour core ratio (HR 8.13 [95% CI, 2.06-32.12], corrected p = 0.011) were independently associated with overall survival. CONCLUSION Semi-automated segmentation of glioblastoma using a convolutional neural network trained on independent data is robust when applied to routine clinical data. The segmented volumes have prognostic significance.
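A sketch of the survival-analysis pattern described above, pairing a Cox proportional-hazards fit with Benjamini-Hochberg false-discovery-rate correction; the table values are stand-ins, not study data.

```python
import pandas as pd
from lifelines import CoxPHFitter
from statsmodels.stats.multitest import multipletests

# Stand-in survival table: two volumetric ratios plus follow-up time and event flag.
df = pd.DataFrame({
    "ce_core_ratio":  [0.6, 0.8, 0.4, 0.9, 0.7, 0.5],   # contrast-enhancing / tumour core
    "necrosis_ratio": [0.2, 0.5, 0.1, 0.6, 0.3, 0.4],   # necrotic core / tumour core
    "months":         [10, 4, 22, 6, 14, 18],
    "dead":           [1, 1, 0, 1, 1, 0],
})
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="dead")

# FDR-correct the per-covariate p-values, as the study describes.
rejected, p_adj, _, _ = multipletests(cph.summary["p"], method="fdr_bh")
print(cph.summary[["exp(coef)", "p"]])  # hazard ratios and raw p-values
print(p_adj)                            # corrected p-values
```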
42
Chen L. Editorial for: "Primary Central Nervous System Lymphoma: Clinical Evaluation of Automated Segmentation on Multiparametric MRI Using Deep Learning". J Magn Reson Imaging 2020; 53:269-270. [PMID: 32770563 DOI: 10.1002/jmri.27312] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2020] [Accepted: 07/20/2020] [Indexed: 11/08/2022] Open
Affiliation(s)
- Luguang Chen
- Department of Radiology, Changhai Hospital of Shanghai, The Second Military Medical University, No.168 Changhai Road, Shanghai, 200433, China
43
McGrath H, Li P, Dorent R, Bradford R, Saeed S, Bisdas S, Ourselin S, Shapey J, Vercauteren T. Manual segmentation versus semi-automated segmentation for quantifying vestibular schwannoma volume on MRI. Int J Comput Assist Radiol Surg 2020; 15:1445-1455. [PMID: 32676869 PMCID: PMC7419453 DOI: 10.1007/s11548-020-02222-y] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2020] [Accepted: 06/20/2020] [Indexed: 12/21/2022]
Abstract
Purpose Management of vestibular schwannoma (VS) is based on tumour size as observed on T1 MRI scans with contrast agent injection. The current clinical practice is to measure the diameter of the tumour in its largest dimension. It has been shown that volumetric measurement is more accurate and more reliable as a measure of VS size. The reference approach to achieve such volumetry is to manually segment the tumour, which is a time-intensive task. We suggest that semi-automated segmentation may be a clinically applicable solution to this problem and that it could replace linear measurements as the clinical standard. Methods Using high-quality software available for academic purposes, we ran a comparative study of manual versus semi-automated segmentation of VS on MRI with 5 clinicians and scientists. We gathered both quantitative and qualitative data to compare the two approaches, including segmentation time, segmentation effort and segmentation accuracy. Results We found that the selected semi-automated segmentation approach is significantly faster (167 s vs 479 s, p < 0.001), less temporally and physically demanding, and has approximately equal performance when compared with manual segmentation, with some improvements in accuracy. There were some limitations, including algorithmic unpredictability and error, which produced more frustration and increased mental effort in comparison with manual segmentation. Conclusion We suggest that semi-automated segmentation could be applied clinically for volumetric measurement of VS on MRI. In future, the generic software could be refined for use specifically for VS segmentation, thereby improving accuracy. Electronic supplementary material The online version of this article (10.1007/s11548-020-02222-y) contains supplementary material, which is available to authorized users.
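A paired nonparametric test is one plausible way to compare per-case segmentation times between the two approaches; the sketch below uses the Wilcoxon signed-rank test on stand-in timings (the paper's exact test and raw timings are not reproduced here).

```python
from scipy.stats import wilcoxon

# Per-case segmentation times in seconds (stand-in values); the paper reports
# means of 479 s (manual) vs 167 s (semi-automated), p < 0.001.
manual         = [512, 430, 488, 455, 501, 470, 490, 462]
semi_automated = [150, 172, 160, 181, 158, 175, 166, 169]

stat, p = wilcoxon(manual, semi_automated)  # paired comparison across the same cases
print(stat, p)
```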
Affiliation(s)
- Hari McGrath
- GKT School of Medical Education, King's College London, London, UK.
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK.
| | - Peichao Li
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
| | - Reuben Dorent
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
| | - Robert Bradford
- Queen Square Radiosurgery Centre (Gamma Knife), National Hospital for Neurology and Neurosurgery, London, UK
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
| | - Shakeel Saeed
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- The Ear Institute, UCL, London, UK
- The Royal National Throat Nose and Ear Hospital, London, UK
| | - Sotirios Bisdas
- Neuroradiology Department, National Hospital for Neurology and Neurosurgery, London, UK
| | - Sebastien Ourselin
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
| | - Jonathan Shapey
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
| | - Tom Vercauteren
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
44
Lee WK, Wu CC, Lee CC, Lu CF, Yang HC, Huang TH, Lin CY, Chung WY, Wang PS, Wu HM, Guo WY, Wu YT. Combining analysis of multi-parametric MR images into a convolutional neural network: Precise target delineation for vestibular schwannoma treatment planning. Artif Intell Med 2020; 107:101911. [PMID: 32828450 DOI: 10.1016/j.artmed.2020.101911] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2019] [Revised: 04/22/2020] [Accepted: 06/09/2020] [Indexed: 11/30/2022]
Abstract
Manual delineation of vestibular schwannoma (VS) on magnetic resonance (MR) imaging is required for diagnosis, radiosurgery dose planning, and follow-up tumor volume measurement. A rapid and objective automatic segmentation method is needed, but two problems complicate it: the low through-plane resolution of standard VS MR scan protocols, and the non-homogeneous cystic areas present within some patients' tumors. In this study, we retrospectively collected multi-parametric MR images from 516 patients with VS; these were extracted from the Gamma Knife radiosurgery planning system and consisted of T1-weighted (T1W), T2-weighted (T2W), and T1W with contrast (T1W + C) images. We developed an end-to-end deep-learning-based method with an automatic preprocessing pipeline. A two-pathway U-Net model involving two sizes of convolution kernel (i.e., 3 × 3 × 1 and 1 × 1 × 3) was used to extract the in-plane and through-plane features of the anisotropic MR images. A single-pathway model that adopted the same architecture as the two-pathway model, but used a kernel size of 3 × 3 × 3, was also developed for comparison purposes. In addition, we used multi-parametric MR images with different image contrasts as the model training input in order to effectively segment tumors with solid as well as cystic parts. The results of the automatic segmentation demonstrated that (1) the two-pathway model outperformed the single-pathway model in terms of Dice scores (0.90 ± 0.05 versus 0.87 ± 0.07), both having been trained using the T1W, T1W + C, and T2W anisotropic MR images; (2) the optimal single-parametric two-pathway model was the one trained using the T1W + C images (Dice score: 0.88 ± 0.06); and (3) the two-pathway models trained using bi-parametric (T1W + C and T2W) and tri-parametric (T1W, T2W, and T1W + C) images outperformed the model trained using the single-parametric (T1W + C) images (Dice scores: 0.89 ± 0.05 and 0.90 ± 0.05, respectively, versus 0.88 ± 0.06), because they showed improved segmentation of the non-homogeneous parts of the tumors. The proposed two-pathway U-Net model outperformed the single-pathway U-Net model when segmenting VS using anisotropic MR images. The multi-parametric models effectively improved on the defective segmentations obtained using the single-parametric models by separating the non-homogeneous tumors into their solid and cystic parts.
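The anisotropic two-pathway idea can be sketched as a single PyTorch block that runs in-plane (3 × 3 × 1) and through-plane (1 × 1 × 3) convolutions side by side and concatenates the features. Channel widths, depth, and activation are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class TwoPathwayBlock(nn.Module):
    """Sketch of the anisotropic two-pathway idea: one pathway with in-plane
    3x3x1 kernels, one with through-plane 1x1x3 kernels, features concatenated.
    The last spatial axis is taken to be the slice direction."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.in_plane = nn.Conv3d(in_ch, out_ch // 2, kernel_size=(3, 3, 1), padding=(1, 1, 0))
        self.through_plane = nn.Conv3d(in_ch, out_ch // 2, kernel_size=(1, 1, 3), padding=(0, 0, 1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(torch.cat([self.in_plane(x), self.through_plane(x)], dim=1))

# Tri-parametric input: T1W, T2W and T1W+C stacked as 3 channels,
# on an anisotropic volume (fine in-plane, coarse through-plane).
block = TwoPathwayBlock(3, 32)
out = block(torch.rand(1, 3, 128, 128, 24))
print(out.shape)  # torch.Size([1, 32, 128, 128, 24])
```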
Affiliation(s)
- Wei-Kai Lee
- National Yang-Ming University, Department of Biomedical Imaging and Radiological Sciences, Taipei, Taiwan
| | - Chih-Chun Wu
- Taipei Veteran General Hospital, Department of Radiology, Taiwan; School of Medicine, National Yang-Ming University, Taipei, Taiwan
| | - Cheng-Chia Lee
- School of Medicine, National Yang-Ming University, Taipei, Taiwan; Taipei Veteran General Hospital, Department of Neurosurgery, Taiwan
| | - Chia-Feng Lu
- National Yang-Ming University, Department of Biomedical Imaging and Radiological Sciences, Taipei, Taiwan
| | - Huai-Che Yang
- School of Medicine, National Yang-Ming University, Taipei, Taiwan; Taipei Veteran General Hospital, Department of Neurosurgery, Taiwan
| | - Tzu-Hsuan Huang
- National Yang-Ming University, Institute of Biophotonics, Taipei, Taiwan
| | - Chun-Yi Lin
- National Yang-Ming University, Institute of Biophotonics, Taipei, Taiwan
| | - Wen-Yuh Chung
- School of Medicine, National Yang-Ming University, Taipei, Taiwan; Taipei Veteran General Hospital, Department of Neurosurgery, Taiwan
| | - Po-Shan Wang
- School of Medicine, National Yang-Ming University, Taipei, Taiwan; National Yang-Ming University, Institute of Biophotonics, Taipei, Taiwan; Municipal Gan-Dau Hospital, Taipei, Taiwan; Brain Research Center, National Yang-Ming University, Taipei, Taiwan
| | - Hsiu-Mei Wu
- Taipei Veteran General Hospital, Department of Radiology, Taiwan; School of Medicine, National Yang-Ming University, Taipei, Taiwan
| | - Wan-Yuo Guo
- Taipei Veteran General Hospital, Department of Radiology, Taiwan; School of Medicine, National Yang-Ming University, Taipei, Taiwan.
| | - Yu-Te Wu
- National Yang-Ming University, Department of Biomedical Imaging and Radiological Sciences, Taipei, Taiwan; National Yang-Ming University, Institute of Biophotonics, Taipei, Taiwan; Brain Research Center, National Yang-Ming University, Taipei, Taiwan.
45
George-Jones NA, Wang K, Wang J, Hunter JB. Automated Detection of Vestibular Schwannoma Growth Using a Two-Dimensional U-Net Convolutional Neural Network. Laryngoscope 2020; 131:E619-E624. [PMID: 32304338 DOI: 10.1002/lary.28695] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Revised: 03/20/2020] [Accepted: 03/31/2020] [Indexed: 12/27/2022]
Abstract
OBJECTIVES/HYPOTHESIS To determine if an automated vestibular schwannoma (VS) segmentation model has comparable performance to using the greatest linear dimension to detect growth. STUDY DESIGN Case-control study. METHODS Patients were selected from an internal database who had an initial gadolinium-enhanced T1-weighted magnetic resonance imaging scan and a follow-up scan captured at least 5 months later. Two observers manually segmented the VS to compute volumes, and one observer's segmentations were used to train a convolutional neural network model to automatically segment the VS and determine the volume. The results of automatic segmentation were compared to those of the observer whose measurements were not used in model development to measure agreement. We then examined the sensitivity, specificity, and area under the receiver-operating characteristic curve (AUC) to compare automated volumetric growth detection versus using the greatest linear dimension. Growth detection determined by the external observer's measurements served as the gold standard. RESULTS A total of 65 patients and 130 scans were studied. The automated segmentation method demonstrated excellent agreement with the observer whose measurements were not used for model development for the initial scan (intraclass correlation coefficient [ICC] = 0.995; 95% confidence interval [CI]: 0.991-0.997) and the follow-up scan (ICC = 0.960; 95% CI: 0.935-0.975). The automated method demonstrated increased sensitivity (72.2% vs. 63.9%), specificity (79.3% vs. 69.0%), and AUC (0.822 vs. 0.701) compared to using the greatest linear dimension for growth detection. CONCLUSIONS In detecting VS growth, a convolutional neural network model outperformed using the greatest linear dimension, demonstrating a potential application of artificial intelligence methods to VS surveillance. LEVEL OF EVIDENCE 4 Laryngoscope, 131:E619-E624, 2021.
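Comparing volumetric against linear growth detection reduces to comparing two scores' ROC curves against the gold-standard labels; a minimal sketch with stand-in numbers follows.

```python
from sklearn.metrics import roc_auc_score

# Stand-in data: 1 = true growth per the external observer (gold standard).
truth           = [1, 0, 1, 1, 0, 0, 1, 0]
volume_change   = [0.31, 0.05, 0.22, 0.40, 0.08, 0.02, 0.18, 0.12]  # relative volume change
diameter_change = [0.10, 0.04, 0.03, 0.15, 0.06, 0.01, 0.05, 0.09]  # relative linear change

print("volumetric AUC:", roc_auc_score(truth, volume_change))
print("linear AUC:   ", roc_auc_score(truth, diameter_change))
```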
Affiliation(s)
- Nicholas A George-Jones
- Department of Otolaryngology-Head and Neck Surgery, University of Texas Southwestern Medical Center, Dallas, Texas
| | - Kai Wang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas
| | - Jing Wang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas
| | - Jacob B Hunter
- Department of Otolaryngology-Head and Neck Surgery, University of Texas Southwestern Medical Center, Dallas, Texas