1
Mu S, Lu W, Yu G, Zheng L, Qiu J. Deep learning-based grading of white matter hyperintensities enables identification of potential markers in multi-sequence MRI data. Computer Methods and Programs in Biomedicine 2024; 243:107904. [PMID: 37924768] [DOI: 10.1016/j.cmpb.2023.107904]
Abstract
BACKGROUND White matter hyperintensities (WMHs) are widely seen in the aging population and are associated with cerebrovascular risk factors and age-related cognitive decline. At present, the structural atrophy and functional alterations that coexist with WMHs lack comprehensive investigation. This study developed a WMHs risk prediction model to evaluate WMHs according to Fazekas scales and to locate potential high-risk regions across the entire brain. METHODS We developed a WMHs risk prediction model consisting of the following steps: the T2 fluid-attenuated inversion recovery (T2-FLAIR) image of each participant was first divided into 1000 tiles of size 32 × 32 × 1, features were extracted from the tiles using a ResNet18-based feature extractor, and a 1D convolutional neural network (CNN) was then used to score all tiles based on the extracted features. Finally, a multi-layer perceptron (MLP) was constructed to predict the Fazekas scales from the tile scores. The proposed model was trained using T2-FLAIR images; after prediction, we selected tiles with abnormal scores in the test set and evaluated their corresponding gray matter (GM) volume, white matter (WM) volume, fractional anisotropy (FA), mean diffusivity (MD), and cerebral blood flow (CBF) via longitudinal and multi-sequence Magnetic Resonance Imaging (MRI) data analysis. RESULTS The proposed WMHs risk prediction model accurately predicted the Fazekas ratings from the tile scores of T2-FLAIR MRI images, with accuracies of 0.656 and 0.621 in the training and test sets, respectively. The longitudinal MRI validation revealed that most of the high-risk tiles predicted by the WMHs risk prediction model in the baseline images had WMHs at the corresponding positions in the longitudinal images. The validation on multi-sequence MRI demonstrated that WMHs were associated with GM and WM atrophy and with WM micro-structural and perfusion alterations in high-risk tiles, and multi-modal MRI measures of most high-risk tiles showed significant associations with Mini Mental State Examination (MMSE) scores. CONCLUSION Our proposed WMHs risk prediction model can not only accurately evaluate WMH severity according to Fazekas scales, but can also uncover potential markers of WMHs across modalities. The model has the potential to be used for early detection of WMH-related alterations in the entire brain and of WMH-induced cognitive decline.
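For illustration only, the tile-based pipeline summarised in this abstract (ResNet18 tile features, a 1D CNN scoring the tiles, and an MLP mapping tile scores to a Fazekas grade) could be sketched roughly as below; the module names, layer sizes, and the assumption of four Fazekas grades are placeholders rather than details taken from the paper.

```python
# Rough sketch of the described pipeline, not the authors' implementation.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class TileScorer(nn.Module):
    def __init__(self, n_tiles=1000, n_grades=4):
        super().__init__()
        backbone = resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-1])   # 512-d feature per tile
        self.scorer = nn.Sequential(                                      # 1D CNN over the tile axis
            nn.Conv1d(512, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(64, 1, kernel_size=1),
        )
        self.grader = nn.Sequential(                                      # MLP on the tile scores
            nn.Linear(n_tiles, 128), nn.ReLU(), nn.Linear(128, n_grades),
        )

    def forward(self, tiles):                         # tiles: (batch, n_tiles, 1, 32, 32)
        b, t = tiles.shape[:2]
        x = tiles.flatten(0, 1).repeat(1, 3, 1, 1)    # grayscale tile -> 3 channels for ResNet18
        feats = self.features(x).flatten(1).view(b, t, -1)                # (b, t, 512)
        scores = self.scorer(feats.transpose(1, 2)).squeeze(1)            # (b, t) per-tile scores
        return self.grader(scores), scores            # Fazekas logits + tile scores

logits, tile_scores = TileScorer()(torch.randn(1, 1000, 1, 32, 32))
```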
Affiliation(s)
- Si Mu
- College of Mechanical and Electronic Engineering, Shandong Agricultural University, Tai'an, Shandong, 271000, China
- Weizhao Lu
- Department of Radiology, the Second Affiliated Hospital of Shandong First Medical University, Tai'an, Shandong, 271000, China
- Guanghui Yu
- Department of Radiology, the Second Affiliated Hospital of Shandong First Medical University, Tai'an, Shandong, 271000, China
- Lei Zheng
- Department of Radiology, Rushan Hospital of Chinese Medicine, Rushan, Shandong, 264500, China
- Jianfeng Qiu
- School of Radiology, Shandong First Medical University & Shandong Academy of Medicine Sciences, Tai'an, Shandong, 271000, China; Science and Technology Innovation Center, Shandong First Medical University & Shandong Academy of Medical Sciences, Jinan, 250000, China
2
Alsaidi FA, Moria KM. Flatfeet Severity-Level Detection Based on Alignment Measuring. Sensors (Basel, Switzerland) 2023; 23:8219. [PMID: 37837049] [PMCID: PMC10574869] [DOI: 10.3390/s23198219]
Abstract
Flat foot is a postural deformity in which the plantar part of the foot is either completely or partially in contact with the ground. In recent clinical practice, X-ray radiographs have been introduced to detect flat feet because they are more affordable for many clinics than specialized devices. This research aims to develop an automated model that detects flat foot cases and their severity levels from lateral foot X-ray images by measuring three different foot angles: the Arch Angle, Meary's Angle, and the Calcaneal Inclination Angle. Since these angles are formed by connecting a set of points on the image, Template Matching is used to propose a set of potential points for each angle, and a classifier is then used to select the points with the highest predicted likelihood of being correct. Inspired by the literature, this research constructed and compared two models: a Convolutional Neural Network-based model and a Random Forest-based model. These models were trained on 8000 images and tested on 240 unseen cases. The highest overall accuracy rate, 93.13%, was achieved by the Random Forest model, with mean values across all foot types (normal foot, mild flat foot, and moderate flat foot) of 93.38% precision, 92.56% recall, 96.46% specificity, 95.42% accuracy, and a 92.90% F-score. The main conclusions drawn from this research are: (1) using transfer learning (VGG-16) as a feature extractor only, in addition to image augmentation, greatly increased the overall accuracy rate; (2) relying on three different foot angles gives more accurate estimates than measuring a single foot angle.
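As a small, generic illustration of the angle-measurement idea (not code from this study), the angle at a landmark can be computed once its points are located; the coordinates below are made up.

```python
import numpy as np

def angle_at(vertex, p1, p2):
    """Angle in degrees at `vertex`, formed by the rays vertex->p1 and vertex->p2."""
    v1 = np.asarray(p1, dtype=float) - np.asarray(vertex, dtype=float)
    v2 = np.asarray(p2, dtype=float) - np.asarray(vertex, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# hypothetical landmark coordinates on a lateral foot radiograph
print(angle_at(vertex=(120, 80), p1=(90, 60), p2=(180, 95)))
```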
Affiliation(s)
- Fatmah A. Alsaidi
- Department of Computer Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Kawthar M. Moria
- Department of Electrical and Computer Engineering, King Abdulaziz University, Jeddah 21589, Saudi Arabia
3
Spagnolo F, Depeursinge A, Schädelin S, Akbulut A, Müller H, Barakovic M, Melie-Garcia L, Bach Cuadra M, Granziera C. How far MS lesion detection and segmentation are integrated into the clinical workflow? A systematic review. Neuroimage Clin 2023; 39:103491. [PMID: 37659189] [PMCID: PMC10480555] [DOI: 10.1016/j.nicl.2023.103491]
Abstract
INTRODUCTION Over the past few years, the deep learning community has developed and validated a plethora of tools for lesion detection and segmentation in Multiple Sclerosis (MS). However, there is an important gap between validating models technically and clinically. To this end, a six-step framework necessary for the development, validation, and integration of quantitative tools in the clinic was recently proposed under the name of the Quantitative Neuroradiology Initiative (QNI). AIMS To investigate to what extent automatic tools in MS fulfill the QNI framework necessary to integrate automated detection and segmentation into the clinical neuroradiology workflow. METHODS Adopting the systematic Cochrane literature review methodology, we screened and summarised published scientific articles that perform automatic MS lesion detection and segmentation. We categorised the retrieved studies based on their degree of fulfillment of the QNI's six steps, which cover a tool's technical assessment, clinical validation, and integration. RESULTS We found 156 studies; 146/156 (94%) fulfilled the first QNI step, 155/156 (99%) the second, 8/156 (5%) the third, 3/156 (2%) the fourth, 5/156 (3%) the fifth, and only one the sixth. CONCLUSIONS To date, little has been done to evaluate the clinical performance and the integration into the clinical workflow of available methods for MS lesion detection/segmentation. In addition, the socio-economic effects and the impact on patient management of such tools remain almost unexplored.
Affiliation(s)
- Federico Spagnolo
- Translational Imaging in Neurology (ThINK) Basel, Department of Biomedical Engineering, Faculty of Medicine, University Hospital Basel and University of Basel, Basel, Switzerland; Department of Neurology, University Hospital Basel, Basel, Switzerland; Research Center for Clinical Neuroimmunology and Neuroscience Basel (RC2NB), University Hospital Basel and University of Basel, Basel, Switzerland; MedGIFT, Institute of Informatics, School of Management, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Sierre, Switzerland
- Adrien Depeursinge
- MedGIFT, Institute of Informatics, School of Management, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Sierre, Switzerland; Nuclear Medicine and Molecular Imaging Department, Lausanne University Hospital (CHUV) and University of Lausanne, Lausanne, Switzerland
- Sabine Schädelin
- Translational Imaging in Neurology (ThINK) Basel, Department of Biomedical Engineering, Faculty of Medicine, University Hospital Basel and University of Basel, Basel, Switzerland; Clinical Trial Unit, Department of Clinical Research, University Hospital Basel, University of Basel, Basel, Switzerland
- Aysenur Akbulut
- Translational Imaging in Neurology (ThINK) Basel, Department of Biomedical Engineering, Faculty of Medicine, University Hospital Basel and University of Basel, Basel, Switzerland; Ankara University School of Medicine, Ankara, Turkey
- Henning Müller
- MedGIFT, Institute of Informatics, School of Management, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Sierre, Switzerland; The Sense Research and Innovation Center, Lausanne and Sion, Switzerland
- Muhamed Barakovic
- Translational Imaging in Neurology (ThINK) Basel, Department of Biomedical Engineering, Faculty of Medicine, University Hospital Basel and University of Basel, Basel, Switzerland; Department of Neurology, University Hospital Basel, Basel, Switzerland; Research Center for Clinical Neuroimmunology and Neuroscience Basel (RC2NB), University Hospital Basel and University of Basel, Basel, Switzerland
- Lester Melie-Garcia
- Translational Imaging in Neurology (ThINK) Basel, Department of Biomedical Engineering, Faculty of Medicine, University Hospital Basel and University of Basel, Basel, Switzerland; Department of Neurology, University Hospital Basel, Basel, Switzerland; Research Center for Clinical Neuroimmunology and Neuroscience Basel (RC2NB), University Hospital Basel and University of Basel, Basel, Switzerland
- Meritxell Bach Cuadra
- CIBM Center for Biomedical Imaging, Lausanne, Switzerland; Radiology Department, Lausanne University Hospital (CHUV) and University of Lausanne, Lausanne, Switzerland
- Cristina Granziera
- Translational Imaging in Neurology (ThINK) Basel, Department of Biomedical Engineering, Faculty of Medicine, University Hospital Basel and University of Basel, Basel, Switzerland; Department of Neurology, University Hospital Basel, Basel, Switzerland; Research Center for Clinical Neuroimmunology and Neuroscience Basel (RC2NB), University Hospital Basel and University of Basel, Basel, Switzerland
4
Valls-Conesa J, Winterauer DJ, Kröger-Lui N, Roth S, Liu F, Lüttjohann S, Harig R, Vollertsen J. Random forest microplastic classification using spectral subsamples of FT-IR hyperspectral images. Analytical Methods 2023; 15:2226-2233. [PMID: 37114762] [DOI: 10.1039/d3ay00514c]
Abstract
In this work, a random decision forest model is built for fast identification of Fourier-transform infrared spectra of the eleven most common types of microplastics in the environment. The random decision forest input is reduced to a combination of highly discriminative single wavenumbers selected using a machine learning classifier. This dimension reduction allows input from systems with individual wavenumber measurements and decreases prediction time. The training and testing spectra are extracted from Fourier-transform infrared hyperspectral images of pure-type microplastic samples, automating the process with reference spectra and a fast background correction and identification algorithm. Random decision forest classification results are validated using procedurally generated ground truth. The classification accuracies achieved on these ground truths are not expected to carry over to environmental samples, as those usually contain a broader variety of materials.
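A minimal sketch of the general recipe (dimension reduction to a few wavenumbers followed by a random forest) is shown below with synthetic stand-in spectra; the univariate ANOVA selector is a placeholder for the selection strategy actually used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1800))      # stand-in FT-IR spectra (absorbance per wavenumber)
y = rng.integers(0, 11, size=500)     # stand-in labels for eleven polymer types

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
selector = SelectKBest(f_classif, k=30).fit(X_tr, y_tr)          # keep 30 discriminative wavenumbers
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(selector.transform(X_tr), y_tr)
print("held-out accuracy:", clf.score(selector.transform(X_te), y_te))
```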
Affiliation(s)
- Jordi Valls-Conesa
- Bruker Optics GmbH & Co. KG, Rudolf-Plank-Str. 27, 76275 Ettlingen, Germany
- Department of the Built Environment, Aalborg University, Thomas Manns Vej 23, 9220 Aalborg, Denmark
- Niels Kröger-Lui
- Bruker Optics GmbH & Co. KG, Rudolf-Plank-Str. 27, 76275 Ettlingen, Germany
- Sascha Roth
- Bruker Optics GmbH & Co. KG, Rudolf-Plank-Str. 27, 76275 Ettlingen, Germany
- Fan Liu
- Department of the Built Environment, Aalborg University, Thomas Manns Vej 23, 9220 Aalborg, Denmark
- Stephan Lüttjohann
- Bruker Optics GmbH & Co. KG, Rudolf-Plank-Str. 27, 76275 Ettlingen, Germany
- Roland Harig
- Bruker Optics GmbH & Co. KG, Rudolf-Plank-Str. 27, 76275 Ettlingen, Germany
- Jes Vollertsen
- Department of the Built Environment, Aalborg University, Thomas Manns Vej 23, 9220 Aalborg, Denmark
5
Li Z, Fang J, Qiu R, Gong H, Zhang W, Li L, Jiang J. CDA-Net: A contrastive deep adversarial model for prostate cancer segmentation in MRI images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104622]
6
Balaha HM, Hassan AES. A variate brain tumor segmentation, optimization, and recognition framework. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10337-8]
7
Li X, Jiang Y, Li M, Zhang J, Yin S, Luo H. MSFR-Net: Multi-modality and single-modality feature recalibration network for brain tumor segmentation. Med Phys 2022; 50:2249-2262. [PMID: 35962724] [DOI: 10.1002/mp.15933]
Abstract
BACKGROUND Accurate and automated brain tumor segmentation from multi-modality MR images plays a significant role in tumor treatment. However, existing approaches mainly focus on the fusion of multiple modalities while ignoring the correlation between single modalities and tumor sub-components. For example, T2-weighted images show good visualization of edema, and T1-contrast images have good contrast between the enhancing tumor core and necrosis. In the actual clinical process, professional physicians also label tumors according to these characteristics. We design a method for brain tumor segmentation that utilizes both multi-modality fusion and single-modality characteristics. METHODS A multi-modality and single-modality feature recalibration network (MSFR-Net) is proposed for brain tumor segmentation from MR images. Specifically, multi-modality information and single-modality information are assigned to independent pathways. The multi-modality network explicitly learns the relationship between all modalities and all tumor sub-components. The single-modality network learns the relationship between a single modality and its highly correlated tumor sub-components. A dual recalibration module (DRM) is then designed to connect the parallel single-modality and multi-modality networks at multiple stages; its function is to unify the two types of features into the same feature space. RESULTS Experiments on the BraTS 2015 and BraTS 2018 datasets show that the proposed method is competitive with and superior to other state-of-the-art methods. The proposed method achieved segmentation results with a Dice coefficient of 0.86 and a Hausdorff distance of 4.82 on the BraTS 2018 dataset, and with a Dice coefficient of 0.80, a positive predictive value of 0.76, and a sensitivity of 0.78 on the BraTS 2015 dataset. CONCLUSIONS This work draws on the manual labeling process of doctors and introduces the correlation between single modalities and tumor sub-components into the segmentation network. The method improves the segmentation performance of brain tumors and can be applied in clinical practice. The code of the proposed method is available at: https://github.com/xiangQAQ/MSFR-Net.
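The dual recalibration module itself is not specified in this abstract; as a generic illustration of what channel-wise feature recalibration between two pathways can look like, a squeeze-and-excitation-style gate is sketched below (an assumption for illustration, not the authors' DRM).

```python
import torch
import torch.nn as nn

class Recalibrate(nn.Module):
    """Reweight single-modality feature channels with a gate derived from fused features."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, single_mod, multi_mod):
        w = self.gate(multi_mod).view(multi_mod.size(0), -1, 1, 1, 1)   # per-channel weights
        return single_mod * w

out = Recalibrate(32)(torch.randn(1, 32, 16, 16, 16), torch.randn(1, 32, 16, 16, 16))
```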
Affiliation(s)
- Xiang Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
- Yuchen Jiang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
- Minglei Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
- Jiusi Zhang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
- Shen Yin
- Department of Mechanical and Industrial Engineering, Faculty of Engineering, Norwegian University of Science and Technology, Trondheim, 7034, Norway
- Hao Luo
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
8
Sadeghibakhi M, Pourreza H, Mahyar H. Multiple Sclerosis Lesions Segmentation Using Attention-Based CNNs in FLAIR Images. IEEE Journal of Translational Engineering in Health and Medicine 2022; 10:1800411. [PMID: 35711337] [PMCID: PMC9191687] [DOI: 10.1109/jtehm.2022.3172025]
Abstract
Objective: Multiple Sclerosis (MS) is an autoimmune and demyelinating disease that leads to lesions in the central nervous system. The disease can be tracked and diagnosed using Magnetic Resonance Imaging (MRI). A multitude of automatic multi-modality approaches are used to segment lesions, but relying on several modalities is not beneficial for patients in terms of cost, time, and usability. The authors of the present paper propose a method employing just one modality (the FLAIR image) to segment MS lesions accurately. Methods: A patch-based Convolutional Neural Network (CNN), inspired by 3D-ResNet and a spatial-channel attention module, is designed to segment MS lesions. The proposed method consists of three stages: (1) Contrast-Limited Adaptive Histogram Equalization (CLAHE) is applied to the original images and concatenated with the extracted edges to create 4D images; (2) patches of size [Formula: see text] are randomly selected from the 4D images; and (3) the extracted patches are passed into an attention-based CNN which is used to segment the lesions. Finally, the proposed method is compared to previous studies on the same dataset. Results: The current study evaluates the model on a test set of ISBI challenge data. Experimental results illustrate that the proposed approach significantly surpasses existing methods in Dice similarity and Absolute Volume Difference while using just one modality (FLAIR) to segment the lesions. Conclusion: The authors have introduced an automated approach to segment the lesions, based on, at most, two modalities as input. The proposed architecture comprises convolution, deconvolution, and an SCA-VoxRes module as an attention module. The results show that the proposed method performs well compared with other methods.
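Stage (1) of this pipeline, CLAHE enhancement concatenated with an edge map, could look roughly like the 2D sketch below (the library functions and parameter values are illustrative assumptions, not the paper's settings).

```python
import numpy as np
from skimage import exposure, filters

def build_channels(flair_slice):
    """flair_slice: 2D array scaled to [0, 1]."""
    enhanced = exposure.equalize_adapthist(flair_slice, clip_limit=0.02)  # CLAHE
    edges = filters.sobel(enhanced)                                       # edge map to concatenate
    return np.stack([enhanced, edges], axis=0)

x = build_channels(np.random.rand(256, 256))
print(x.shape)   # (2, 256, 256): patches would then be sampled from such stacked channels
```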
Affiliation(s)
- Mehdi Sadeghibakhi
- MV Laboratory, Department of Computer Engineering, Faculty of Engineering, Ferdowsi University of Mashhad, Mashhad 9177948974, Iran
- Hamidreza Pourreza
- MV Laboratory, Department of Computer Engineering, Faculty of Engineering, Ferdowsi University of Mashhad, Mashhad 9177948974, Iran
- Hamidreza Mahyar
- Faculty of Engineering, W Booth School of Engineering Practice and Technology, McMaster University, Hamilton, ON L8S 4L8, Canada
9
Ong K, Young DM, Sulaiman S, Shamsuddin SM, Mohd Zain NR, Hashim H, Yuen K, Sanders SJ, Yu W, Hang S. Detection of subtle white matter lesions in MRI through texture feature extraction and boundary delineation using an embedded clustering strategy. Sci Rep 2022; 12:4433. [PMID: 35292654] [PMCID: PMC8924181] [DOI: 10.1038/s41598-022-07843-8]
Abstract
White matter lesions (WML) underlie multiple brain disorders, and automatic WML segmentation is crucial to evaluate the natural disease course and effectiveness of clinical interventions, including drug discovery. Although recent research has achieved tremendous progress in WML segmentation, accurate detection of subtle WML present early in the disease course remains particularly challenging. Here we propose an approach to automatic WML segmentation of mild WML loads using an intensity standardisation technique, gray level co-occurrence matrix (GLCM) embedded clustering technique, and random forest (RF) classifier to extract texture features and identify morphology specific to true WML. We precisely define their boundaries through a local outlier factor (LOF) algorithm that identifies edge pixels by local density deviation relative to its neighbors. The automated approach was validated on 32 human subjects, demonstrating strong agreement and correlation (excluding one outlier) with manual delineation by a neuroradiologist through Intra-Class Correlation (ICC = 0.881, 95% CI 0.769, 0.941) and Pearson correlation (r = 0.895, p-value < 0.001), respectively, and outperforming three leading algorithms (Trimmed Mean Outlier Detection, Lesion Prediction Algorithm, and SALEM-LS) in five of the six established key metrics defined in the MICCAI Grand Challenge. By facilitating more accurate segmentation of subtle WML, this approach may enable earlier diagnosis and intervention.
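Two of the named ingredients, GLCM texture descriptors and a local-outlier-factor pass, are standard operations; a toy sketch using common library calls follows (window sizes, properties, and features are illustrative, not the study's configuration).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import LocalOutlierFactor

patch = (np.random.rand(16, 16) * 255).astype(np.uint8)          # stand-in FLAIR patch
glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2], levels=256, normed=True)
texture = [graycoprops(glcm, p).mean() for p in ("contrast", "homogeneity", "energy", "correlation")]

candidates = np.random.rand(200, 4)                               # stand-in per-pixel feature vectors
edge_flags = LocalOutlierFactor(n_neighbors=20).fit_predict(candidates)  # -1 marks density outliers
print(texture, int(np.sum(edge_flags == -1)))
```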
Affiliation(s)
- Kokhaur Ong
- Bioinformatics Institute, A*STAR, Singapore, Singapore; Institute of Molecular and Cell Biology, A*STAR, Singapore, Singapore
- David M Young
- Institute of Molecular and Cell Biology, A*STAR, Singapore, Singapore; Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, USA
- Sarina Sulaiman
- School of Computing, Faculty of Engineering, Universiti Teknologi Malaysia, Johor, Malaysia
- Hilwati Hashim
- Department of Radiology, Faculty of Medicine, Universiti Teknologi MARA, Sungai Buloh, Malaysia
- Kahhay Yuen
- School of Pharmaceutical Sciences, Universiti Sains Malaysia, Penang, Malaysia
- Stephan J Sanders
- Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, USA
- Weimiao Yu
- Bioinformatics Institute, A*STAR, Singapore, Singapore; Institute of Molecular and Cell Biology, A*STAR, Singapore, Singapore; Computational Digital Pathology Laboratory, Bioinformatics Institute (BII), 30 Biopolis Street, #07-46 Matrix, Singapore, 138671, Singapore
- Seepheng Hang
- Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, UTM Skudai, 81310, Johor, Malaysia
10
Tran P, Thoprakarn U, Gourieux E, Dos Santos CL, Cavedo E, Guizard N, Cotton F, Krolak-Salmon P, Delmaire C, Heidelberg D, Pyatigorskaya N, Ströer S, Dormont D, Martini JB, Chupin M. Automatic segmentation of white matter hyperintensities: validation and comparison with state-of-the-art methods on both Multiple Sclerosis and elderly subjects. Neuroimage Clin 2022; 33:102940. [PMID: 35051744] [PMCID: PMC8896108] [DOI: 10.1016/j.nicl.2022.102940]
Abstract
Automatic segmentation of MS lesions and age-related WMH from 3D T1 and T2-FLAIR. Comparison to consensus shows improved performance of WHASA-3D compared to WHASA. WHASA-3D outperforms available state-of-the-art methods with their default settings. WHASA-3D could be a useful tool for clinical practice and clinical trials.
Different types of white matter hyperintensities (WMH) can be observed through MRI in the brain and spinal cord, especially Multiple Sclerosis (MS) lesions for patients suffering from MS and age-related WMH for subjects with cognitive disorders and/or elderly people. To better diagnose and monitor the disease progression, the quantitative evaluation of WMH load has proven to be useful for clinical routine and trials. Since manual delineation for WMH segmentation is highly time-consuming and suffers from intra and inter observer variability, several methods have been proposed to automatically segment either MS lesions or age-related WMH, but none is validated on both WMH types. Here, we aim at proposing the White matter Hyperintensities Automatic Segmentation Algorithm adapted to 3D T2-FLAIR datasets (WHASA-3D), a fast and robust automatic segmentation tool designed to be implemented in clinical practice for the detection of both MS lesions and age-related WMH in the brain, using both 3D T1-weighted and T2-FLAIR images. In order to increase its robustness for MS lesions, WHASA-3D expands the original WHASA method, which relies on the coupling of non-linear diffusion framework and watershed parcellation, where regions considered as WMH are selected based on intensity and location characteristics, and finally refined with geodesic dilation. The previous validation was performed on 2D T2-FLAIR and subjects with cognitive disorders and elderly subjects. 60 subjects from a heterogeneous database of dementia patients, multiple sclerosis patients and elderly subjects with multiple MRI scanners and a wide range of lesion loads were used to evaluate WHASA and WHASA-3D through volume and spatial agreement in comparison with consensus reference segmentations. In addition, a direct comparison on the MS database with six available supervised and unsupervised state-of-the-art WMH segmentation methods (LST-LGA and LPA, Lesion-TOADS, lesionBrain, BIANCA and nicMSlesions) with default and optimised settings (when feasible) was conducted. WHASA-3D confirmed an improved performance with respect to WHASA, achieving a better spatial overlap (Dice) (0.67 vs 0.63), a reduced absolute volume error (AVE) (3.11 vs 6.2 mL) and an increased volume agreement (intraclass correlation coefficient, ICC) (0.96 vs 0.78). Compared to available state-of-the-art algorithms on the MS database, WHASA-3D outperformed both unsupervised and supervised methods when used with their default settings, showing the highest volume agreement (ICC = 0.95) as well as the highest average Dice (0.58). Optimising and/or retraining LST-LGA, BIANCA and nicMSlesions, using a subset of the MS database as training set, resulted in improved performances on the remaining testing set (average Dice: LST-LGA default/optimized = 0.41/0.51, BIANCA default/optimized = 0.22/0.39, nicMSlesions default/optimized = 0.17/0.63, WHASA-3D = 0.58). Evaluation and comparison results suggest that WHASA-3D is a reliable and easy-to-use method for the automated segmentation of white matter hyperintensities, for both MS lesions and age-related WMH. Further validation on larger datasets would be useful to confirm these first findings.
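The final refinement step mentioned above, geodesic dilation, corresponds to morphological reconstruction by dilation; a toy illustration with a generic library call is given below (the arrays and sizes are made up, and this is not the WHASA-3D code).

```python
import numpy as np
from skimage.morphology import reconstruction

mask = np.zeros((9, 9)); mask[2:7, 2:7] = 1      # constraint region (candidate WMH extent)
seed = np.zeros_like(mask); seed[4, 4] = 1       # detected core voxel
refined = reconstruction(seed, mask, method="dilation")   # grow the seed only inside the mask
print(refined.astype(int))
```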
Affiliation(s)
- Philippe Tran
- Qynapse, Paris, France; Equipe-projet ARAMIS, ICM, CNRS UMR 7225, Inserm U1117, Sorbonne Université UMR_S 1127, Centre Inria de Paris, Groupe Hospitalier Pitié-Salpêtrière Charles Foix, Faculté de Médecine Sorbonne Université, Paris, France
- Emmanuelle Gourieux
- CATI, ICM, CNRS UMR 7225, Inserm U1117, Sorbonne Université UMR_S 1127, Paris, France; NeuroSpin, CEA, Saclay, France
- François Cotton
- Service de Radiologie, Centre Hospitalier Lyon-Sud, Hospices Civils de Lyon, Pierre-Bénite, France; Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, F-69495, Pierre-Bénite, France
- Pierre Krolak-Salmon
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1206, F-69495, Pierre-Bénite, France; Clinical and Research Memory Centre of Lyon, Hospices Civils de Lyon, Lyon, France; INSERM, U1028, UMR CNRS 5292, Lyon Neuroscience Research Center, Lyon, France
- Damien Heidelberg
- Service de Radiologie, Centre Hospitalier Lyon-Sud, Hospices Civils de Lyon, Pierre-Bénite, France
- Nadya Pyatigorskaya
- Department of Neuroradiology, Groupe Hospitalier Pitié-Salpêtrière, AP-HP, Sorbonne Université UMR_S 1127, Paris, France
- Sébastian Ströer
- Department of Neuroradiology, Groupe Hospitalier Pitié-Salpêtrière, AP-HP, Sorbonne Université UMR_S 1127, Paris, France
- Didier Dormont
- Equipe-projet ARAMIS, ICM, CNRS UMR 7225, Inserm U1117, Sorbonne Université UMR_S 1127, Centre Inria de Paris, Groupe Hospitalier Pitié-Salpêtrière Charles Foix, Faculté de Médecine Sorbonne Université, Paris, France; Department of Neuroradiology, Groupe Hospitalier Pitié-Salpêtrière, AP-HP, Sorbonne Université UMR_S 1127, Paris, France
- Marie Chupin
- CATI, ICM, CNRS UMR 7225, Inserm U1117, Sorbonne Université UMR_S 1127, Paris, France
11
Ma Y, Zhang C, Cabezas M, Song Y, Tang Z, Liu D, Cai W, Barnett M, Wang C. Multiple Sclerosis Lesion Analysis in Brain Magnetic Resonance Images: Techniques and Clinical Applications. IEEE J Biomed Health Inform 2022; 26:2680-2692. [PMID: 35171783] [DOI: 10.1109/jbhi.2022.3151741]
Abstract
Multiple sclerosis (MS) is a chronic inflammatory and degenerative disease of the central nervous system, characterized by the appearance of focal lesions in the white and gray matter that topographically correlate with an individual patient's neurological symptoms and signs. Magnetic resonance imaging (MRI) provides detailed in-vivo structural information, permitting the quantification and categorization of MS lesions that critically inform disease management. Traditionally, MS lesions have been manually annotated on 2D MRI slices, a process that is inefficient and prone to inter-/intra-observer errors. Recently, automated statistical imaging analysis techniques have been proposed to detect and segment MS lesions based on MRI voxel intensity. However, their effectiveness is limited by the heterogeneity of both MRI data acquisition techniques and the appearance of MS lesions. By learning complex lesion representations directly from images, deep learning techniques have achieved remarkable breakthroughs in the MS lesion segmentation task. Here, we provide a comprehensive review of state-of-the-art automatic statistical and deep-learning MS segmentation methods and discuss current and future clinical applications. Further, we review technical strategies, such as domain adaptation, to enhance MS lesion segmentation in real-world clinical settings.
12
A Review on Atrial Fibrillation (Computer Simulation and Clinical Perspectives). Hearts 2022. [DOI: 10.3390/hearts3010005]
Abstract
Atrial fibrillation (AF), a heart condition, has been a well-researched topic for the past few decades. This multidisciplinary field of study spans signal processing, finite element analysis, mathematical modeling, optimization, and clinical procedures. This article presents a comprehensive review of journal articles published in the field of AF. Topics from age-old fundamental concepts to the specialized modern techniques involved in today's AF research are discussed. A large number of research articles have already been published on the modeling and simulation of AF. In comparison, the diagnosis and post-operative procedures for AF patients have not yet been fully understood or explored by researchers. The simulation and modeling of AF have been investigated by many researchers in this field. Cellular models, tissue models, and geometric models, among others, have been used to simulate AF. Because of its very complex nature, the causes of AF are not yet fully understood, but the simulated results are validated with real-life patient data. Many algorithms have been proposed to detect the source of AF in the human atria. There are many ablation strategies for AF patients, but the search for more efficient ablation strategies is still ongoing. AF management for patients with different stages of AF has been discussed in the literature as well, but it is mostly limited to patients with persistent AF. The authors hope that this study helps to identify existing research gaps in the analysis and diagnosis of AF.
13
Nalepa J. AIM and Brain Tumors. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_284]
14
Lapuyade-Lahorgue J, Ruan S. Segmentation of multicorrelated images with copula models and conditionally random fields. J Med Imaging (Bellingham) 2022; 9:014001. [PMID: 35024379] [PMCID: PMC8741411] [DOI: 10.1117/1.jmi.9.1.014001]
Abstract
Purpose: Multisource images are of great interest in medical imaging, since they enable the use of complementary information from different sources, such as the T1 and T2 modalities in MRI. However, such multisource data can also be subject to redundancy and correlation. The question is how to efficiently fuse the multisource information without reinforcing the redundancy. We propose a method for segmenting multisource images that are statistically correlated. Approach: The proposed method is the continuation of prior work in which we introduced the copula model into hidden Markov fields (HMF). To achieve the multisource segmentations, we use a functional measure of dependency called a "copula," which is incorporated into conditional random fields (CRF). Contrary to HMF, where prior knowledge of the hidden states is modeled by a Markov field, in CRF there is no prior information and only the distribution of the hidden states conditionally on the observations can be known. This conditional distribution depends on the data and can be modeled by an energy function composed of two terms. The first groups voxels with similar intensities into the same class; the second encourages a pair of voxels to be in the same class if the difference between their intensities is not too large. Results: A comparison between HMF and CRF is performed both theoretically and experimentally, using simulated and real data from BRATS 2013. Moreover, our method is compared with different state-of-the-art methods, including supervised (convolutional neural networks) and unsupervised (hierarchical MRF) approaches. Our unsupervised method gives results similar to those of decision trees for synthetic images and of convolutional neural networks for real images; both of these are supervised methods. Conclusions: We compare two statistical methods using the copula, HMF and CRF, to deal with multicorrelated images and demonstrate the interest of using the copula. In both models, the copula considerably improves the results compared with individual segmentations.
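For readers unfamiliar with the two-term energy alluded to above, a standard contrast-sensitive form of such a conditional-random-field energy is reproduced below; it illustrates the idea and is not necessarily the authors' exact parameterization.

```latex
E(x \mid y) \;=\; \sum_{i} \psi_i(x_i, y_i)
\;+\; \lambda \sum_{(i,j) \in \mathcal{N}}
\exp\!\left(-\frac{(y_i - y_j)^2}{2\sigma^2}\right)\,\mathbf{1}[x_i \neq x_j],
```

where x denotes the class labels, y the observed intensities, and N the neighbourhood system: the first term groups voxels with similar intensities into the same class, and the second discourages label changes between neighbouring voxels whose intensities are close.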
Affiliation(s)
- Jérôme Lapuyade-Lahorgue
- University of Rouen, LITIS, Eq. Quantif, Rouen, France
- Su Ruan
- University of Rouen, LITIS, Eq. Quantif, Rouen, France
15
Kaur A, Kaur L, Singh A. GA-UNet: UNet-based framework for segmentation of 2D and 3D medical images applicable on heterogeneous datasets. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06134-z]
16
Krishna Priya R, Chacko S. Improved particle swarm optimized deep convolutional neural network with super-pixel clustering for multiple sclerosis lesion segmentation in brain MRI imaging. International Journal for Numerical Methods in Biomedical Engineering 2021; 37:e3506. [PMID: 34181310] [DOI: 10.1002/cnm.3506]
Abstract
Multiple sclerosis (MS) is a central nervous system (CNS) disease affecting the insulating myelin sheaths around the brain axons. In today's world, MS is extensively diagnosed and monitored using MRI, because of the sensitivity of structural MRI to the dissemination of white matter lesions in space and time. The main aim of this study is to propose multiple sclerosis lesion segmentation in brain MRI using an optimized deep convolutional neural network and super-pixel clustering. The proposed methodology comprises three stages: (a) preprocessing, (b) super-pixel segmentation, and (c) super-pixel classification. In the first stage, image enhancement and skull stripping are performed as preprocessing. In the second stage, MS lesion and non-MS lesion regions are segmented by applying the SLICO algorithm to each slice of the volume. In the third stage, CNN training and classification are performed using the segmented lesion and non-lesion regions. To handle this complex task, a newly developed Improved Particle Swarm Optimization (IPSO) based optimized convolutional neural network classifier is applied. On clinical MS data, the approach exhibits a significant increase in the accuracy of segmenting WM lesions when compared with the other evaluated methods.
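The SLICO super-pixel step in stage (b) is available in standard libraries; a minimal sketch follows (the number of segments is a placeholder, not a value from the paper).

```python
import numpy as np
from skimage.segmentation import slic

slice_2d = np.random.rand(217, 181)        # stand-in preprocessed MRI slice
labels = slic(slice_2d, n_segments=400, slic_zero=True, channel_axis=None)
print(np.unique(labels).size, "super-pixels")   # each region would then be classified by the CNN
```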
Affiliation(s)
- R Krishna Priya
- Department of Electrical and Communication Engineering, National University of Science and Technology, Oman
- Susamma Chacko
- Department of Quality Enhancement and Assurance, National University of Science and Technology, Oman
17
Gaj S, Ontaneda D, Nakamura K. Automatic segmentation of gadolinium-enhancing lesions in multiple sclerosis using deep learning from clinical MRI. PLoS One 2021; 16:e0255939. [PMID: 34469432] [PMCID: PMC8409666] [DOI: 10.1371/journal.pone.0255939]
Abstract
Gadolinium-enhancing lesions reflect active disease and are critical for in-patient monitoring in multiple sclerosis (MS). In this work, we have developed the first fully automated method to segment and count the gadolinium-enhancing lesions from routine clinical MRI of MS patients. The proposed method first segments the potential lesions using 2D-UNet from multi-channel scans (T1 post-contrast, T1 pre-contrast, FLAIR, T2, and proton-density) and classifies the lesions using a random forest classifier. The algorithm was trained and validated on 600 MRIs with manual segmentation. We compared the effect of loss functions (Dice, cross entropy, and bootstrapping cross entropy) and number of input contrasts. We compared the lesion counts with those by radiologists using 2,846 images. Dice, lesion-wise sensitivity, and false discovery rate with full 5 contrasts were 0.698, 0.844, and 0.307, which improved to 0.767, 0.969, and 0.00 in large lesions (>100 voxels). The model using bootstrapping loss function provided a statistically significant increase of 7.1% in sensitivity and of 2.3% in Dice compared with the model using cross entropy loss. T1 post/pre-contrast and FLAIR were the most important contrasts. For large lesions, the 2D-UNet model trained using T1 pre-contrast, FLAIR, T2, PD had a lesion-wise sensitivity of 0.688 and false discovery rate 0.083, even without T1 post-contrast. For counting lesions in 2846 routine MRI images, the model with 2D-UNet and random forest, which was trained with bootstrapping cross entropy, achieved accuracy of 87.7% using T1 pre-contrast, T1 post-contrast, and FLAIR when lesion counts were categorized as 0, 1, and 2 or more. The model performs well in routine non-standardized MRI datasets, allows large-scale analysis of clinical datasets, and may have clinical applications.
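The bootstrapping cross-entropy loss compared above is commonly implemented by averaging only over the hardest pixels; a generic sketch is shown below (the top-k fraction is an assumption, not the value used in the study).

```python
import torch
import torch.nn.functional as F

def bootstrapped_cross_entropy(logits, target, k_fraction=0.25):
    """logits: (N, C, H, W); target: (N, H, W) integer labels."""
    per_pixel = F.cross_entropy(logits, target, reduction="none").flatten()
    k = max(1, int(k_fraction * per_pixel.numel()))
    return per_pixel.topk(k).values.mean()        # average the k highest-loss pixels only

print(bootstrapped_cross_entropy(torch.randn(2, 2, 64, 64), torch.randint(0, 2, (2, 64, 64))))
```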
Affiliation(s)
- Sibaji Gaj
- Department of Biomedical Engineering, Cleveland Clinic, Cleveland, Ohio, United States of America
- Daniel Ontaneda
- Mellen Center for Multiple Sclerosis, Cleveland Clinic, Cleveland, Ohio, United States of America
- Kunio Nakamura
- Department of Biomedical Engineering, Cleveland Clinic, Cleveland, Ohio, United States of America
18
Homayoun H, Ebrahimpour-komleh H. Automated Segmentation of Abnormal Tissues in Medical Images. J Biomed Phys Eng 2021; 11:415-424. [PMID: 34458189] [PMCID: PMC8385212] [DOI: 10.31661/jbpe.v0i0.958]
Abstract
Nowadays, medical imaging modalities are available almost everywhere. These modalities form the basis for diagnosing various diseases that affect specific tissue types. Physicians usually look for abnormalities in these modalities during diagnostic procedures. The count and volume of abnormalities are very important for the optimal treatment of patients. Segmentation is a preliminary step for these measurements and for further analysis. Manual segmentation of abnormalities is cumbersome, error-prone, and subjective; as a result, automated segmentation of abnormal tissue is needed. In this study, representative techniques for the segmentation of abnormal tissues are reviewed. The main focus is on the segmentation of multiple sclerosis lesions, breast cancer masses, lung nodules, and skin lesions. As experimental results demonstrate, methods based on deep learning techniques perform better than other methods, which are usually based on hand-crafted feature engineering. Finally, the most common measures used to evaluate automated abnormal tissue segmentation methods are reported.
Affiliation(s)
- Hassan Homayoun
- PhD, Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Kashan, Kashan, Iran
- Hossein Ebrahimpour-komleh
- PhD, Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Kashan, Kashan, Iran
19
Koley S, Dutta PK, Aganj I. Radius-optimized efficient template matching for lesion detection from brain images. Sci Rep 2021; 11:11586. [PMID: 34078935] [PMCID: PMC8172536] [DOI: 10.1038/s41598-021-90147-0]
Abstract
Computer-aided detection of brain lesions from volumetric magnetic resonance imaging (MRI) is in demand for fast and automatic diagnosis of neural diseases. The template-matching technique can provide a satisfactory outcome for automatic localization of brain lesions; however, finding the optimal template size that maximizes the similarity of the template and the lesion remains challenging. This increases the complexity of the algorithm and the requirement for computational resources when processing large MRI volumes with three-dimensional (3D) templates. Hence, reducing the computational complexity of template matching is needed. In this paper, we first propose a mathematical framework for computing the normalized cross-correlation coefficient (NCCC) as the similarity measure between the MRI volume and an approximated 3D Gaussian template with linear time complexity, [Formula: see text], as opposed to the conventional fast Fourier transform (FFT) based approach with complexity [Formula: see text], where [Formula: see text] is the number of voxels in the image and [Formula: see text] is the number of tried template radii. We then propose a mathematical formulation to analytically estimate the optimal template radius for each voxel in the image and compute the NCCC with the location-dependent optimal radius, reducing the complexity to [Formula: see text]. We test our methods on one synthetic and two real multiple-sclerosis databases, and compare their performance in lesion detection with FFT and a state-of-the-art lesion prediction algorithm. We demonstrate through our experiments the efficiency of the proposed methods for brain lesion detection and their comparable performance with existing techniques.
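For reference, the normalized cross-correlation coefficient underlying this approach has the standard form below, with f the image, t the (Gaussian) template shifted to voxel v, and the bars denoting means over the template support; this is the textbook definition, not the paper's linear-time reformulation.

```latex
\mathrm{NCCC}(\mathbf{v}) \;=\;
\frac{\sum_{\mathbf{x}} \bigl(f(\mathbf{x}) - \bar{f}_{\mathbf{v}}\bigr)\,\bigl(t(\mathbf{x}-\mathbf{v}) - \bar{t}\,\bigr)}
{\sqrt{\sum_{\mathbf{x}} \bigl(f(\mathbf{x}) - \bar{f}_{\mathbf{v}}\bigr)^{2}\,\sum_{\mathbf{x}} \bigl(t(\mathbf{x}-\mathbf{v}) - \bar{t}\,\bigr)^{2}}}
```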
Affiliation(s)
- Subhranil Koley
- School of Medical Science and Technology, Indian Institute of Technology Kharagpur, Kharagpur, WB, 721302, India
- Pranab K Dutta
- Electrical Engineering Department, Indian Institute of Technology Kharagpur, Kharagpur, WB, 721302, India
- Iman Aganj
- Athinoula A. Martinos Center for Biomedical Imaging, Radiology Department, Massachusetts General Hospital, Harvard Medical School, 149 13th St., Suite 2301, Charlestown, MA, 02129, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St., Cambridge, MA, 02139, USA
20
Automated Spleen Injury Detection Using 3D Active Contours and Machine Learning. Entropy 2021; 23:e23040382. [PMID: 33804831] [PMCID: PMC8063804] [DOI: 10.3390/e23040382]
Abstract
The spleen is one of the most frequently injured organs in blunt abdominal trauma. Computed tomography (CT) is the imaging modality of choice to assess patients with blunt spleen trauma, which may include lacerations, subcapsular or parenchymal hematomas, active hemorrhage, and vascular injuries. While computer-assisted diagnosis systems exist for other conditions assessed using CT scans, the current method to detect spleen injuries involves the manual review of scans by radiologists, which is a time-consuming and repetitive process. In this study, we propose an automated spleen injury detection method using machine learning. CT scans from patients experiencing traumatic injuries were collected from Michigan Medicine and the Crash Injury Research Engineering Network (CIREN) dataset. Ninety-nine scans of healthy and lacerated spleens were split into disjoint training and test sets, with random forest (RF), naive Bayes, SVM, k-nearest neighbors (k-NN) ensemble, and subspace discriminant ensemble models trained via 5-fold cross validation. Of these models, random forest performed the best, achieving an Area Under the receiver operating characteristic Curve (AUC) of 0.91 and an F1 score of 0.80 on the test set. These results suggest that an automated, quantitative assessment of traumatic spleen injury has the potential to enable faster triage and improve patient outcomes.
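A minimal sketch of the evaluation recipe named here (a random forest assessed with 5-fold cross-validation and ROC AUC) is shown below on synthetic stand-in features; it is not the study's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(99, 20))       # hypothetical features derived from the 3D spleen contours
y = rng.integers(0, 2, size=99)     # 0 = healthy, 1 = lacerated (stand-in labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("mean cross-validated AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```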
21
Kanber B, Vos SB, de Tisi J, Wood TC, Barker GJ, Rodionov R, Chowdhury FA, Thom M, Alexander DC, Duncan JS, Winston GP. Detection of covert lesions in focal epilepsy using computational analysis of multimodal magnetic resonance imaging data. Epilepsia 2021; 62:807-816. [PMID: 33567113] [PMCID: PMC8436754] [DOI: 10.1111/epi.16836]
Abstract
Objective To compare the location of suspect lesions detected by computational analysis of multimodal magnetic resonance imaging data with areas of seizure onset, early propagation, and interictal epileptiform discharges (IEDs) identified with stereoelectroencephalography (SEEG) in a cohort of patients with medically refractory focal epilepsy and radiologically normal magnetic resonance imaging (MRI) scans. Methods We developed a method of lesion detection using computational analysis of multimodal MRI data in a cohort of 62 control subjects, and 42 patients with focal epilepsy and MRI‐visible lesions. We then applied it to detect covert lesions in 27 focal epilepsy patients with radiologically normal MRI scans, comparing our findings with the areas of seizure onset, early propagation, and IEDs identified at SEEG. Results Seizure‐onset zones (SoZs) were identified at SEEG in 18 of the 27 patients (67%) with radiologically normal MRI scans. In 11 of these 18 cases (61%), concordant abnormalities were detected by our method. In the remaining seven cases, either early seizure propagation or IEDs were observed within the abnormalities detected, or there were additional areas of imaging abnormalities found by our method that were not sampled at SEEG. In one of the nine patients (11%) in whom SEEG was inconclusive, an abnormality, which may have been involved in seizures, was identified by our method and was not sampled at SEEG. Significance Computational analysis of multimodal MRI data revealed covert abnormalities in the majority of patients with refractory focal epilepsy and radiologically normal MRI that co‐located with SEEG defined zones of seizure onset. The method could help identify areas that should be targeted with SEEG when considering epilepsy surgery.
Affiliation(s)
- Baris Kanber
- Centre for Medical Image Computing, University College London, London, UK; Department of Clinical and Experimental Epilepsy, UCL Queen Square Institute of Neurology, London, UK; MRI Unit, Epilepsy Society, Chalfont St Peter, UK; National Institute for Health Research Biomedical Research Centre at University College London and University College London NHS Foundation Trust, London, UK
- Sjoerd B Vos
- Centre for Medical Image Computing, University College London, London, UK; Department of Clinical and Experimental Epilepsy, UCL Queen Square Institute of Neurology, London, UK; MRI Unit, Epilepsy Society, Chalfont St Peter, UK; National Institute for Health Research Biomedical Research Centre at University College London and University College London NHS Foundation Trust, London, UK; Neuroradiological Academic Unit, UCL Queen Square Institute of Neurology, London, UK
- Jane de Tisi
- Department of Clinical and Experimental Epilepsy, UCL Queen Square Institute of Neurology, London, UK
- Tobias C Wood
- Department of Neuroimaging, King's College London, London, UK
- Gareth J Barker
- Department of Neuroimaging, King's College London, London, UK
- Roman Rodionov
- Department of Clinical and Experimental Epilepsy, UCL Queen Square Institute of Neurology, London, UK; MRI Unit, Epilepsy Society, Chalfont St Peter, UK
- Fahmida Amin Chowdhury
- Department of Clinical and Experimental Epilepsy, UCL Queen Square Institute of Neurology, London, UK
- Maria Thom
- Department of Clinical and Experimental Epilepsy, UCL Queen Square Institute of Neurology, London, UK; Division of Neuropathology, The National Hospital for Neurology and Neurosurgery, London, UK
- Daniel C Alexander
- Centre for Medical Image Computing, University College London, London, UK; National Institute for Health Research Biomedical Research Centre at University College London and University College London NHS Foundation Trust, London, UK
- John S Duncan
- Department of Clinical and Experimental Epilepsy, UCL Queen Square Institute of Neurology, London, UK; MRI Unit, Epilepsy Society, Chalfont St Peter, UK; National Institute for Health Research Biomedical Research Centre at University College London and University College London NHS Foundation Trust, London, UK
- Gavin P Winston
- Department of Clinical and Experimental Epilepsy, UCL Queen Square Institute of Neurology, London, UK; MRI Unit, Epilepsy Society, Chalfont St Peter, UK; Department of Medicine, Division of Neurology, Queen's University, Kingston, Canada
22
Gryska E, Schneiderman J, Björkman-Burtscher I, Heckemann RA. Automatic brain lesion segmentation on standard magnetic resonance images: a scoping review. BMJ Open 2021; 11:e042660. [PMID: 33514580] [PMCID: PMC7849889] [DOI: 10.1136/bmjopen-2020-042660]
Abstract
OBJECTIVES Medical image analysis practices face challenges that can potentially be addressed with algorithm-based segmentation tools. In this study, we map the field of automatic MR brain lesion segmentation to understand the clinical applicability of prevalent methods and study designs, as well as challenges and limitations in the field. DESIGN Scoping review. SETTING Three databases (PubMed, IEEE Xplore and Scopus) were searched with tailored queries. Studies were included based on predefined criteria. Emerging themes during consecutive title, abstract, methods and whole-text screening were identified. The full-text analysis focused on materials, preprocessing, performance evaluation and comparison. RESULTS Out of 2990 unique articles identified through the search, 441 articles met the eligibility criteria, with an estimated growth rate of 10% per year. We present a general overview and trends in the field with regard to publication sources, segmentation principles used and types of lesions. Algorithms are predominantly evaluated by measuring the agreement of segmentation results with a trusted reference. Few articles describe measures of clinical validity. CONCLUSIONS The observed reporting practices leave room for improvement with a view to studying replication, method comparison and clinical applicability. To promote this improvement, we propose a list of recommendations for future studies in the field.
Affiliation(s)
- Emilia Gryska
- Medical Radiation Sciences, Goteborgs universitet Institutionen for kliniska vetenskaper, Goteborg, Sweden
- Justin Schneiderman
- Sektionen för klinisk neurovetenskap, Goteborgs Universitet Institutionen for Neurovetenskap och fysiologi, Goteborg, Sweden
- Rolf A Heckemann
- Medical Radiation Sciences, Goteborgs universitet Institutionen for kliniska vetenskaper, Goteborg, Sweden
23
|
Nalepa J. AIM and Brain Tumors. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_284-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
24
|
DIKA-Nets: Domain-invariant knowledge-guided attention networks for brain skull stripping of early developing macaques. Neuroimage 2020; 227:117649. [PMID: 33338616 DOI: 10.1016/j.neuroimage.2020.117649] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Revised: 12/02/2020] [Accepted: 12/03/2020] [Indexed: 01/18/2023] Open
Abstract
As non-human primates, macaques have a close phylogenetic relationship to human beings and have been proven to be a valuable and widely used animal model in human neuroscience research. Accurate skull stripping (a.k.a. brain extraction) of brain magnetic resonance imaging (MRI) is a crucial prerequisite in neuroimaging analysis of macaques. Most of the current skull stripping methods can achieve satisfactory results for human brains, but when applied to macaque brains, especially during early brain development, the results are often unsatisfactory. In fact, the early dynamic, regionally-heterogeneous development of macaque brains, accompanied by poor and age-related contrast between different anatomical structures, poses significant challenges for accurate skull stripping. To overcome these challenges, we propose a fully-automated framework to effectively fuse the age-specific intensity information and domain-invariant prior knowledge as important guiding information for robust skull stripping of developing macaques from 0 to 36 months of age. Specifically, we generate a Signed Distance Map (SDM) and a Center of Gravity Distance Map (CGDM) based on the intermediate segmentation results as guidance. Instead of using local convolution, we fuse all information using the Dual Self-Attention Module (DSAM), which can capture global spatial and channel-dependent information of feature maps. To extensively evaluate the performance, we adopt two relatively large, challenging MRI datasets from rhesus macaques and cynomolgus macaques, respectively, with a total of 361 scans from two different scanners with different imaging protocols. We perform cross-validation by using one dataset for training and the other one for testing. Our method outperforms five popular brain extraction tools and three deep-learning-based methods on cross-source MRI datasets without any transfer learning.
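For readers unfamiliar with the distance-map guidance mentioned in this abstract, the following is a minimal sketch, not the authors' code, of how a signed distance map can be computed from an intermediate binary brain mask using SciPy; the array names and the toy cubic mask are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask: np.ndarray) -> np.ndarray:
    """Signed Euclidean distance map of a binary mask: positive inside,
    negative outside, zero on the boundary."""
    mask = mask.astype(bool)
    inside = distance_transform_edt(mask)    # distance to the nearest background voxel
    outside = distance_transform_edt(~mask)  # distance to the nearest foreground voxel
    return inside - outside

# Toy example: a cubic "brain" mask inside a 64^3 volume.
toy_mask = np.zeros((64, 64, 64), dtype=bool)
toy_mask[16:48, 16:48, 16:48] = True
sdm = signed_distance_map(toy_mask)
print(sdm.min(), sdm.max())
```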
Collapse
|
25
|
Hill I, Palombo M, Santin M, Branzoli F, Philippe AC, Wassermann D, Aigrot MS, Stankoff B, Baron-Van Evercooren A, Felfli M, Langui D, Zhang H, Lehericy S, Petiet A, Alexander DC, Ciccarelli O, Drobnjak I. Machine learning based white matter models with permeability: An experimental study in cuprizone treated in-vivo mouse model of axonal demyelination. Neuroimage 2020; 224:117425. [PMID: 33035669 DOI: 10.1016/j.neuroimage.2020.117425] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2019] [Revised: 09/29/2020] [Accepted: 09/30/2020] [Indexed: 01/14/2023] Open
Abstract
The intra-axonal water exchange time (τi), a parameter associated with axonal permeability, could be an important biomarker for understanding and treating demyelinating pathologies such as Multiple Sclerosis. Diffusion-Weighted MRI (DW-MRI) is sensitive to changes in permeability; however, the parameter has so far remained elusive due to the lack of general biophysical models that incorporate it. Machine learning based computational models can potentially be used to estimate such parameters. Recently, a theoretical framework using a random forest (RF) regressor suggested for the first time that this is a promising new approach for permeability estimation. In this study, we adopt such an approach and for the first time experimentally investigate it for demyelinating pathologies through direct comparison with histology. We construct a computational model using Monte Carlo simulations and an RF regressor in order to learn a mapping between features derived from DW-MRI signals and ground truth microstructure parameters. We test our model in simulations, and find strong correlations between the predicted and ground truth parameters (intra-axonal volume fraction f: R2 = 0.99, τi: R2 = 0.84, intrinsic diffusivity d: R2 = 0.99). We then apply the model in-vivo, on a controlled cuprizone (CPZ) mouse model of demyelination, comparing the results from two cohorts of mice, CPZ (N=8) and healthy age-matched wild-type (WT, N=8). We find that the RF model estimates sensible microstructure parameters for both groups, matching values found in the literature. Furthermore, we perform histology for both groups using electron microscopy (EM), measuring the thickness of the myelin sheath as a surrogate for exchange time. Histology results show that our RF model estimates are very strongly correlated with the EM measurements (ρ = 0.98 for f, ρ = 0.82 for τi). Finally, we find a statistically significant decrease in τi in all three regions of the corpus callosum (splenium/genu/body) of the CPZ cohort (<τi>=310ms/330ms/350ms) compared to the WT group (<τi>=370ms/370ms/380ms). This is in line with our expectations that τi is lower in regions where the myelin sheath is damaged, as axonal membranes become more permeable. Overall, these results demonstrate, for the first time experimentally and in vivo, that a computational model learned from simulations can reliably estimate microstructure parameters, including the axonal permeability.
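The general random-forest regression strategy described above (learning a mapping from signal-derived features to microstructure parameters) can be sketched as follows; the synthetic arrays below are placeholders for the Monte Carlo simulations and DW-MRI features, which are not reproduced here, so this illustrates only the fitting step.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder training data: rows are feature vectors derived from simulated
# DW-MRI signals; targets are (f, tau_i, d) per simulation.
n_samples, n_features = 5000, 20
params = rng.uniform(low=[0.3, 100.0, 0.5], high=[0.9, 800.0, 2.5], size=(n_samples, 3))
X = rng.normal(size=(n_samples, n_features))
X[:, :3] += params / params.max(axis=0)   # toy forward model linking features to parameters

X_tr, X_te, y_tr, y_te = train_test_split(X, params, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_tr, y_tr)
print("R^2 (averaged over the three outputs):", rf.score(X_te, y_te))
```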
Collapse
Affiliation(s)
- Ioana Hill
- Centre for Medical Image Computing and Dept of Computer Science, University College London, London, UK
| | - Marco Palombo
- Centre for Medical Image Computing and Dept of Computer Science, University College London, London, UK.
| | - Mathieu Santin
- Institut du Cerveau et de la Moelle épinière, ICM, Sorbonne Université, Inserm 1127, CNRS UMR 7225, F-75013, Paris, France; Institut du Cerveau et de la Moelle épinière, ICM, Centre de NeuroImagerie de Recherche, CENIR, Paris, France
| | - Francesca Branzoli
- Institut du Cerveau et de la Moelle épinière, ICM, Sorbonne Université, Inserm 1127, CNRS UMR 7225, F-75013, Paris, France; Institut du Cerveau et de la Moelle épinière, ICM, Centre de NeuroImagerie de Recherche, CENIR, Paris, France
| | - Anne-Charlotte Philippe
- Institut du Cerveau et de la Moelle épinière, ICM, Sorbonne Université, Inserm 1127, CNRS UMR 7225, F-75013, Paris, France
| | - Demian Wassermann
- Université Côte d'Azur, Inria, Sophia-Antipolis, France; Parietal, CEA, Inria, Saclay, Île-de-France
| | - Marie-Stephane Aigrot
- Institut du Cerveau et de la Moelle épinière, ICM, Sorbonne Université, Inserm 1127, CNRS UMR 7225, F-75013, Paris, France
| | - Bruno Stankoff
- Institut du Cerveau et de la Moelle épinière, ICM, Sorbonne Université, Inserm 1127, CNRS UMR 7225, F-75013, Paris, France; AP-HP, Hôpital Saint-Antoine, Paris, France
| | - Anne Baron-Van Evercooren
- Institut du Cerveau et de la Moelle épinière, ICM, Sorbonne Université, Inserm 1127, CNRS UMR 7225, F-75013, Paris, France
| | - Mehdi Felfli
- Institut du Cerveau et de la Moelle épinière, ICM, Sorbonne Université, Inserm 1127, CNRS UMR 7225, F-75013, Paris, France
| | - Dominique Langui
- Institut du Cerveau et de la Moelle épinière, ICM, Sorbonne Université, Inserm 1127, CNRS UMR 7225, F-75013, Paris, France
| | - Hui Zhang
- Centre for Medical Image Computing and Dept of Computer Science, University College London, London, UK
| | - Stephane Lehericy
- Institut du Cerveau et de la Moelle épinière, ICM, Sorbonne Université, Inserm 1127, CNRS UMR 7225, F-75013, Paris, France; Institut du Cerveau et de la Moelle épinière, ICM, Centre de NeuroImagerie de Recherche, CENIR, Paris, France
| | - Alexandra Petiet
- Institut du Cerveau et de la Moelle épinière, ICM, Sorbonne Université, Inserm 1127, CNRS UMR 7225, F-75013, Paris, France; Institut du Cerveau et de la Moelle épinière, ICM, Centre de NeuroImagerie de Recherche, CENIR, Paris, France
| | - Daniel C Alexander
- Centre for Medical Image Computing and Dept of Computer Science, University College London, London, UK
| | - Olga Ciccarelli
- Dept. of Neuroinflammation, University College London, Queen Square Institute of Neurology, University College London, London, UK
| | - Ivana Drobnjak
- Centre for Medical Image Computing and Dept of Computer Science, University College London, London, UK
| |
Collapse
|
26
|
Martins SB, Telea AC, Falcão AX. Investigating the impact of supervoxel segmentation for unsupervised abnormal brain asymmetry detection. Comput Med Imaging Graph 2020; 85:101770. [PMID: 32854021 DOI: 10.1016/j.compmedimag.2020.101770] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2019] [Revised: 07/27/2020] [Accepted: 07/31/2020] [Indexed: 11/26/2022]
Abstract
Several brain disorders are associated with abnormal brain asymmetries (asymmetric anomalies). Several computer-based methods aim to detect such anomalies automatically. Recent advances in this area use automatic unsupervised techniques that extract pairs of symmetric supervoxels in the hemispheres, model normal brain asymmetries for each pair from healthy subjects, and treat outliers as anomalies. Yet, there is no deep understanding of the impact of supervoxel segmentation quality on abnormal asymmetry detection, especially for small anomalies, nor of the added value of using a specialized model for each supervoxel pair instead of a single global appearance model. We aim to answer these questions by a detailed evaluation of different scenarios for supervoxel segmentation and classification for detecting abnormal brain asymmetries. Experimental results on 3D MR-T1 brain images of stroke patients confirm the importance of high-quality supervoxels that fit anomalies and of using a specific classifier for each supervoxel. Next, we present a refinement of the detection method that reduces the number of false-positive supervoxels, thereby making the detection method easier to use for visual inspection and analysis of the detected anomalies.
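The two ingredients evaluated in this study, supervoxel segmentation and a per-supervoxel classifier, can be illustrated with a minimal sketch that uses SLIC supervoxels (scikit-image >= 0.19 is assumed for the channel_axis argument) and a one-class SVM as a stand-in outlier detector; it is not the authors' pipeline, and the random arrays are placeholders for registered MR-T1 data.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
volume = rng.normal(size=(32, 64, 64))      # placeholder for a registered MR-T1 volume

# Supervoxel segmentation of the grayscale 3D volume.
labels = slic(volume, n_segments=200, compactness=0.1, channel_axis=None)

def region_features(vol, lab):
    """Per-supervoxel appearance feature: mean and std of intensity."""
    return np.asarray([[vol[lab == r].mean(), vol[lab == r].std()]
                       for r in np.unique(lab)])

# In the asymmetry setting, one such model would be trained per symmetric
# supervoxel pair from healthy subjects; here a single toy model is fit.
healthy_feats = rng.normal(size=(100, 2))   # placeholder healthy-subject features
clf = OneClassSVM(nu=0.05, gamma="scale").fit(healthy_feats)
outliers = np.where(clf.predict(region_features(volume, labels)) == -1)[0]
print(f"{outliers.size} supervoxels flagged as abnormal")
```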
Collapse
Affiliation(s)
- Samuel B Martins
- Laboratory of Image Data Science (LIDS), Institute of Computing, University of Campinas, Brazil; Bernoulli Institute, University of Groningen, The Netherlands; Federal Institute of São Paulo, Campinas, Brazil
| | - Alexandru C Telea
- Department of Information and Computing Sciences, Utrecht University, The Netherlands
| | - Alexandre X Falcão
- Laboratory of Image Data Science (LIDS), Institute of Computing, University of Campinas, Brazil
| |
Collapse
|
27
|
Multimodal MRI Brain Tumor Image Segmentation Using Sparse Subspace Clustering Algorithm. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2020; 2020:8620403. [PMID: 32714431 PMCID: PMC7355351 DOI: 10.1155/2020/8620403] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/22/2020] [Revised: 05/24/2020] [Accepted: 06/08/2020] [Indexed: 11/17/2022]
Abstract
Brain tumors are among the deadliest diseases, with a high mortality rate. The shape and size of a tumor vary randomly during its growth. Brain tumor segmentation is a computer-aided diagnosis technique that separates brain tumor structures such as edema, active tumor and necrotic tissue from normal brain tissue. Magnetic resonance imaging (MRI) has the advantages of no radiation exposure for the human body, good depiction of structural tissues, and the ability to produce tomographic images in any orientation. Therefore, doctors often use MRI brain tumor images to analyze and process brain tumors. In these images, the tumor structure is characterized only by grayscale changes, and images obtained with different equipment and under different conditions may also differ. This makes it difficult for traditional image segmentation methods to deal well with the segmentation of brain tumor images. Because single-modality MRI brain tumor images contain incomplete tumor information, segmenting them is often insufficient to meet clinical needs. In this paper, a sparse subspace clustering (SSC) algorithm is introduced for the segmentation of multimodal MRI brain tumor images. In the absence of added noise, the proposed algorithm has clear advantages over traditional methods. Compared with the top 15 entries of the BraTS 2015 challenge, its accuracy is similar, ranking roughly between 10th and 15th place. To verify the noise robustness of the proposed algorithm, 5%, 10%, 15%, and 20% Gaussian noise was added to the test images. Experimental results show that the proposed algorithm has better noise immunity than a comparable algorithm.
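The core idea of sparse subspace clustering, expressing each sample as a sparse combination of the others and then clustering the induced affinity, can be sketched as below; the toy two-subspace data stand in for multimodal MRI features, and the Lasso-based self-representation is a simplified, illustrative choice rather than the paper's exact solver.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)

# Toy data: columns are samples drawn from two low-dimensional subspaces.
X = np.hstack([rng.normal(size=(30, 2)) @ rng.normal(size=(2, 40)),
               rng.normal(size=(30, 2)) @ rng.normal(size=(2, 40))])
n = X.shape[1]

# Sparse self-representation: each column as a sparse combination of the others.
C = np.zeros((n, n))
for i in range(n):
    others = np.delete(X, i, axis=1)
    coef = Lasso(alpha=0.01, max_iter=5000).fit(others, X[:, i]).coef_
    C[:, i] = np.insert(coef, i, 0.0)       # re-insert the excluded column as zero

# Symmetric affinity matrix followed by spectral clustering.
W = np.abs(C) + np.abs(C).T
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(W)
print(labels)
```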
Collapse
|
28
|
Shan H, Jia X, Yan P, Li Y, Paganetti H, Wang G. Synergizing medical imaging and radiotherapy with deep learning. MACHINE LEARNING-SCIENCE AND TECHNOLOGY 2020. [DOI: 10.1088/2632-2153/ab869f] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
|
29
|
Liu Y, Nacewicz BM, Zhao G, Adluru N, Kirk GR, Ferrazzano PA, Styner MA, Alexander AL. A 3D Fully Convolutional Neural Network With Top-Down Attention-Guided Refinement for Accurate and Robust Automatic Segmentation of Amygdala and Its Subnuclei. Front Neurosci 2020; 14:260. [PMID: 32508558 PMCID: PMC7253589 DOI: 10.3389/fnins.2020.00260] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2019] [Accepted: 03/09/2020] [Indexed: 12/17/2022] Open
Abstract
Recent advances in deep learning have improved the segmentation accuracy of subcortical brain structures, which would be useful in neuroimaging studies of many neurological disorders. However, most existing deep learning based approaches in neuroimaging do not investigate the specific difficulties that exist in segmenting extremely small but important brain regions such as the subnuclei of the amygdala. To tackle this challenging task, we developed a dual-branch dilated residual 3D fully convolutional network with parallel convolutions to extract more global context and alleviate the class imbalance issue by maintaining a small receptive field that is just the size of the regions of interest (ROIs). We also conduct multi-scale feature fusion in both parallel and serial fashion to compensate for the potential information loss during convolutions, which has been shown to be important for small objects. The serial feature fusion enabled by residual connections is further enhanced by a proposed top-down attention-guided refinement unit, where the high-resolution low-level spatial details are selectively integrated to complement the high-level but coarse semantic information, enriching the final feature representations. As a result, the segmentations resulting from our method are more accurate both volumetrically and morphologically, compared with other deep learning based approaches. To the best of our knowledge, this work is the first deep learning-based approach that targets the subregions of the amygdala. We also demonstrated the feasibility of using a cycle-consistent generative adversarial network (CycleGAN) to harmonize multi-site MRI data, and show that our method generalizes well to challenging traumatic brain injury (TBI) datasets collected from multiple centers. This appears to be a promising strategy for image segmentation in multi-site studies and for data with increased morphological variability due to significant brain pathology.
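The parallel dilated-convolution idea can be made concrete with a minimal PyTorch sketch of a residual 3D block that sums branches with different dilation rates; this is a simplified stand-in for illustration, not the published dual-branch architecture or its attention-guided refinement unit.

```python
import torch
import torch.nn as nn

class DilatedResidualBlock3D(nn.Module):
    """Residual 3D block with parallel dilated convolutions (simplified)."""
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv3d(channels, channels, kernel_size=3, padding=d, dilation=d),
                nn.InstanceNorm3d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])

    def forward(self, x):
        out = sum(branch(x) for branch in self.branches)  # fuse parallel branches
        return torch.relu(out + x)                        # residual connection

block = DilatedResidualBlock3D(channels=8)
patch = torch.randn(1, 8, 32, 32, 32)    # (batch, channels, depth, height, width)
print(block(patch).shape)                # torch.Size([1, 8, 32, 32, 32])
```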
Collapse
Affiliation(s)
- Yilin Liu
- Waisman Brain Imaging Laboratory, University of Wisconsin-Madison, Madison, WI, United States
| | - Brendon M. Nacewicz
- Department of Psychiatry, University of Wisconsin-Madison, Madison, WI, United States
| | - Gengyan Zhao
- Department of Medical Physics, University of Wisconsin-Madison, Madison, WI, United States
| | - Nagesh Adluru
- Waisman Brain Imaging Laboratory, University of Wisconsin-Madison, Madison, WI, United States
| | - Gregory R. Kirk
- Waisman Brain Imaging Laboratory, University of Wisconsin-Madison, Madison, WI, United States
| | - Peter A. Ferrazzano
- Waisman Brain Imaging Laboratory, University of Wisconsin-Madison, Madison, WI, United States
- Department of Pediatrics, University of Wisconsin-Madison, Madison, WI, United States
| | - Martin A. Styner
- Department of Psychiatry, University of North Carolina-Chapel Hill, Chapel Hill, NC, United States
- Department of Computer Science, University of North Carolina-Chapel Hill, Chapel Hill, NC, United States
| | - Andrew L. Alexander
- Waisman Brain Imaging Laboratory, University of Wisconsin-Madison, Madison, WI, United States
- Department of Psychiatry, University of Wisconsin-Madison, Madison, WI, United States
- Department of Medical Physics, University of Wisconsin-Madison, Madison, WI, United States
| |
Collapse
|
30
|
Chen G, Li Q, Shi F, Rekik I, Pan Z. RFDCR: Automated brain lesion segmentation using cascaded random forests with dense conditional random fields. Neuroimage 2020; 211:116620. [DOI: 10.1016/j.neuroimage.2020.116620] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2019] [Revised: 01/11/2020] [Accepted: 02/06/2020] [Indexed: 10/25/2022] Open
|
31
|
Chen X, You S, Tezcan KC, Konukoglu E. Unsupervised lesion detection via image restoration with a normative prior. Med Image Anal 2020; 64:101713. [PMID: 32492582 DOI: 10.1016/j.media.2020.101713] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2019] [Revised: 04/14/2020] [Accepted: 04/22/2020] [Indexed: 10/24/2022]
Abstract
Unsupervised lesion detection is a challenging problem that requires accurately estimating normative distributions of healthy anatomy and detecting lesions as outliers without training examples. Recently, this problem has received increased attention from the research community following the advances in unsupervised learning with deep learning. Such advances allow the estimation of high-dimensional distributions, such as normative distributions, with higher accuracy than previous methods. The main approach of the recently proposed methods is to learn a latent-variable model parameterized with networks to approximate the normative distribution using example images showing healthy anatomy, perform prior-projection, i.e. reconstruct the image with lesions using the latent-variable model, and determine lesions based on the differences between the reconstructed and original images. While being promising, the prior-projection step often leads to a large number of false positives. In this work, we approach unsupervised lesion detection as an image restoration problem and propose a probabilistic model that uses a network-based prior as the normative distribution and detects lesions pixel-wise using maximum a posteriori (MAP) estimation. The probabilistic model penalizes large deviations between restored and original images, reducing false positives in pixel-wise detections. Experiments with gliomas and stroke lesions in brain MRI using publicly available datasets show that the proposed approach outperforms the state-of-the-art unsupervised methods by a substantial margin, +0.13 (AUC), for both glioma and stroke detection. Extensive model analysis confirms the effectiveness of MAP-based image restoration.
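The MAP restoration step has a simple structure, a data-fidelity term plus a prior term minimized by gradient descent, which the sketch below illustrates in PyTorch; the untrained placeholder network stands in for the learned normative prior, so this shows only the optimization pattern, not the paper's model.

```python
import torch
import torch.nn as nn

# Untrained placeholder standing in for the learned normative prior network.
prior_net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 1, 3, padding=1))

y = torch.randn(1, 1, 64, 64)                 # observed image (with a lesion)
x = y.clone().requires_grad_(True)            # restored image, initialised at y
opt = torch.optim.Adam([x], lr=1e-2)
sigma2, lam = 0.1, 1.0

for _ in range(200):
    opt.zero_grad()
    data_term = ((x - y) ** 2).sum() / (2 * sigma2)   # penalise deviation from y
    prior_term = ((prior_net(x) - x) ** 2).sum()      # crude stand-in for -log p(x)
    (data_term + lam * prior_term).backward()
    opt.step()

detection_map = (x.detach() - y).abs()        # pixel-wise lesion evidence
print(detection_map.mean().item())
```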
Collapse
Affiliation(s)
- Xiaoran Chen
- Computer Vision Laboratory, ETH Zürich, Sternwartstrasse 7, Zürich, 8092, Switzerland.
| | - Suhang You
- ARTORG Center for Biomedical Engineering Research, University of Bern, Murtenstrasse 50, Bern, 3008, Switzerland.
| | - Kerem Can Tezcan
- Computer Vision Laboratory, ETH Zürich, Sternwartstrasse 7, Zürich, 8092, Switzerland.
| | - Ender Konukoglu
- Computer Vision Laboratory, ETH Zürich, Sternwartstrasse 7, Zürich, 8092, Switzerland.
| |
Collapse
|
32
|
Multiplex bioimaging of single-cell spatial profiles for precision cancer diagnostics and therapeutics. NPJ Precis Oncol 2020; 4:11. [PMID: 32377572 PMCID: PMC7195402 DOI: 10.1038/s41698-020-0114-1] [Citation(s) in RCA: 47] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2019] [Accepted: 03/05/2020] [Indexed: 12/13/2022] Open
Abstract
Cancers exhibit functional and structural diversity in distinct patients. Within the tumor mass, normal and malignant cells create a tumor microenvironment that is heterogeneous among patients. Residue from primary tumors leaks into the bloodstream as cell clusters and single cells, providing clues about disease progression and therapeutic response. The complexity of these hierarchical microenvironments needs to be elucidated. Although tumors comprise many cell types, the standard clinical technique is still histology, which is limited to a single marker. Multiplexed imaging technologies open new directions in pathology. Spatially resolved proteomic, genomic, and metabolic profiles of human cancers are now possible at the single-cell level. This perspective discusses spatial bioimaging methods to decipher the cascade of microenvironments in solid and liquid biopsies. A unique synthesis of top-down and bottom-up analysis methods is presented. Spatial multi-omics profiles can be tailored to precision oncology through artificial intelligence. Data-driven patient profiling enables personalized medicine and beyond.
Collapse
|
33
|
Pang S, Du A, Orgun MA, Yu Z, Wang Y, Wang Y, Liu G. CTumorGAN: a unified framework for automatic computed tomography tumor segmentation. Eur J Nucl Med Mol Imaging 2020; 47:2248-2268. [PMID: 32222809 DOI: 10.1007/s00259-020-04781-3] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2019] [Accepted: 03/19/2020] [Indexed: 01/05/2023]
Abstract
PURPOSE Unlike the normal organ segmentation task, automatic tumor segmentation is a more challenging task because of the existence of similar visual characteristics between tumors and their surroundings, especially on computed tomography (CT) images with severe low contrast resolution, as well as the diversity and individual characteristics of data acquisition procedures and devices. Consequently, most of the recently proposed methods have become increasingly difficult to apply to a different tumor dataset with good results, and moreover, some tumor segmentors usually fail to generalize beyond those datasets and modalities used in their original evaluation experiments. METHODS In order to alleviate some of the problems with the recently proposed methods, we propose a novel unified and end-to-end adversarial learning framework for automatic segmentation of any kind of tumor from CT scans, called CTumorGAN, consisting of a Generator network and a Discriminator network. Specifically, the Generator attempts to generate segmentation results that are close to their corresponding gold standards, while the Discriminator aims to distinguish between generated samples and real tumor ground truths. More importantly, we deliberately design different modules to take into account the well-known obstacles, e.g., severe class imbalance, small tumor localization, and the label noise problem with poor expert annotation quality, and then use these modules to guide the CTumorGAN training process by utilizing multi-level supervision more effectively. RESULTS We conduct a comprehensive evaluation of diverse loss functions for tumor segmentation and find that mean square error is more suitable for the CT tumor segmentation task. Furthermore, extensive experiments with multiple evaluation criteria on three well-established datasets, including lung tumor, kidney tumor, and liver tumor databases, also demonstrate that our CTumorGAN achieves stable and competitive performance compared with the state-of-the-art approaches for CT tumor segmentation. CONCLUSION In order to overcome those key challenges arising from CT datasets and solve some of the main problems existing in the current deep learning-based methods, we propose a novel unified CTumorGAN framework, which can be effectively generalized to address any kind of tumor dataset with superior performance.
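The generator/discriminator interplay and the MSE-plus-adversarial objective described above can be sketched in a few lines of PyTorch; the two tiny networks and the random tensors below are deliberately toy-sized placeholders, so this is an illustration of the training pattern rather than CTumorGAN itself.

```python
import torch
import torch.nn as nn

# Tiny stand-ins for the segmentation Generator and the Discriminator.
G = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())
D = nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                  nn.Linear(8, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
mse, bce = nn.MSELoss(), nn.BCELoss()

ct = torch.randn(4, 1, 64, 64)                     # toy CT slices
gt = (torch.rand(4, 1, 64, 64) > 0.8).float()      # toy ground-truth masks
ones, zeros = torch.ones(4, 1), torch.zeros(4, 1)

for _ in range(5):
    # Discriminator step: real (CT, GT) pairs vs generated (CT, prediction) pairs.
    pred = G(ct).detach()
    d_loss = bce(D(torch.cat([ct, gt], 1)), ones) + bce(D(torch.cat([ct, pred], 1)), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: MSE to the ground truth plus an adversarial term.
    pred = G(ct)
    g_loss = mse(pred, gt) + 0.1 * bce(D(torch.cat([ct, pred], 1)), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```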
Collapse
Affiliation(s)
- Shuchao Pang
- Department of Computing, Macquarie University, Sydney, NSW, 2109, Australia
| | - Anan Du
- School of Electrical and Data Engineering, University of Technology Sydney, Ultimo, NSW, 2007, Australia
| | - Mehmet A Orgun
- Department of Computing, Macquarie University, Sydney, NSW, 2109, Australia. .,Faculty of Information Technology, Macau University of Science and Technology, Avenida Wai Long, Taipa, 999078, Macau, China.
| | - Zhenmei Yu
- School of Data and Computer Science, Shandong Women's University, Jinan, 250014, China
| | - Yunyun Wang
- Department of Anesthesiology, China-Japan Union Hospital of Jilin University, Changchun, 130012, China
| | - Yan Wang
- Department of Computing, Macquarie University, Sydney, NSW, 2109, Australia
| | - Guanfeng Liu
- Department of Computing, Macquarie University, Sydney, NSW, 2109, Australia
| |
Collapse
|
34
|
Brugnara G, Isensee F, Neuberger U, Bonekamp D, Petersen J, Diem R, Wildemann B, Heiland S, Wick W, Bendszus M, Maier-Hein K, Kickingereder P. Automated volumetric assessment with artificial neural networks might enable a more accurate assessment of disease burden in patients with multiple sclerosis. Eur Radiol 2020; 30:2356-2364. [PMID: 31900702 DOI: 10.1007/s00330-019-06593-y] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2019] [Revised: 11/09/2019] [Accepted: 11/13/2019] [Indexed: 11/25/2022]
Abstract
OBJECTIVES Patients with multiple sclerosis (MS) regularly undergo MRI for assessment of disease burden. However, interpretation may be time consuming and prone to intra- and interobserver variability. Here, we evaluate the potential of artificial neural networks (ANN) for automated volumetric assessment of MS disease burden and activity on MRI. METHODS A single-institutional dataset with 334 MS patients (334 MRI exams) was used to develop and train an ANN for automated identification and volumetric segmentation of T2/FLAIR-hyperintense and contrast-enhancing (CE) lesions. Independent testing was performed in a single-institutional longitudinal dataset with 82 patients (266 MRI exams). We evaluated lesion detection performance (F1 scores), lesion segmentation agreement (DICE coefficients), and lesion volume agreement (concordance correlation coefficients [CCC]). Independent evaluation was performed on the public ISBI-2015 challenge dataset. RESULTS The F1 score was maximized in the training set at a detection threshold of 7 mm3 for T2/FLAIR lesions and 14 mm3 for CE lesions. In the training set, mean F1 scores were 0.867 for T2/FLAIR lesions and 0.636 for CE lesions, as compared to 0.878 for T2/FLAIR lesions and 0.715 for CE lesions in the test set. Using these thresholds, the ANN yielded mean DICE coefficients of 0.834 and 0.878 for segmentation of T2/FLAIR and CE lesions in the training set (fivefold cross-validation). Corresponding DICE coefficients in the test set were 0.846 for T2/FLAIR lesions and 0.908 for CE lesions, and the CCC was ≥ 0.960 in each dataset. CONCLUSIONS Our results highlight the capability of ANN for quantitative state-of-the-art assessment of volumetric lesion load on MRI and potentially enable a more accurate assessment of disease burden in patients with MS. KEY POINTS • Artificial neural networks (ANN) can accurately detect and segment both T2/FLAIR and contrast-enhancing MS lesions in MRI data. • Performance of the ANN was consistent in a clinically derived dataset, with patients presenting all possible disease stages in MRI scans acquired from standard clinical routine rather than with high-quality research sequences. • Computer-aided evaluation of MS with ANN could streamline both clinical and research procedures in the volumetric assessment of MS disease burden as well as in lesion detection.
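The evaluation metrics reported above, the Dice coefficient and the concordance correlation coefficient, are simple to compute; a minimal NumPy sketch follows, with small made-up volume values purely for illustration.

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def concordance_correlation(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between two measurement series."""
    mx, my = x.mean(), y.mean()
    cov = np.mean((x - mx) * (y - my))
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)

auto_vol = np.array([3.1, 5.4, 0.9, 7.2])      # e.g. network lesion volumes (ml)
manual_vol = np.array([3.0, 5.8, 1.1, 7.0])    # corresponding manual volumes (ml)
print(concordance_correlation(auto_vol, manual_vol))
```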
Collapse
Affiliation(s)
- Gianluca Brugnara
- Department of Neuroradiology, University of Heidelberg Medical Center, Heidelberg, Germany
| | - Fabian Isensee
- Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Ulf Neuberger
- Department of Neuroradiology, University of Heidelberg Medical Center, Heidelberg, Germany
| | - David Bonekamp
- Department of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Jens Petersen
- Department of Neuroradiology, University of Heidelberg Medical Center, Heidelberg, Germany
- Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Ricarda Diem
- Department of Neurology, University of Heidelberg Medical Center, Heidelberg, Germany
| | - Brigitte Wildemann
- Department of Neurology, University of Heidelberg Medical Center, Heidelberg, Germany
| | - Sabine Heiland
- Department of Neuroradiology, University of Heidelberg Medical Center, Heidelberg, Germany
| | - Wolfgang Wick
- Department of Neurology, University of Heidelberg Medical Center, Heidelberg, Germany
- Clinical Cooperation Unit Neurooncology, German Cancer Consortium (DKTK), DKFZ, Heidelberg, Germany
| | - Martin Bendszus
- Department of Neuroradiology, University of Heidelberg Medical Center, Heidelberg, Germany
| | - Klaus Maier-Hein
- Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Philipp Kickingereder
- Department of Neuroradiology, University of Heidelberg Medical Center, Heidelberg, Germany.
| |
Collapse
|
35
|
Nalepa J, Ribalta Lorenzo P, Marcinkiewicz M, Bobek-Billewicz B, Wawrzyniak P, Walczak M, Kawulok M, Dudzik W, Kotowski K, Burda I, Machura B, Mrukwa G, Ulrych P, Hayball MP. Fully-automated deep learning-powered system for DCE-MRI analysis of brain tumors. Artif Intell Med 2020; 102:101769. [DOI: 10.1016/j.artmed.2019.101769] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2018] [Revised: 10/28/2019] [Accepted: 11/20/2019] [Indexed: 02/01/2023]
|
36
|
Cetin O, Seymen V, Sakoglu U. Multiple sclerosis lesion detection in multimodal MRI using simple clustering-based segmentation and classification. INFORMATICS IN MEDICINE UNLOCKED 2020. [DOI: 10.1016/j.imu.2020.100409] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
|
37
|
Salem M, Valverde S, Cabezas M, Pareto D, Oliver A, Salvi J, Rovira À, Lladó X. A fully convolutional neural network for new T2-w lesion detection in multiple sclerosis. NEUROIMAGE-CLINICAL 2019; 25:102149. [PMID: 31918065 PMCID: PMC7036701 DOI: 10.1016/j.nicl.2019.102149] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/09/2019] [Revised: 12/23/2019] [Accepted: 12/26/2019] [Indexed: 11/17/2022]
Abstract
A deep learning model for new T2-w lesion detection in multiple sclerosis is presented. Combining a learning-based registration network with a segmentation network increases performance. The proposed model decreases false positives while increasing true positives. Better performance compared to other supervised and unsupervised state-of-the-art approaches.
Introduction: Longitudinal magnetic resonance imaging (MRI) has an important role in multiple sclerosis (MS) diagnosis and follow-up. Specifically, the presence of new T2-w lesions on brain MR scans is considered a predictive biomarker for the disease. In this study, we propose a fully convolutional neural network (FCNN) to detect new T2-w lesions in longitudinal brain MR images. Methods: One year apart, multichannel brain MR scans (T1-w, T2-w, PD-w, and FLAIR) were obtained for 60 patients, 36 of them with new T2-w lesions. Modalities from both temporal points were preprocessed and linearly coregistered. Afterwards, an FCNN, whose inputs were from the baseline and follow-up images, was trained to detect new MS lesions. The first part of the network consisted of U-Net blocks that learned the deformation fields (DFs) and nonlinearly registered the baseline image to the follow-up image for each input modality. The learned DFs together with the baseline and follow-up images were then fed to the second part, another U-Net that performed the final detection and segmentation of new T2-w lesions. The model was trained end-to-end, simultaneously learning both the DFs and the new T2-w lesions, using a combined loss function. We evaluated the performance of the model following a leave-one-out cross-validation scheme. Results: In terms of the detection of new lesions, we obtained a mean Dice similarity coefficient of 0.83 with a true positive rate of 83.09% and a false positive detection rate of 9.36%. In terms of segmentation, we obtained a mean Dice similarity coefficient of 0.55. The performance of our model was significantly better compared to the state-of-the-art methods (p < 0.05). Conclusions: Our proposal shows the benefits of combining a learning-based registration network with a segmentation network. Compared to other methods, the proposed model decreases the number of false positives. During testing, the proposed model operates faster than the other two state-of-the-art methods based on the DF obtained by Demons.
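To make the deformation-field (DF) step more tangible, the sketch below shows how a displacement field can warp a baseline slice onto the follow-up space with torch.nn.functional.grid_sample; it covers only the resampling operation, with a constant toy displacement, not the learned U-Net registration described in the paper.

```python
import torch
import torch.nn.functional as F

size = 64
baseline = torch.randn(1, 1, size, size)     # baseline slice (2D for brevity)
flow = torch.zeros(1, size, size, 2)         # predicted displacement field, in pixels
flow[..., 0] = 3.0                           # toy field: shift every pixel 3 px along x

# Identity sampling grid in normalised [-1, 1] coordinates, (x, y) ordering.
ys, xs = torch.meshgrid(torch.linspace(-1, 1, size),
                        torch.linspace(-1, 1, size), indexing="ij")
identity = torch.stack([xs, ys], dim=-1).unsqueeze(0)   # (1, H, W, 2)

# Convert pixel displacements to the normalised range and resample.
norm_flow = flow * 2.0 / (size - 1)
warped = F.grid_sample(baseline, identity + norm_flow, align_corners=True)
print(warped.shape)                           # torch.Size([1, 1, 64, 64])
```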
Collapse
Affiliation(s)
- Mostafa Salem
- Research Institute of Computer Vision and Robotics, University of Girona, Spain; Computer Science Department, Faculty of Computers and Information, Assiut University, Egypt.
| | - Sergi Valverde
- Research Institute of Computer Vision and Robotics, University of Girona, Spain
| | - Mariano Cabezas
- Research Institute of Computer Vision and Robotics, University of Girona, Spain
| | - Deborah Pareto
- Magnetic Resonance Unit, Dept of Radiology, Vall d'Hebron University Hospital, Spain
| | - Arnau Oliver
- Research Institute of Computer Vision and Robotics, University of Girona, Spain
| | - Joaquim Salvi
- Research Institute of Computer Vision and Robotics, University of Girona, Spain
| | - Àlex Rovira
- Magnetic Resonance Unit, Dept of Radiology, Vall d'Hebron University Hospital, Spain
| | - Xavier Lladó
- Research Institute of Computer Vision and Robotics, University of Girona, Spain
| |
Collapse
|
38
|
Sundaresan V, Zamboni G, Le Heron C, Rothwell PM, Husain M, Battaglini M, De Stefano N, Jenkinson M, Griffanti L. Automated lesion segmentation with BIANCA: Impact of population-level features, classification algorithm and locally adaptive thresholding. Neuroimage 2019; 202:116056. [PMID: 31376518 PMCID: PMC6996003 DOI: 10.1016/j.neuroimage.2019.116056] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2019] [Revised: 06/19/2019] [Accepted: 07/24/2019] [Indexed: 11/24/2022] Open
Abstract
White matter hyperintensities (WMH) or white matter lesions exhibit high variability in their characteristics both at population- and subject-level, making their detection a challenging task. Population-level factors such as age, vascular risk factors and neurodegenerative diseases affect lesion load and spatial distribution. At the individual level, WMH vary in contrast, amount and distribution in different white matter regions. In this work, we aimed to improve BIANCA, the FSL tool for WMH segmentation, in order to better deal with these sources of variability. We worked on two stages of BIANCA by improving the lesion probability map estimation (classification stage) and making the lesion probability map thresholding stage automated and adaptive to local lesion probabilities. Firstly, in order to take into account the effect of population-level factors, we included population-level lesion probabilities, modelled with respect to a parametric factor (e.g. age), in the classification stage. Secondly, we tested BIANCA performance when using four alternative classifiers commonly used in the literature with respect to the K-nearest neighbour algorithm (currently used for lesion probability map estimation in BIANCA). Finally, we propose LOCally Adaptive Threshold Estimation (LOCATE), a supervised method for determining optimal local thresholds to apply to the estimated lesion probability map, as an alternative option to global thresholding (i.e. applying the same threshold to the entire lesion probability map). For these experiments we used data from a neurodegenerative cohort, a vascular cohort and the cohorts available publicly as a part of a segmentation challenge. We observed that including population-level parametric lesion probabilities with respect to age and using alternative machine learning techniques provided negligible improvement. However, LOCATE provided a substantial improvement in the lesion segmentation performance, when compared to the global thresholding. It allowed more deep lesions to be detected and provided better segmentation of periventricular lesion boundaries, despite the differences in the lesion spatial distribution and load across datasets. We further validated LOCATE on a cohort of CADASIL (Cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy) patients, a genetic form of cerebral small vessel disease, and healthy controls, showing that LOCATE adapts well to wide variations in lesion load and spatial distribution.
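As a rough illustration of why local thresholds can help, here is a minimal, unsupervised sketch that binarises a lesion probability map block by block instead of globally; LOCATE itself is supervised and more sophisticated, so the function below is only a stand-in, and the random map is a placeholder for a BIANCA output.

```python
import numpy as np

def locally_adaptive_threshold(prob_map: np.ndarray, block: int = 32,
                               offset: float = 0.0) -> np.ndarray:
    """Binarise a probability map with one threshold per local block
    (each block's mean probability plus an offset) rather than a single
    global cut-off for the whole map."""
    out = np.zeros_like(prob_map, dtype=bool)
    for i in range(0, prob_map.shape[0], block):
        for j in range(0, prob_map.shape[1], block):
            tile = prob_map[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = tile > (tile.mean() + offset)
    return out

prob = np.random.rand(128, 128)               # placeholder lesion probability map slice
lesion_mask = locally_adaptive_threshold(prob, block=32, offset=0.1)
print(lesion_mask.sum())
```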
Collapse
Affiliation(s)
- Vaanathi Sundaresan
- Wellcome Centre for Integrative Neuroimaging, Oxford Centre for Functional MRI of the Brain, Nuffield Department of Clinical Neurosciences, University of Oxford, UK; Oxford-Nottingham Centre for Doctoral Training in Biomedical Imaging, University of Oxford, UK; Oxford India Centre for Sustainable Development, Somerville College, University of Oxford, UK.
| | - Giovanna Zamboni
- Wellcome Centre for Integrative Neuroimaging, Oxford Centre for Functional MRI of the Brain, Nuffield Department of Clinical Neurosciences, University of Oxford, UK; Centre for Prevention of Stroke and Dementia, Nuffield Department of Clinical Neurosciences, University of Oxford, UK
| | - Campbell Le Heron
- Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; New Zealand Brain Research Institute, Christchurch 8011, New Zealand
| | - Peter M Rothwell
- Centre for Prevention of Stroke and Dementia, Nuffield Department of Clinical Neurosciences, University of Oxford, UK
| | - Masud Husain
- Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Department of Experimental Psychology, University of Oxford, Oxford, UK; Wellcome Centre for Integrative NeuroImaging, University of Oxford, UK
| | - Marco Battaglini
- Department of Medicine, Surgery and Neuroscience, University of Siena, Siena, Italy
| | - Nicola De Stefano
- Department of Medicine, Surgery and Neuroscience, University of Siena, Siena, Italy
| | - Mark Jenkinson
- Wellcome Centre for Integrative Neuroimaging, Oxford Centre for Functional MRI of the Brain, Nuffield Department of Clinical Neurosciences, University of Oxford, UK
| | - Ludovica Griffanti
- Wellcome Centre for Integrative Neuroimaging, Oxford Centre for Functional MRI of the Brain, Nuffield Department of Clinical Neurosciences, University of Oxford, UK
| |
Collapse
|
39
|
Ribalta Lorenzo P, Nalepa J, Bobek-Billewicz B, Wawrzyniak P, Mrukwa G, Kawulok M, Ulrych P, Hayball MP. Segmenting brain tumors from FLAIR MRI using fully convolutional neural networks. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2019; 176:135-148. [PMID: 31200901 DOI: 10.1016/j.cmpb.2019.05.006] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/07/2018] [Revised: 04/05/2019] [Accepted: 05/10/2019] [Indexed: 06/09/2023]
Abstract
BACKGROUND AND OBJECTIVE Magnetic resonance imaging (MRI) is an indispensable tool in diagnosing brain-tumor patients. Automated tumor segmentation is being widely researched to accelerate the MRI analysis and allow clinicians to precisely plan treatment; accurate delineation of brain tumors is a critical step in assessing their volume, shape, boundaries, and other characteristics. However, it is still a very challenging task due to inherent MR data characteristics and high variability, e.g., in tumor sizes or shapes. We present a new deep learning approach for accurate brain tumor segmentation which can be trained from small and heterogeneous datasets annotated by a human reader (providing high-quality ground-truth segmentation is very costly in practice). METHODS In this paper, we present a new deep learning technique for segmenting brain tumors from fluid attenuation inversion recovery MRI. Our technique exploits fully convolutional neural networks, and it is equipped with a battery of augmentation techniques that make the algorithm robust against low data quality and heterogeneity of small training sets. We train our models using only positive (tumorous) examples, due to the limited amount of available data. RESULTS Our algorithm was tested on a set of stage II-IV brain-tumor patients (image data collected using MAGNETOM Prisma 3T, Siemens). Rigorous experiments, backed up with statistical tests, revealed that our approach outperforms the state-of-the-art approach (utilizing hand-crafted features) in terms of segmentation accuracy, offers very fast training and instant segmentation (analysis of an image takes less than a second). Building our deep model is 1.3 times faster compared with extracting features for extremely randomized trees, and this training time can be controlled. Finally, we showed that too aggressive data augmentation may lead to deteriorated performance of the model, especially in the fixed-budget training (with maximum numbers of training epochs). CONCLUSIONS Our method yields better performance than the state-of-the-art method that utilizes hand-crafted features. In addition, our deep network can be effectively applied to difficult (small, imbalanced, and heterogeneous) datasets, offers controllable training time, and infers in real-time.
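The kind of augmentation battery mentioned in the abstract can be sketched with a few standard operations (flips, small rotations, intensity scaling); the function below is a generic illustration using NumPy and SciPy, not the authors' augmentation pipeline.

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)

def augment_slice(image: np.ndarray, mask: np.ndarray):
    """Randomly flip, rotate and rescale a FLAIR slice and its tumour mask."""
    if rng.random() < 0.5:                           # horizontal flip
        image, mask = np.fliplr(image), np.fliplr(mask)
    angle = rng.uniform(-10, 10)                     # small in-plane rotation
    image = rotate(image, angle, reshape=False, order=1)
    mask = rotate(mask.astype(float), angle, reshape=False, order=0) > 0.5
    return image * rng.uniform(0.9, 1.1), mask       # mild intensity scaling

img = rng.normal(size=(128, 128))                    # placeholder FLAIR slice
msk = np.zeros((128, 128), dtype=bool)
msk[40:60, 50:70] = True                             # placeholder tumour mask
aug_img, aug_msk = augment_slice(img, msk)
print(aug_img.shape, int(aug_msk.sum()))
```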
Collapse
Affiliation(s)
| | - Jakub Nalepa
- Future Processing, Bojkowska 37A, 44-100 Gliwice, Poland; Institute of Informatics, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland.
| | - Barbara Bobek-Billewicz
- Maria Sklodowska-Curie Memorial Cancer Center and Institute of Oncology, Wybrzeze Armii Krajowej 15, 44-102 Gliwice, Poland.
| | - Pawel Wawrzyniak
- Maria Sklodowska-Curie Memorial Cancer Center and Institute of Oncology, Wybrzeze Armii Krajowej 15, 44-102 Gliwice, Poland.
| | | | - Michal Kawulok
- Future Processing, Bojkowska 37A, 44-100 Gliwice, Poland; Institute of Informatics, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland.
| | - Pawel Ulrych
- Maria Sklodowska-Curie Memorial Cancer Center and Institute of Oncology, Wybrzeze Armii Krajowej 15, 44-102 Gliwice, Poland.
| | | |
Collapse
|
40
|
Li H, Parikh NA, Wang J, Merhar S, Chen M, Parikh M, Holland S, He L. Objective and Automated Detection of Diffuse White Matter Abnormality in Preterm Infants Using Deep Convolutional Neural Networks. Front Neurosci 2019; 13:610. [PMID: 31275101 PMCID: PMC6591530 DOI: 10.3389/fnins.2019.00610] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2019] [Accepted: 05/28/2019] [Indexed: 11/19/2022] Open
Abstract
Diffuse white matter abnormality (DWMA), or diffuse excessive high signal intensity, is observed in 50-80% of very preterm infants at term-equivalent age. It is subjectively defined as higher than normal signal intensity in periventricular and subcortical white matter in comparison to normal unmyelinated white matter on T2-weighted MRI images. Despite the well-documented presence of DWMA, it remains debatable whether DWMA represents pathological tissue injury or a transient developmental phenomenon. Manual tracing of DWMA exhibits poor reliability and reproducibility and unduly increases image processing time. Thus, objective and ideally automatic assessment is critical to accurately elucidate the biologic nature of DWMA. We propose a deep learning approach to automatically identify DWMA regions on T2-weighted MRI images. Specifically, we formulated DWMA detection as an image voxel classification task; that is, the voxels on T2-weighted images are treated as samples and exclusively assigned as DWMA or normal white matter voxel classes. To utilize the spatial information of individual voxels, small image patches centered on the given voxels are retrieved. A deep convolutional neural network (CNN) model was developed to differentiate DWMA and normal voxels. We tested our deep CNN in multiple validation experiments. First, we examined the DWMA detection accuracy of our CNN model using computer simulations. This was followed by in vivo assessments in a cohort of very preterm infants (N = 95) using cross-validation and holdout validation. Finally, we tested our approach on an independent preterm cohort (N = 28) to externally validate our model. Our deep CNN model achieved Dice similarity index values ranging from 0.85 to 0.99 for DWMA detection in the aforementioned validation experiments. Our proposed deep CNN model exhibited significantly better performance than other popular machine learning models. We present an objective and automated approach for accurately identifying DWMA that may facilitate the clinical diagnosis of DWMA in very preterm infants.
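The patch-based voxel classification formulation reduces to extracting small patches centred on candidate voxels and feeding them to a classifier; the sketch below shows only the patch-extraction step with made-up coordinates, leaving the CNN itself out for brevity.

```python
import numpy as np

def extract_patches(image: np.ndarray, centres, half: int = 16) -> np.ndarray:
    """Extract square patches centred on candidate voxels (2D for brevity);
    each patch would then be classified as DWMA or normal white matter."""
    patches = []
    for (r, c) in centres:
        patch = image[r - half:r + half, c - half:c + half]
        if patch.shape == (2 * half, 2 * half):      # skip patches falling off the edge
            patches.append(patch)
    return np.stack(patches)[..., None]              # add a channel axis for a CNN

rng = np.random.default_rng(0)
t2_slice = rng.normal(size=(256, 256))               # placeholder T2-weighted slice
centres = [(64, 64), (128, 90), (200, 150)]          # made-up candidate voxel positions
patches = extract_patches(t2_slice, centres)
print(patches.shape)                                 # (3, 32, 32, 1)
```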
Collapse
Affiliation(s)
- Hailong Li
- The Perinatal Institute, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, United States
| | - Nehal A. Parikh
- The Perinatal Institute, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, United States
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, United States
- Department of Pediatrics, Nationwide Children’s Hospital, Columbus, OH, United States
| | - Jinghua Wang
- Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, United States
| | - Stephanie Merhar
- The Perinatal Institute, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, United States
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, United States
| | - Ming Chen
- The Perinatal Institute, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, United States
- Department of Electronic Engineering and Computing Systems, University of Cincinnati, Cincinnati, OH, United States
| | - Milan Parikh
- The Perinatal Institute, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, United States
| | - Scott Holland
- Medpace Inc., Cincinnati, OH, United States
- Department of Physics, University of Cincinnati, Cincinnati, OH, United States
| | - Lili He
- The Perinatal Institute, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, United States
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, United States
| |
Collapse
|
41
|
Abstract
Manual image segmentation is a time-consuming task routinely performed in radiotherapy to identify each patient's targets and anatomical structures. The efficacy and safety of the radiotherapy plan require accurate segmentations, as these regions of interest are generally used to optimize and assess the quality of the plan. However, reports have shown that this process can be subject to significant inter- and intraobserver variability. Furthermore, the quality of the radiotherapy treatment, and subsequent analyses (i.e., radiomic, dosimetric), can depend on the accuracy of these manual segmentations. Automatic segmentation (or auto-segmentation) of targets and normal tissues is, therefore, preferable as it would address these challenges. Previously, auto-segmentation techniques have been clustered into 3 generations of algorithms, with multiatlas based and hybrid techniques (third generation) being considered the state of the art. More recently, however, the field of medical image segmentation has seen accelerated growth driven by advances in computer vision, particularly through the application of deep learning algorithms, suggesting we have entered the fourth generation of auto-segmentation algorithm development. In this paper, the authors review traditional (non-deep learning) algorithms particularly relevant for applications in radiotherapy. Concepts from deep learning are introduced, focusing on convolutional neural networks and fully convolutional networks, which are generally used for segmentation tasks. Furthermore, the authors provide a summary of deep learning auto-segmentation radiotherapy applications reported in the literature. Lastly, considerations for clinical deployment (commissioning and QA) of auto-segmentation software are provided.
Collapse
|
42
|
Zhang Z, Sejdić E. Radiological images and machine learning: Trends, perspectives, and prospects. Comput Biol Med 2019; 108:354-370. [PMID: 31054502 PMCID: PMC6531364 DOI: 10.1016/j.compbiomed.2019.02.017] [Citation(s) in RCA: 77] [Impact Index Per Article: 15.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2018] [Revised: 02/19/2019] [Accepted: 02/19/2019] [Indexed: 01/18/2023]
Abstract
The application of machine learning to radiological images is an increasingly active research area that is expected to grow in the next five to ten years. Recent advances in machine learning have the potential to recognize and classify complex patterns from different radiological imaging modalities such as x-rays, computed tomography, magnetic resonance imaging and positron emission tomography imaging. In many applications, machine learning based systems have shown comparable performance to human decision-making. The applications of machine learning are the key ingredients of future clinical decision making and monitoring systems. This review covers the fundamental concepts behind various machine learning techniques and their applications in several radiological imaging areas, such as medical image segmentation, brain function studies and neurological disease diagnosis, as well as computer-aided systems, image registration, and content-based image retrieval systems. We also briefly discuss current challenges and future directions regarding the application of machine learning in radiological imaging. By giving insight into how to take advantage of machine learning-powered applications, we expect that clinicians will be able to prevent and diagnose diseases more accurately and efficiently.
Collapse
Affiliation(s)
- Zhenwei Zhang
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, 15261, USA
| | - Ervin Sejdić
- Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, 15261, USA.
| |
Collapse
|
43
|
Agn M, Munck Af Rosenschöld P, Puonti O, Lundemann MJ, Mancini L, Papadaki A, Thust S, Ashburner J, Law I, Van Leemput K. A modality-adaptive method for segmenting brain tumors and organs-at-risk in radiation therapy planning. Med Image Anal 2019; 54:220-237. [PMID: 30952038 PMCID: PMC6554451 DOI: 10.1016/j.media.2019.03.005] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2018] [Revised: 03/14/2019] [Accepted: 03/21/2019] [Indexed: 12/25/2022]
Abstract
In this paper we present a method for simultaneously segmenting brain tumors and an extensive set of organs-at-risk for radiation therapy planning of glioblastomas. The method combines a contrast-adaptive generative model for whole-brain segmentation with a new spatial regularization model of tumor shape using convolutional restricted Boltzmann machines. We demonstrate experimentally that the method is able to adapt to image acquisitions that differ substantially from any available training data, ensuring its applicability across treatment sites; that its tumor segmentation accuracy is comparable to that of the current state of the art; and that it captures most organs-at-risk sufficiently well for radiation therapy planning purposes. The proposed method may be a valuable step towards automating the delineation of brain tumors and organs-at-risk in glioblastoma patients undergoing radiation therapy.
Collapse
Affiliation(s)
- Mikael Agn
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, Denmark.
| | - Per Munck Af Rosenschöld
- Radiation Physics, Department of Hematology, Oncology and Radiation Physics, Skåne University Hospital, Lund, Sweden
| | - Oula Puonti
- Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital Hvidovre, Denmark
| | - Michael J Lundemann
- Department of Oncology, Copenhagen University Hospital Rigshospitalet, Denmark
| | - Laura Mancini
- Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Institute of Neurology, University College London, UK; Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, UCLH NHS Foundation Trust, UK
| | - Anastasia Papadaki
- Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Institute of Neurology, University College London, UK; Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, UCLH NHS Foundation Trust, UK
| | - Steffi Thust
- Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Institute of Neurology, University College London, UK; Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, UCLH NHS Foundation Trust, UK
| | - John Ashburner
- Wellcome Centre for Human Neuroimaging, UCL Institute of Neurology, University College London, UK
| | - Ian Law
- Department of Clinical Physiology, Nuclear Medicine and PET, Copenhagen University Hospital Rigshospitalet, Denmark
| | - Koen Van Leemput
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, Denmark; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA
| |
Collapse
|
44
|
Ghribi O, Maalej A, Sellami L, Ben Slima M, Maalej MA, Ben Mahfoudh K, Dammak M, Mhiri C, Ben Hamida A. Advanced methodology for multiple sclerosis lesion exploring: Towards a computer aided diagnosis system. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2018.12.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
45
|
Damopoulos D, Lerch TD, Schmaranzer F, Tannast M, Chênes C, Zheng G, Schmid J. Segmentation of the proximal femur in radial MR scans using a random forest classifier and deformable model registration. Int J Comput Assist Radiol Surg 2019; 14:545-561. [PMID: 30604143 DOI: 10.1007/s11548-018-1899-z] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2018] [Accepted: 12/10/2018] [Indexed: 11/25/2022]
Abstract
BACKGROUND Radial 2D MRI scans of the hip are routinely used for the diagnosis of the cam type of femoroacetabular impingement (FAI) and of avascular necrosis (AVN) of the femoral head, both considered causes of hip joint osteoarthritis in young and active patients. A method for automated and accurate segmentation of the proximal femur from radial MRI scans could be very useful in both clinical routine and biomechanical studies. However, to our knowledge, no such method has been published before. PURPOSE The aims of this study are to develop a system for the segmentation of the proximal femur from radial MRI scans and to reconstruct its 3D model, which can be used for the diagnosis and planning of hip-preserving surgery. METHODS The proposed system relies on: (a) a random forest classifier and (b) the registration of a 3D template mesh of the femur to the radial slices based on a physically based deformable model. The inputs to the system are the radial slices and the manually specified positions of three landmarks. Our dataset consists of the radial MRI scans of 25 patients symptomatic of FAI or AVN and the accompanying manual segmentations of the femur, treated as the ground truth. RESULTS The achieved segmentation of the proximal femur has an average Dice similarity coefficient (DSC) of 96.37 ± 1.55%, an average symmetric mean absolute distance (SMAD) of 0.94 ± 0.39 mm and an average Hausdorff distance of 2.37 ± 1.14 mm. In the femoral head subregion, the average SMAD is 0.64 ± 0.18 mm and the average Hausdorff distance is 1.41 ± 0.56 mm. CONCLUSIONS We validated a semiautomated method for the segmentation of the proximal femur from radial MR scans. A 3D model of the proximal femur is also reconstructed, which can be used for the planning of hip-preserving surgery.
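A minimal sketch of the first stage, voxel-wise classification with a random forest, is shown below (illustrative only: the feature set, landmark handling, and the deformable-model registration of the template mesh are not reproduced, and all file names are hypothetical):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical pre-computed per-voxel features from the radial slices
# (e.g. raw and smoothed intensities, gradient magnitudes) and binary labels
# (1 = femur, 0 = background) derived from the manual segmentations.
X_train = np.load("train_voxel_features.npy")   # shape (n_voxels, n_features)
y_train = np.load("train_voxel_labels.npy")     # shape (n_voxels,)

clf = RandomForestClassifier(n_estimators=100, max_depth=20, n_jobs=-1)
clf.fit(X_train, y_train)

# Probabilistic femur map for an unseen radial slice; in the full pipeline this
# map would guide the physically based deformable registration of the 3D
# template femur mesh rather than being thresholded directly.
X_test = np.load("test_voxel_features.npy")
femur_probability = clf.predict_proba(X_test)[:, 1]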
Collapse
Affiliation(s)
- Dimitrios Damopoulos
- Institute for Surgical Technology and Biomechanics, University of Bern, Stauffacherstrasse 78, 3014, Bern, Switzerland.
| | - Till Dominic Lerch
- Department of Orthopaedic Surgery and Traumatology, Inselspital, University of Bern, Freiburgstrasse, 3010, Bern, Switzerland
| | - Florian Schmaranzer
- Department of Orthopaedic Surgery and Traumatology, Inselspital, University of Bern, Freiburgstrasse, 3010, Bern, Switzerland
| | - Moritz Tannast
- Department of Orthopaedic Surgery and Traumatology, Inselspital, University of Bern, Freiburgstrasse, 3010, Bern, Switzerland
| | - Christophe Chênes
- School of Health Sciences - Geneva, HES-SO University of Applied Sciences and Arts Western Switzerland, Avenue de Champel 47, 1206, Geneva, Switzerland
| | - Guoyan Zheng
- Institute for Surgical Technology and Biomechanics, University of Bern, Stauffacherstrasse 78, 3014, Bern, Switzerland.
| | - Jérôme Schmid
- School of Health Sciences - Geneva, HES-SO University of Applied Sciences and Arts Western Switzerland, Avenue de Champel 47, 1206, Geneva, Switzerland
| |
Collapse
|
46
|
Gros C, De Leener B, Badji A, Maranzano J, Eden D, Dupont SM, Talbott J, Zhuoquiong R, Liu Y, Granberg T, Ouellette R, Tachibana Y, Hori M, Kamiya K, Chougar L, Stawiarz L, Hillert J, Bannier E, Kerbrat A, Edan G, Labauge P, Callot V, Pelletier J, Audoin B, Rasoanandrianina H, Brisset JC, Valsasina P, Rocca MA, Filippi M, Bakshi R, Tauhid S, Prados F, Yiannakas M, Kearney H, Ciccarelli O, Smith S, Treaba CA, Mainero C, Lefeuvre J, Reich DS, Nair G, Auclair V, McLaren DG, Martin AR, Fehlings MG, Vahdat S, Khatibi A, Doyon J, Shepherd T, Charlson E, Narayanan S, Cohen-Adad J. Automatic segmentation of the spinal cord and intramedullary multiple sclerosis lesions with convolutional neural networks. Neuroimage 2019; 184:901-915. [PMID: 30300751 PMCID: PMC6759925 DOI: 10.1016/j.neuroimage.2018.09.081] [Citation(s) in RCA: 130] [Impact Index Per Article: 26.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2018] [Revised: 09/05/2018] [Accepted: 09/28/2018] [Indexed: 12/12/2022] Open
Abstract
The spinal cord is frequently affected by atrophy and/or lesions in multiple sclerosis (MS) patients. Segmentation of the spinal cord and lesions from MRI data provides measures of damage, which are key criteria for the diagnosis, prognosis, and longitudinal monitoring of MS. Automating this operation eliminates inter-rater variability and increases the efficiency of high-throughput analysis pipelines. Robust and reliable segmentation across multi-site spinal cord data is challenging because of the large variability related to acquisition parameters and image artifacts. In particular, a precise delineation of lesions is hindered by a broad heterogeneity of lesion contrast, size, location, and shape. The goal of this study was to develop a fully-automatic framework - robust to variability in both image parameters and clinical condition - for segmentation of the spinal cord and intramedullary MS lesions from conventional MRI data of MS and non-MS cases. Scans of 1042 subjects (459 healthy controls, 471 MS patients, and 112 with other spinal pathologies) were included in this multi-site study (30 sites). Data spanned three contrasts (T1-, T2-, and T2∗-weighted) for a total of 1943 volumes and featured large heterogeneity in terms of resolution, orientation, coverage, and clinical conditions. The proposed automatic cord and lesion segmentation approach is based on a sequence of two Convolutional Neural Networks (CNNs). To deal with the very small proportion of spinal cord and/or lesion voxels compared to the rest of the volume, a first CNN with 2D dilated convolutions detects the spinal cord centerline, followed by a second CNN with 3D convolutions that segments the spinal cord and/or lesions. The CNNs were trained independently with the Dice loss. When compared against manual segmentation, our CNN-based approach showed a median Dice of 95% vs. 88% for PropSeg (p ≤ 0.05), a state-of-the-art spinal cord segmentation method. Regarding lesion segmentation on MS data, our framework provided a Dice of 60%, a relative volume difference of -15%, and a lesion-wise detection sensitivity and precision of 83% and 77%, respectively. In this study, we introduce a robust method to segment the spinal cord and intramedullary MS lesions on a variety of MRI contrasts. The proposed framework is open-source and readily available in the Spinal Cord Toolbox.
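The Dice loss used to train both CNNs has a simple differentiable form; a minimal NumPy sketch of the soft Dice loss (a generic formulation, not the authors' Spinal Cord Toolbox implementation) is:

import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    # pred:   predicted foreground probabilities in [0, 1], any shape
    # target: binary ground-truth mask of the same shape (1 = cord/lesion voxel)
    intersection = np.sum(pred * target)
    dice = (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)
    return 1.0 - dice  # 0 when prediction and ground truth overlap perfectly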
Collapse
Affiliation(s)
- Charley Gros
- NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, QC, Canada
| | - Benjamin De Leener
- NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, QC, Canada
| | - Atef Badji
- NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, QC, Canada
- Department of Neuroscience, Faculty of Medicine, University of Montreal, Montreal, QC, Canada
| | - Josefina Maranzano
- McConnell Brain Imaging Centre, Montreal Neurological Institute, Montreal, Canada
| | - Dominique Eden
- NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, QC, Canada
| | - Sara M. Dupont
- NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, QC, Canada
- Department of Radiology and Biomedical Imaging, Zuckerberg San Francisco General Hospital, University of California, San Francisco, CA, USA
| | - Jason Talbott
- Department of Radiology and Biomedical Imaging, Zuckerberg San Francisco General Hospital, University of California, San Francisco, CA, USA
| | - Ren Zhuoquiong
- Department of Radiology, Xuanwu Hospital, Capital Medical University, Beijing 100053, P. R. China
| | - Yaou Liu
- Department of Radiology, Xuanwu Hospital, Capital Medical University, Beijing 100053, P. R. China
- Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100050, P. R. China
| | - Tobias Granberg
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, USA
| | - Russell Ouellette
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, USA
| | | | | | | | - Lydia Chougar
- Juntendo University Hospital, Tokyo, Japan
- Hospital Cochin, Paris, France
| | - Leszek Stawiarz
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
| | - Jan Hillert
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
| | - Elise Bannier
- CHU Rennes, Radiology Department
- Univ Rennes, Inria, CNRS, Inserm, IRISA UMR 6074, Visages U1128, France
| | - Anne Kerbrat
- Univ Rennes, Inria, CNRS, Inserm, IRISA UMR 6074, Visages U1128, France
- CHU Rennes, Neurology Department
| | - Gilles Edan
- Univ Rennes, Inria, CNRS, Inserm, IRISA UMR 6074, Visages U1128, France
- CHU Rennes, Neurology Department
| | - Pierre Labauge
- MS Unit, Department of Neurology, University Hospital of Montpellier
| | - Virginie Callot
- Aix Marseille Univ, CNRS, CRMBM, Marseille, France
- APHM, CHU Timone, CEMEREM, Marseille, France
| | - Jean Pelletier
- APHM, CHU Timone, CEMEREM, Marseille, France
- Department of Neurology, CHU Timone, APHM, Marseille
| | - Bertrand Audoin
- APHM, CHU Timone, CEMEREM, Marseille, France
- Department of Neurology, CHU Timone, APHM, Marseille
| | | | - Jean-Christophe Brisset
- Observatoire Français de la Sclérose en Plaques (OFSEP) ; Univ Lyon, Université Claude Bernard Lyon 1 ; Hospices Civils de Lyon ; CREATIS-LRMN, UMR 5220 CNRS & U 1044 INSERM ; Lyon, France
| | - Paola Valsasina
- Neuroimaging Research Unit, INSPE, Division of Neuroscience, San Raffaele Scientific Institute, Vita-Salute San Raffaele University, Milan, Italy
| | - Maria A. Rocca
- Neuroimaging Research Unit, INSPE, Division of Neuroscience, San Raffaele Scientific Institute, Vita-Salute San Raffaele University, Milan, Italy
| | - Massimo Filippi
- Neuroimaging Research Unit, INSPE, Division of Neuroscience, San Raffaele Scientific Institute, Vita-Salute San Raffaele University, Milan, Italy
| | - Rohit Bakshi
- Brigham and Women’s Hospital, Harvard Medical School, Boston, USA
| | - Shahamat Tauhid
- Brigham and Women’s Hospital, Harvard Medical School, Boston, USA
| | - Ferran Prados
- Queen Square MS Centre, UCL Institute of Neurology, Faculty of Brain Sciences, University College London, London (UK)
- Center for Medical Image Computing (CMIC), Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
| | - Marios Yiannakas
- Queen Square MS Centre, UCL Institute of Neurology, Faculty of Brain Sciences, University College London, London (UK)
| | - Hugh Kearney
- Queen Square MS Centre, UCL Institute of Neurology, Faculty of Brain Sciences, University College London, London (UK)
| | - Olga Ciccarelli
- Queen Square MS Centre, UCL Institute of Neurology, Faculty of Brain Sciences, University College London, London (UK)
| | | | | | - Caterina Mainero
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, USA
| | - Jennifer Lefeuvre
- National Institute of Neurological Disorders and Stroke, National Institutes of Health, Maryland, USA
| | - Daniel S. Reich
- National Institute of Neurological Disorders and Stroke, National Institutes of Health, Maryland, USA
| | - Govind Nair
- National Institute of Neurological Disorders and Stroke, National Institutes of Health, Maryland, USA
| | | | | | - Allan R. Martin
- Division of Neurosurgery, Department of Surgery, University of Toronto, Toronto, ON, Canada
| | - Michael G. Fehlings
- Division of Neurosurgery, Department of Surgery, University of Toronto, Toronto, ON, Canada
| | - Shahabeddin Vahdat
- Functional Neuroimaging Unit, CRIUGM, Université de Montréal, Montreal, QC, Canada
- Neurology Department, Stanford University, US
| | - Ali Khatibi
- McConnell Brain Imaging Centre, Montreal Neurological Institute, Montreal, Canada
- Functional Neuroimaging Unit, CRIUGM, Université de Montréal, Montreal, QC, Canada
| | - Julien Doyon
- McConnell Brain Imaging Centre, Montreal Neurological Institute, Montreal, Canada
- Functional Neuroimaging Unit, CRIUGM, Université de Montréal, Montreal, QC, Canada
| | | | | | - Sridar Narayanan
- McConnell Brain Imaging Centre, Montreal Neurological Institute, Montreal, Canada
| | - Julien Cohen-Adad
- NeuroPoly Lab, Institute of Biomedical Engineering, Polytechnique Montreal, Montreal, QC, Canada
- Functional Neuroimaging Unit, CRIUGM, Université de Montréal, Montreal, QC, Canada
| |
Collapse
|
47
|
Van Opbroek A, Achterberg HC, Vernooij MW, De Bruijne M. Transfer Learning for Image Segmentation by Combining Image Weighting and Kernel Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:213-224. [PMID: 30047874 DOI: 10.1109/tmi.2018.2859478] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Many medical image segmentation methods are based on the supervised classification of voxels. Such methods generally perform well when provided with a training set that is representative of the test images to segment. However, problems may arise when training and test data follow different distributions, for example, due to differences in scanners, scanning protocols, or patient groups. Under such conditions, weighting training images according to their distribution similarity to the test data has been shown to greatly improve performance. However, this assumes that a part of the training data is representative of the test data; it does not make unrepresentative data more similar. We therefore investigate kernel learning as a way to reduce differences between training and test data and explore the added value of kernel learning for image weighting. We also propose a new image weighting method that minimizes the maximum mean discrepancy (MMD) between training and test data, which enables the joint optimization of image weights and kernel. Experiments on brain tissue, white matter lesion, and hippocampus segmentation show that both kernel learning and image weighting, when used separately, greatly improve performance on heterogeneous data. Here, MMD weighting obtains similar performance to previously proposed image weighting methods. Combining image weighting and kernel learning, optimized either individually or jointly, can give a small additional improvement in performance.
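The maximum mean discrepancy that drives the proposed image weighting can be estimated directly from feature samples; a minimal sketch with an RBF kernel (a generic empirical estimator, not the authors' code; the bandwidth sigma is an assumed free parameter) is:

import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # Pairwise RBF kernel between the rows of a (n x d) and b (m x d).
    d2 = (np.sum(a**2, axis=1)[:, None]
          + np.sum(b**2, axis=1)[None, :]
          - 2.0 * a @ b.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd_squared(x_train, x_test, sigma=1.0):
    # Biased empirical estimate of the squared MMD between two feature samples;
    # smaller values mean the training sample looks more like the test sample.
    k_tr = rbf_kernel(x_train, x_train, sigma)
    k_te = rbf_kernel(x_test, x_test, sigma)
    k_cross = rbf_kernel(x_train, x_test, sigma)
    return k_tr.mean() + k_te.mean() - 2.0 * k_cross.mean()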
Collapse
|
48
|
La Rosa F, Fartaria MJ, Kober T, Richiardi J, Granziera C, Thiran JP, Cuadra MB. Shallow vs Deep Learning Architectures for White Matter Lesion Segmentation in the Early Stages of Multiple Sclerosis. BRAINLESION: GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES 2019. [DOI: 10.1007/978-3-030-11723-8_14] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/04/2022]
|
49
|
Hashemi SR, Salehi SSM, Erdogmus D, Prabhu SP, Warfield SK, Gholipour A. Asymmetric Loss Functions and Deep Densely Connected Networks for Highly Imbalanced Medical Image Segmentation: Application to Multiple Sclerosis Lesion Detection. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2018; 7:721-1735. [PMID: 31528523 PMCID: PMC6746414 DOI: 10.1109/access.2018.2886371] [Citation(s) in RCA: 62] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/07/2023]
Abstract
Fully convolutional deep neural networks have been shown to be fast and precise frameworks with great potential in image segmentation. One of the major challenges in training such networks arises when the data are unbalanced, which is common in many medical imaging applications such as lesion segmentation, where lesion-class voxels are often far fewer in number than non-lesion voxels. A network trained with unbalanced data may make predictions with high precision and low recall, being severely biased towards the non-lesion class, which is particularly undesirable in most medical applications, where false negatives are more important than false positives. Various methods have been proposed to address this problem, including two-step training, sample re-weighting, balanced sampling, and, more recently, similarity loss functions and focal loss. In this work we trained fully convolutional deep neural networks using an asymmetric similarity loss function to mitigate the issue of data imbalance and achieve a much better trade-off between precision and recall. To this end, we developed a 3D fully convolutional densely connected network (FC-DenseNet) with large overlapping image patches as input and an asymmetric similarity loss layer based on the Tversky index (using Fβ scores). We used large overlapping image patches as inputs for intrinsic and extrinsic data augmentation, a patch selection algorithm, and a patch prediction fusion strategy using B-spline weighted soft voting to account for the uncertainty of prediction at patch borders. We applied this method to multiple sclerosis (MS) lesion segmentation based on two different datasets, MSSEG 2016 and the ISBI longitudinal MS lesion segmentation challenge, where we achieved average Dice similarity coefficients of 69.9% and 65.74%, respectively, achieving top performance in both challenges. We compared the performance of our network trained with the Fβ loss, focal loss, and generalized Dice loss (GDL) functions. Through September 2018 our network trained with focal loss ranked first according to the ISBI challenge overall score and resulted in the lowest reported lesion false positive rate among all submitted methods. Our network trained with the asymmetric similarity loss led to the lowest surface distance and the best lesion true positive rate, which is arguably the most important performance metric in a clinical decision support system for lesion detection. The asymmetric similarity loss function based on Fβ scores allows training networks that strike a better balance between precision and recall in highly unbalanced image segmentation. We achieved superior performance in MS lesion segmentation using a patchwise 3D FC-DenseNet with a patch prediction fusion strategy, trained with asymmetric similarity loss functions.
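The asymmetric similarity loss is built on the Tversky index, which trades off false positives against false negatives; a minimal NumPy sketch (a generic formulation rather than the authors' 3D FC-DenseNet training code; alpha and beta below are illustrative values) is:

import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    # pred:   predicted lesion probabilities in [0, 1]
    # target: binary lesion mask of the same shape
    # Setting beta > alpha penalizes false negatives more than false positives,
    # pushing the network towards higher recall on the rare lesion class.
    tp = np.sum(pred * target)
    fp = np.sum(pred * (1.0 - target))
    fn = np.sum((1.0 - pred) * target)
    tversky_index = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky_index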
Collapse
Affiliation(s)
- Seyed Raein Hashemi
- Computational Radiology Laboratory, Boston Children's Hospital, and Harvard Medical School, Boston MA 02115
- Computer and Information Science Department, Northeastern University, Boston, MA, 02115
| | - Seyed Sadegh Mohseni Salehi
- Computational Radiology Laboratory, Boston Children's Hospital, and Harvard Medical School, Boston MA 02115
- Electrical and Computer Engineering Department, Northeastern University, Boston, MA, 02115
| | - Deniz Erdogmus
- Electrical and Computer Engineering Department, Northeastern University, Boston, MA, 02115
| | - Sanjay P Prabhu
- Computational Radiology Laboratory, Boston Children's Hospital, and Harvard Medical School, Boston MA 02115
| | - Simon K Warfield
- Computational Radiology Laboratory, Boston Children's Hospital, and Harvard Medical School, Boston MA 02115
| | - Ali Gholipour
- Computational Radiology Laboratory, Boston Children's Hospital, and Harvard Medical School, Boston MA 02115
| |
Collapse
|
50
|
Li H, Jiang G, Zhang J, Wang R, Wang Z, Zheng WS, Menze B. Fully convolutional network ensembles for white matter hyperintensities segmentation in MR images. Neuroimage 2018; 183:650-665. [DOI: 10.1016/j.neuroimage.2018.07.005] [Citation(s) in RCA: 113] [Impact Index Per Article: 18.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/25/2017] [Revised: 06/30/2018] [Accepted: 07/02/2018] [Indexed: 11/30/2022] Open
|