101
Gong Z, Kan L. Segmentation and classification of renal tumors based on convolutional neural network. Journal of Radiation Research and Applied Sciences 2021. [DOI: 10.1080/16878507.2021.1984150]
Affiliation(s)
- Zheng Gong
- Department of Urinary Surgery, Shengjing Hospital of China Medical University, Shenyang, China
- Liang Kan
- Department of Geriatrics, Shengjing Hospital of China Medical University, Shenyang, China
102
A comparison of automated atrophy measures across the frontotemporal dementia spectrum: Implications for trials. NeuroImage: Clinical 2021; 32:102842. [PMID: 34626889] [PMCID: PMC8503665] [DOI: 10.1016/j.nicl.2021.102842]
Abstract
Background Frontotemporal dementia (FTD) is a common cause of young-onset dementia, and whilst there are currently no treatments, there are several promising candidates in development and early-phase trials. Comprehensive investigations of neuroimaging markers of disease progression across the full spectrum of FTD disorders are lacking and urgently needed to facilitate these trials. Objective To investigate the comparative performance of multiple automated segmentation and registration pipelines used to quantify longitudinal whole-brain atrophy across the clinical, genetic and pathological subgroups of FTD, in order to inform upcoming trials about suitable neuroimaging-based endpoints. Methods Seventeen fully automated techniques for extracting whole-brain atrophy measures were applied and directly compared in a cohort of 226 participants who had undergone longitudinal structural 3D T1-weighted imaging. Clinical diagnoses were behavioural variant FTD (n = 56) and primary progressive aphasia (PPA, n = 104), comprising semantic variant PPA (n = 38), non-fluent variant PPA (n = 42), logopenic variant PPA (n = 18), and PPA-not otherwise specified (n = 6). Forty-nine of these patients had either a known pathogenic mutation or postmortem confirmation of their underlying pathology, and 66 healthy controls were included for comparison. Sample size estimates to detect a 30% reduction in atrophy (80% power; 0.05 significance) were computed to explore the relative feasibility of these brain measures as surrogate markers of disease progression and their ability to detect putative disease-modifying treatment effects. Results Multiple automated techniques showed great promise, detecting significantly increased rates of whole-brain atrophy (p < 0.001) and requiring sample sizes of substantially fewer than 100 patients per treatment arm. Across the different FTD subgroups, direct measures of volume change consistently outperformed their indirect counterparts, irrespective of the initial segmentation quality. Significant differences in performance were found between both techniques and patient subgroups, highlighting the importance of informed biomarker choice based on the patient population of interest. Conclusion This work expands current knowledge and builds on the limited longitudinal investigations currently available in FTD, as well as providing valuable information about the potential of fully automated neuroimaging biomarkers for sporadic and genetic FTD trials.
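The sample-size estimates quoted above (a 30% slowing of atrophy at 80% power and 5% significance) follow the standard two-arm power calculation. A minimal sketch with illustrative atrophy-rate values (the paper's actual rates are not reproduced here):

```python
import math

def sample_size_per_arm(mean_rate, sd_rate, effect=0.30):
    """Patients per arm to detect a fractional slowing `effect` of the
    mean annualized atrophy rate, at 80% power and two-sided alpha = 0.05.

    Normal-approximation formula: n = 2 * ((z_a + z_b) * sd / delta)^2.
    """
    z_a = 1.959964  # Phi^{-1}(1 - 0.05/2)
    z_b = 0.841621  # Phi^{-1}(0.80)
    delta = effect * mean_rate  # absolute treatment effect on the rate
    return math.ceil(2 * ((z_a + z_b) * sd_rate / delta) ** 2)

# Illustrative (hypothetical) values: 1.5%/yr mean atrophy, SD 0.9%/yr
n = sample_size_per_arm(1.5, 0.9)  # -> 63 patients per arm
```

Measures with a tighter rate distribution relative to the treatment effect yield the smaller trial sizes the abstract reports.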
103
Lu Y, Zheng K, Li W, Wang Y, Harrison AP, Lin C, Wang S, Xiao J, Lu L, Kuo CF, Miao S. Contour Transformer Network for One-Shot Segmentation of Anatomical Structures. IEEE Transactions on Medical Imaging 2021; 40:2672-2684. [PMID: 33290215] [DOI: 10.1109/tmi.2020.3043375]
Abstract
Accurate segmentation of anatomical structures is vital for medical image analysis. State-of-the-art accuracy is typically achieved by supervised learning methods, for which gathering the requisite expert-labeled image annotations in a scalable manner remains a main obstacle. Therefore, annotation-efficient methods that can produce accurate anatomical structure segmentations are highly desirable. In this work, we present the Contour Transformer Network (CTN), a one-shot anatomy segmentation method with a naturally built-in human-in-the-loop mechanism. We formulate anatomy segmentation as a contour evolution process and model the evolution behavior with graph convolutional networks (GCNs). Training the CTN model requires only one labeled image exemplar and leverages additional unlabeled data through newly introduced loss functions that measure the global shape and appearance consistency of contours. On segmentation tasks of four different anatomies, we demonstrate that our one-shot learning method significantly outperforms non-learning-based methods and performs competitively with state-of-the-art fully supervised deep learning methods. With minimal human-in-the-loop editing feedback, the segmentation performance can be further improved to surpass the fully supervised methods.
104
A Segmentation Method of Foramen Ovale Based on Multiatlas. Computational and Mathematical Methods in Medicine 2021; 2021:5221111. [PMID: 34589137] [PMCID: PMC8476260] [DOI: 10.1155/2021/5221111]
Abstract
Trigeminal neuralgia is a neurological disease. It is often treated by puncturing the trigeminal nerve through the skin and the foramen ovale of the skull to selectively destroy the pain nerve. The puncture procedure is difficult because the morphology of the foramen ovale in the skull base varies and the surrounding anatomy is complex. Computer-aided puncture guidance is therefore extremely valuable for the treatment of trigeminal neuralgia: it can help doctors determine the puncture target by accurately locating the foramen ovale in the skull base. Segmentation of the foramen ovale is a prerequisite for localization but is a tedious and error-prone task if done manually. In this paper, we present an image segmentation solution based on the multi-atlas method that automatically segments the foramen ovale. We assembled a dataset of 30 CT scans, using 20 as foramen ovale atlases and 10 for testing. Our approach can perform foramen ovale segmentation in puncture-planning scenarios based on limited data alone, and we propose it as an enabler for clinical work.
105
Saito A, Wakabayashi H, Daisaki H, Yoshida A, Higashiyama S, Kawabe J, Shimizu A. Extraction of metastasis hotspots in a whole-body bone scintigram based on bilateral asymmetry. Int J Comput Assist Radiol Surg 2021; 16:2251-2260. [PMID: 34478048] [DOI: 10.1007/s11548-021-02488-w]
Abstract
PURPOSE A hotspot of a bone metastatic lesion in a whole-body bone scintigram is often observed as left-right asymmetry. The purpose of this study is to present a network that evaluates the bilateral difference of a whole-body bone scintigram, and to subsequently integrate it with our previous network that extracts hotspots from a pair of anterior and posterior images. METHODS The input of the proposed network is a pair of scintigrams: the original one and a version flipped with respect to the body axis. The paired scintigrams are processed by a butterfly-type network (BtrflyNet). Subsequently, the output of the network is combined with the output of another BtrflyNet for a pair of anterior and posterior scintigrams by employing a convolutional layer optimized on training images. RESULTS We evaluated the performance of the combined networks, which comprise two BtrflyNets followed by a convolutional layer for integration, in terms of accuracy of hotspot extraction using 1330 bone scintigrams of 665 patients with prostate cancer. A threefold cross-validation experiment showed that the number of false-positive regions was reduced from 4.30 to 2.13 for anterior and from 4.71 to 2.62 for posterior scintigrams on average compared with our previous model. CONCLUSIONS This study presented a network for hotspot extraction of bone metastatic lesions that evaluates the bilateral difference of a whole-body bone scintigram. When the network is combined with the previous network that extracts hotspots from a pair of anterior and posterior scintigrams, false positives are reduced by nearly half compared to our previous model.
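The bilateral input described in METHODS, the original scintigram paired with its mirror about the body axis, is straightforward to assemble. A hypothetical numpy sketch (not the authors' code):

```python
import numpy as np

def bilateral_pair(scintigram: np.ndarray) -> np.ndarray:
    """Stack a whole-body scintigram with its mirror about the body axis.

    scintigram: 2D array (rows = cranio-caudal, cols = left-right).
    Returns a (2, H, W) array: channel 0 original, channel 1 flipped,
    so a network can compare homologous left/right locations per pixel.
    """
    flipped = scintigram[:, ::-1]           # mirror across the body midline
    return np.stack([scintigram, flipped])  # (2, H, W) network input

img = np.arange(12, dtype=float).reshape(3, 4)
pair = bilateral_pair(img)
```

Feeding both channels lets a hotspot that has no contralateral counterpart stand out as an asymmetry cue.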
Affiliation(s)
- Atsushi Saito
- Institute of Engineering, Tokyo University of Agriculture and Technology, Koganei, Tokyo, Japan
- Hayato Wakabayashi
- Institute of Engineering, Tokyo University of Agriculture and Technology, Koganei, Tokyo, Japan
- Hiromitsu Daisaki
- Department of Radiological Technology, Gunma Prefectural College of Health Sciences, Maebashi, Gunma, Japan
- Atsushi Yoshida
- Department of Nuclear Medicine, Graduate School of Medicine, Osaka City University, Abeno-ku, Osaka, Japan
- Shigeaki Higashiyama
- Department of Nuclear Medicine, Graduate School of Medicine, Osaka City University, Abeno-ku, Osaka, Japan
- Joji Kawabe
- Department of Nuclear Medicine, Graduate School of Medicine, Osaka City University, Abeno-ku, Osaka, Japan
- Akinobu Shimizu
- Institute of Engineering, Tokyo University of Agriculture and Technology, Koganei, Tokyo, Japan
106
Casamitjana A, Mancini M, Iglesias JE. Synth-by-Reg (SbR): Contrastive learning for synthesis-based registration of paired images. Simulation and Synthesis in Medical Imaging: SASHIMI 2021 (MICCAI workshop); 12965:44-54. [PMID: 34778892] [PMCID: PMC8582976] [DOI: 10.1007/978-3-030-87592-3_5]
Abstract
Nonlinear inter-modality registration is often challenging due to the lack of objective functions that are good proxies for alignment. Here we propose a synthesis-by-registration method to convert this problem into an easier intra-modality task. We introduce a registration loss for weakly supervised image translation between domains that does not require perfectly aligned training data. This loss capitalises on a registration U-Net with frozen weights, to drive a synthesis CNN towards the desired translation. We complement this loss with a structure preserving constraint based on contrastive learning, which prevents blurring and content shifts due to overfitting. We apply this method to the registration of histological sections to MRI slices, a key step in 3D histology reconstruction. Results on two public datasets show improvements over registration based on mutual information (13% reduction in landmark error) and synthesis-based algorithms such as CycleGAN (11% reduction), and are comparable to registration with label supervision. Code and data are publicly available at https://github.com/acasamitjana/SynthByReg.
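The structure-preserving constraint is contrastive: features of a translated patch should match their own source location and no other. A toy numpy version of a PatchNCE-style loss illustrating the general idea (not the authors' implementation; their repository has the real one):

```python
import numpy as np

def patch_nce_loss(src_feats, tgt_feats, tau=0.07):
    """InfoNCE over N corresponding patch features of shape (N, D).

    Positive pair: same index in src/tgt; negatives: all other tgt patches.
    """
    src = src_feats / np.linalg.norm(src_feats, axis=1, keepdims=True)
    tgt = tgt_feats / np.linalg.norm(tgt_feats, axis=1, keepdims=True)
    logits = src @ tgt.T / tau                   # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))  # cross-entropy, labels = identity

rng = np.random.default_rng(0)
f = rng.normal(size=(8, 16))
aligned = patch_nce_loss(f, f)              # perfectly matched features
shuffled = patch_nce_loss(f, rng.permutation(f))  # mismatched locations
```

Minimizing such a loss penalizes content shifts, since a synthesized patch that drifts away from its source location stops matching its positive.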
Affiliation(s)
- Matteo Mancini
- Department of Neuroscience, University of Sussex, Brighton, UK
- NeuroPoly Lab, Polytechnique Montreal, Canada
- CUBRIC, Cardiff University, UK
- Juan Eugenio Iglesias
- Center for Medical Image Computing, University College London, UK
- Martinos Center for Biomedical Imaging, MGH and Harvard Medical School, USA
- Computer Science and AI Laboratory, Massachusetts Institute of Technology, USA
107
Plassard AJ, Bao S, McHugo M, Beason-Held L, Blackford JU, Heckers S, Landman BA. Automated, open-source segmentation of the hippocampus and amygdala with the Open Vanderbilt Archive of the Temporal Lobe. Magn Reson Imaging 2021; 81:17-23. [PMID: 33901584] [PMCID: PMC8715642] [DOI: 10.1016/j.mri.2021.04.011]
Abstract
Examining volumetric differences of the amygdala and anterior-posterior regions of the hippocampus is important for understanding cognition and clinical disorders. However, gold-standard manual segmentation of these structures is time- and labor-intensive. Automated, accurate, and reproducible techniques to segment the hippocampus and amygdala are therefore desirable. Here, we present a hierarchical approach to multi-atlas segmentation of the hippocampus head, body and tail and the amygdala based on atlases from 195 individuals. The Open Vanderbilt Archive of the Temporal Lobe (OVAL) segmentation technique outperforms the commonly used FreeSurfer, FSL FIRST, and whole-brain multi-atlas segmentation approaches for the full hippocampus and amygdala, and nears or exceeds inter-rater reproducibility for segmentation of the hippocampus head, body and tail. OVAL has been released as open source and is freely available.
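Multi-atlas segmentation of the kind used here propagates labels from many registered atlases and fuses them per voxel. The simplest fusion rule, majority voting, can be sketched as follows (a deliberately simple stand-in; the actual fusion step in such pipelines is usually more sophisticated):

```python
import numpy as np

def majority_vote(propagated_labels: np.ndarray) -> np.ndarray:
    """Fuse integer labels from A registered atlases, shape (A, n_voxels).

    Each atlas contributes one label per voxel after registration; the
    fused label is the most frequent one (ties -> lowest label id).
    """
    labels = np.asarray(propagated_labels)
    n_classes = labels.max() + 1
    # per-voxel histogram over atlases, then argmax over the class axis
    counts = np.stack([(labels == c).sum(axis=0) for c in range(n_classes)])
    return counts.argmax(axis=0)

votes = np.array([[0, 1, 2],
                  [0, 1, 1],
                  [1, 2, 1]])   # 3 atlases voting on 3 voxels
fused = majority_vote(votes)    # -> [0, 1, 1]
```

Hierarchical variants apply the same idea first to the whole structure and then to its subregions.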
Affiliation(s)
- Andrew J Plassard
- Vanderbilt University, Computer Science, 2301 Vanderbilt Place, Nashville, TN 37235, USA
- Shunxing Bao
- Vanderbilt University, Computer Science, 2301 Vanderbilt Place, Nashville, TN 37235, USA
- Maureen McHugo
- Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, 1601 23rd Avenue South, Nashville, TN 37212, USA
- Lori Beason-Held
- Laboratory of Behavioral Neuroscience, National Institute on Aging, NIH, 31 Center Dr, #5C27 MSC 2292, Building 31, Room 5C27, Bethesda, Maryland, 20892-0001, USA
- Jennifer U Blackford
- Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, 1601 23rd Avenue South, Nashville, TN 37212, USA
- Stephan Heckers
- Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, 1601 23rd Avenue South, Nashville, TN 37212, USA
- Bennett A Landman
- Vanderbilt University, Computer Science, 2301 Vanderbilt Place, Nashville, TN 37235, USA; Vanderbilt University, Electrical Engineering, 2301 Vanderbilt Place, Nashville, TN 37235, USA
108
Ananda A, Ngan KH, Karabağ C, Ter-Sarkisov A, Alonso E, Reyes-Aldasoro CC. Classification and Visualisation of Normal and Abnormal Radiographs; A Comparison between Eleven Convolutional Neural Network Architectures. Sensors (Basel) 2021; 21:5381. [PMID: 34450821] [PMCID: PMC8400172] [DOI: 10.3390/s21165381]
Abstract
This paper investigates the classification of radiographic images with eleven convolutional neural network (CNN) architectures (GoogleNet, VGG-19, AlexNet, SqueezeNet, ResNet-18, Inception-v3, ResNet-50, VGG-16, ResNet-101, DenseNet-201 and Inception-ResNet-v2). The CNNs were used to classify a series of wrist radiographs from the Stanford Musculoskeletal Radiographs (MURA) dataset into two classes, normal and abnormal. The architectures were compared across different hyper-parameters in terms of accuracy and Cohen's kappa coefficient, and the two best results were then explored with data augmentation. Without augmentation, the best results were obtained by Inception-ResNet-v2 (mean accuracy = 0.723, mean kappa = 0.506); with augmentation, these improved significantly (mean accuracy = 0.857, mean kappa = 0.703). Finally, Class Activation Mapping was applied to relate network activation to the location of an anomaly in the radiographs.
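Cohen's kappa, used here alongside accuracy, corrects raw agreement for the agreement expected by chance. A compact reference implementation of the standard formula (not the paper's code):

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """kappa = (p_o - p_e) / (1 - p_e) for two label sequences."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.union1d(y_true, y_pred)
    p_o = np.mean(y_true == y_pred)  # observed agreement
    # chance agreement from the two marginal label distributions
    p_e = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in classes)
    return (p_o - p_e) / (1 - p_e)

# e.g. normal/abnormal wrist labels coded as 0/1
k = cohens_kappa([0, 0, 1, 1], [0, 0, 1, 1])  # -> 1.0 (perfect agreement)
```

A kappa of 0 means no better than chance, which is why it is a stricter summary than accuracy on imbalanced normal/abnormal splits.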
Affiliation(s)
- Ananda Ananda
- giCentre, Department of Computer Science, School of Mathematics, Computer Science and Engineering, City, University of London, London EC1V 0HB, UK
- Kwun Ho Ngan
- giCentre, Department of Computer Science, School of Mathematics, Computer Science and Engineering, City, University of London, London EC1V 0HB, UK
- Cefa Karabağ
- giCentre, Department of Computer Science, School of Mathematics, Computer Science and Engineering, City, University of London, London EC1V 0HB, UK
- Aram Ter-Sarkisov
- CitAI Research Centre, Department of Computer Science, School of Mathematics, Computer Science and Engineering, City, University of London, London EC1V 0HB, UK
- Eduardo Alonso
- CitAI Research Centre, Department of Computer Science, School of Mathematics, Computer Science and Engineering, City, University of London, London EC1V 0HB, UK
- Constantino Carlos Reyes-Aldasoro
- giCentre, Department of Computer Science, School of Mathematics, Computer Science and Engineering, City, University of London, London EC1V 0HB, UK
109
Yan L, Liu D, Xiang Q, Luo Y, Wang T, Wu D, Chen H, Zhang Y, Li Q. PSP net-based automatic segmentation network model for prostate magnetic resonance imaging. Computer Methods and Programs in Biomedicine 2021; 207:106211. [PMID: 34134076] [DOI: 10.1016/j.cmpb.2021.106211]
Abstract
PURPOSE Prostate cancer is a common cancer. To improve the accuracy of early diagnosis, we propose a prostate magnetic resonance imaging (MRI) segmentation model based on the Pyramid Scene Parsing Network (PSP Net). METHOD A total of 270 prostate MRI images were collected and divided into data sets, with contrast-limited adaptive histogram equalization (CLAHE) applied for image enhancement. Using segmentation accuracy, under-segmentation rate, over-segmentation rate and the receiver operating characteristic (ROC) curve as evaluation indices, we compared the PSP Net-based model with models based on FCN and U-Net. RESULTS PSP Net achieved the highest segmentation accuracy (0.9865), with an over-segmentation rate of 0.0023 and an under-segmentation rate of 0.1111, lower than those of FCN and U-Net. The ROC curve of PSP Net is closest to the upper left corner, with an AUC of 0.9427, larger than that of FCN and U-Net. CONCLUSION Extensive experimental results show that the PSP Net-based automatic segmentation model for prostate MRI improves segmentation accuracy and relieves the workload of doctors, and it is worthy of further clinical promotion.
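The abstract does not spell out its over- and under-segmentation rates; one common formulation, assumed here purely for illustration, normalizes false-positive and false-negative voxels by the reference volume:

```python
import numpy as np

def segmentation_rates(pred: np.ndarray, ref: np.ndarray):
    """Over-/under-segmentation of a binary prediction vs. a reference mask.

    over  = FP / |ref|  (voxels segmented that are not in the reference)
    under = FN / |ref|  (reference voxels that were missed)
    """
    pred, ref = pred.astype(bool), ref.astype(bool)
    fp = np.logical_and(pred, ~ref).sum()
    fn = np.logical_and(~pred, ref).sum()
    n_ref = ref.sum()
    return fp / n_ref, fn / n_ref

ref = np.array([[0, 1, 1, 0]])
pred = np.array([[0, 1, 0, 1]])
over, under = segmentation_rates(pred, ref)  # FP=1, FN=1, |ref|=2
```

Under this reading, PSP Net's 0.0023 over-segmentation means almost no spurious voxels relative to the prostate volume, while 0.1111 under-segmentation means roughly one in nine prostate voxels is missed.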
Affiliation(s)
- Lingfei Yan
- Department of Urology, the Fifth Affiliated Hospital of Southern Medical University, Guangzhou, Guangdong 11100, China
- Dawei Liu
- Department of Urology, the Fifth Affiliated Hospital of Southern Medical University, Guangzhou, Guangdong 11100, China
- Qi Xiang
- Department of Urology, the Fifth Affiliated Hospital of Southern Medical University, Guangzhou, Guangdong 11100, China
- Yang Luo
- Department of Urology, the Fifth Affiliated Hospital of Southern Medical University, Guangzhou, Guangdong 11100, China
- Tao Wang
- Department of Urology, the Fifth Affiliated Hospital of Southern Medical University, Guangzhou, Guangdong 11100, China
- Dali Wu
- Department of Urology, the Fifth Affiliated Hospital of Southern Medical University, Guangzhou, Guangdong 11100, China
- Haiping Chen
- Department of Urology, the Fifth Affiliated Hospital of Southern Medical University, Guangzhou, Guangdong 11100, China
- Yu Zhang
- Department of Urology, the Fifth Affiliated Hospital of Southern Medical University, Guangzhou, Guangdong 11100, China
- Qing Li
- Department of Urology, the Fifth Affiliated Hospital of Southern Medical University, Guangzhou, Guangdong 11100, China
110
Spoor DS, Sijtsema NM, van den Bogaard VAB, van der Schaaf A, Brouwer CL, Ta BDP, Vliegenthart R, Kierkels RGJ, Langendijk JA, Maduro JH, Peters FBJ, Crijns APG. Validation of separate multi-atlases for auto segmentation of cardiac substructures in CT-scans acquired in deep inspiration breath hold and free breathing. Radiother Oncol 2021; 163:46-54. [PMID: 34343547] [DOI: 10.1016/j.radonc.2021.07.025]
Abstract
BACKGROUND AND PURPOSE Developing NTCP models for cardiac complications after breast cancer (BC) radiotherapy requires cardiac dose-volume parameters for many patients. These can be obtained by using multi-atlas-based automatic segmentation (MABAS) of cardiac structures in planning CT scans. We investigated the relevance of separate multi-atlases for deep inspiration breath hold (DIBH) and free breathing (FB) CT scans. MATERIALS AND METHODS BC patients scanned in DIBH (n = 10) and in FB (n = 20) were selected to create separate multi-atlases consisting of expert panel delineations of the whole heart, atria and ventricles. The accuracy of atlas-generated contours was validated against expert delineations in independent datasets (n = 10 for DIBH and FB) and reported as Dice coefficients, contour distances and dose-volume differences in relation to the interobserver variability of manual contours. The dependency of MABAS contouring accuracy on breathing technique was assessed by validating the FB atlas in DIBH patients and vice versa (cross-validation). RESULTS For all structures, the FB and DIBH atlases yielded Dice coefficients ≥ 0.8 with their respective reference contours and average contour distances ≤ 2 mm, smaller than the CT slice thickness. No significant differences were found in dose-volume parameters for volumes receiving relevant dose levels (WH, LV and RV). The accuracy of the DIBH atlas was at least similar to, and for the ventricles better than, the interobserver variation in manual delineation. Cross-validation between breathing techniques showed reduced MABAS performance. CONCLUSION Multi-atlas accuracy was at least similar to interobserver delineation variation. Separate atlases for scans made in DIBH and FB could benefit atlas performance because accuracy depends on breathing technique.
Affiliation(s)
- Daan S Spoor
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, The Netherlands
- Nanna M Sijtsema
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, The Netherlands
- Veerle A B van den Bogaard
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, The Netherlands
- Arjen van der Schaaf
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, The Netherlands
- Charlotte L Brouwer
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, The Netherlands
- Bastiaan D P Ta
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, The Netherlands
- Rozemarijn Vliegenthart
- Department of Radiology, University of Groningen, University Medical Center Groningen, The Netherlands
- Roel G J Kierkels
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, The Netherlands
- Johannes A Langendijk
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, The Netherlands
- John H Maduro
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, The Netherlands
- Femke B J Peters
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, The Netherlands
- Anne P G Crijns
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, The Netherlands
111
Fully Automatic Adaptive Meshing Based Segmentation of the Ventricular System for Augmented Reality Visualization and Navigation. World Neurosurg 2021; 156:e9-e24. [PMID: 34333157] [DOI: 10.1016/j.wneu.2021.07.099]
Abstract
OBJECTIVE Effective image segmentation of cerebral structures is fundamental to 3-dimensional techniques such as augmented reality. To be clinically viable, segmentation algorithms should be fully automatic and easily integrated in existing digital infrastructure. We created a fully automatic adaptive-meshing-based segmentation system for T1-weighted magnetic resonance images (MRI) to automatically segment the complete ventricular system, running in a cloud-based environment that can be accessed on an augmented reality device. This study aims to assess the accuracy and segmentation time of the system by comparing it to a manually segmented ground truth dataset. METHODS A ground truth (GT) dataset of 46 contrast-enhanced and non-contrast-enhanced T1-weighted MRI scans was manually segmented. These scans also were uploaded to our system to create a machine-segmented (MS) dataset. The GT data were compared with the MS data using the Sørensen-Dice similarity coefficient and 95% Hausdorff distance to determine segmentation accuracy. Furthermore, segmentation times for all GT and MS segmentations were measured. RESULTS Automatic segmentation was successful for 45 (98%) of 46 cases. Mean Sørensen-Dice similarity coefficient score was 0.83 (standard deviation [SD] = 0.08) and mean 95% Hausdorff distance was 19.06 mm (SD = 11.20). Segmentation time was significantly longer for the GT group (mean = 14,405 seconds, SD = 7,089) when compared with the MS group (mean = 1,275 seconds, SD = 714) with a mean difference of 13,130 seconds (95% confidence interval 10,130-16,130). CONCLUSIONS The described adaptive meshing-based segmentation algorithm provides accurate and time-efficient automatic segmentation of the ventricular system from T1 MRI scans and direct visualization of the rendered surface models in augmented reality.
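The two validation metrics used here, the Sørensen-Dice coefficient and the 95% Hausdorff distance, can be computed directly from binary masks. A brute-force numpy sketch (adequate for small masks; production pipelines typically use distance transforms):

```python
import numpy as np

def dice(a, b):
    """Sørensen-Dice overlap of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff95(a, b):
    """95th-percentile symmetric surface distance, in voxel units.

    Brute force over all foreground voxels: builds an O(N*M) distance
    matrix between the two coordinate sets.
    """
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # each voxel of a to its nearest voxel of b
    d_ba = d.min(axis=0)
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

a = np.zeros((8, 8)); a[2:5, 2:5] = 1
b = np.roll(a, 1, axis=1)       # same square, shifted one voxel right
scores = dice(a, b), hausdorff95(a, b)
```

Taking the 95th percentile instead of the maximum makes the distance robust to a few stray voxels, which is why it is preferred over the plain Hausdorff distance in segmentation validation.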
112
Gordon S, Kodner B, Goldfryd T, Sidorov M, Goldberger J, Raviv TR. An atlas of classifiers-a machine learning paradigm for brain MRI segmentation. Med Biol Eng Comput 2021; 59:1833-1849. [PMID: 34313921] [DOI: 10.1007/s11517-021-02414-x]
Abstract
We present the Atlas of Classifiers (AoC), a conceptually novel framework for brain MRI segmentation. The AoC is a spatial map of voxel-wise multinomial logistic regression (LR) functions learned from the labeled data. Upon convergence, the resulting fixed LR weights, a few for each voxel, represent the training dataset. It can, therefore, be considered a light-weight learning machine which, despite its low capacity, does not underfit the problem. The AoC construction is independent of the actual intensities of the test images, providing the flexibility to train it on the available labeled data and use it for the segmentation of images from different datasets and modalities. In this sense, it does not overfit the training data either. The proposed method has been applied to numerous publicly available datasets for the segmentation of brain MRI tissues and is shown to be robust to noise and to outperform commonly used methods. Promising results were also obtained for multi-modal, cross-modality MRI segmentation. Finally, we show how an AoC trained on brain MRIs of healthy subjects can be exploited for lesion segmentation of multiple sclerosis patients.
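At test time, the Atlas of Classifiers idea, a separate multinomial LR classifier at every voxel, reduces to a per-voxel softmax over tissue classes. A toy numpy sketch with random, purely hypothetical weights (not the authors' implementation):

```python
import numpy as np

def aoc_predict(features, weights, biases):
    """Per-voxel multinomial logistic regression.

    features: (V, D) per-voxel features (e.g. intensity and neighborhood).
    weights:  (V, D, C) a separate LR weight matrix for every voxel.
    biases:   (V, C) per-voxel class biases.
    Returns (V,) hard labels via a per-voxel softmax over C classes.
    """
    logits = np.einsum('vd,vdc->vc', features, weights) + biases
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    return probs.argmax(axis=1)

V, D, C = 4, 2, 3  # 4 voxels, 2 features, 3 tissue classes
rng = np.random.default_rng(1)
labels = aoc_predict(rng.normal(size=(V, D)),
                     rng.normal(size=(V, D, C)),
                     np.zeros((V, C)))
```

Because only the small weight tensor is stored, not the training images, the model stays light-weight yet spatially adaptive.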
Affiliation(s)
- Shiri Gordon
- The School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Boris Kodner
- The School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Tal Goldfryd
- The School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Michael Sidorov
- The School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Jacob Goldberger
- The Faculty of Electrical Engineering, Bar-Ilan University, Ramat Gan, Israel
- Tammy Riklin Raviv
- The School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
113
Yan Q, Wang B, Zhang W, Luo C, Xu W, Xu Z, Zhang Y, Shi Q, Zhang L, You Z. Attention-Guided Deep Neural Network With Multi-Scale Feature Fusion for Liver Vessel Segmentation. IEEE J Biomed Health Inform 2021; 25:2629-2642. [PMID: 33264097] [DOI: 10.1109/jbhi.2020.3042069]
Abstract
Liver vessel segmentation is fast becoming a key instrument in the diagnosis and surgical planning of liver diseases. In clinical practice, liver vessels are normally annotated manually by clinicians on each slice of CT images, which is extremely laborious. Several deep learning methods exist for liver vessel segmentation; however, improving segmentation performance remains a major challenge due to the large variations and complex structure of liver vessels. Previous methods mainly use the existing U-Net architecture, but not all features of the encoder are useful for segmentation and some even cause interference. To overcome this problem, we propose a novel deep neural network for liver vessel segmentation, called LVSNet, which employs special designs to obtain the accurate structure of the liver vessel. Specifically, we design an Attention-Guided Concatenation (AGC) module to adaptively select useful context features from low-level features guided by high-level features. The proposed AGC module focuses on capturing rich complementary information to obtain more details. In addition, we introduce an innovative multi-scale fusion block that constructs hierarchical residual-like connections within a single residual block, which is of great importance for effectively linking local blood vessel fragments together. Furthermore, we construct a new dataset containing 40 thin-slice cases (0.625 mm) consisting of CT volumes and annotated vessels. To evaluate the effectiveness of the method on minor vessels, we also propose an automatic stratification method to split major and minor liver vessels. Extensive experimental results demonstrate that the proposed LVSNet outperforms previous methods on liver vessel segmentation datasets. Additionally, we conduct a series of ablation studies that comprehensively support the superiority of the underlying concepts.
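The hierarchical residual-like connections inside a single block resemble a Res2Net-style split-and-cascade over channel groups. A numpy toy in which an elementwise nonlinearity stands in for each group's convolution (illustrative only, not the LVSNet code):

```python
import numpy as np

def multiscale_fusion(x, n_scales=4):
    """Hierarchical residual-like connections inside one block.

    x: (C, H, W) feature map with C divisible by n_scales. Each channel
    group receives the output of the previous group before its own
    transform, so later groups see progressively larger receptive fields.
    """
    groups = np.split(x, n_scales, axis=0)
    out = [groups[0]]                # first group passes through unchanged
    for g in groups[1:]:
        fused = g + out[-1]          # hierarchical residual link
        out.append(np.tanh(fused))   # stand-in for a 3x3 convolution
    return np.concatenate(out, axis=0)

x = np.random.default_rng(2).normal(size=(8, 4, 4))
y = multiscale_fusion(x)             # same shape, multi-scale-mixed channels
```

Cascading the groups is what lets a single block mix information at several effective scales, which helps bridge fragmented vessel segments.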
Collapse
|
114
|
Liu X, Li KW, Yang R, Geng LS. Review of Deep Learning Based Automatic Segmentation for Lung Cancer Radiotherapy. Front Oncol 2021; 11:717039. [PMID: 34336704 PMCID: PMC8323481 DOI: 10.3389/fonc.2021.717039] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2021] [Accepted: 06/21/2021] [Indexed: 12/14/2022] Open
Abstract
Lung cancer is the leading cause of cancer-related mortality in both males and females. Radiation therapy (RT) is one of the primary treatment modalities for lung cancer. While delivering the prescribed dose to tumor targets, it is essential to spare the tissues near the targets, the so-called organs at risk (OARs). Optimal RT planning benefits from accurate segmentation of the gross tumor volume and surrounding OARs. Manual segmentation is a time-consuming and tedious task for radiation oncologists, so it is crucial to develop automatic image segmentation to relieve them of the tedious contouring work. Currently, the atlas-based automatic segmentation technique is commonly used in clinical routine. However, this technique depends heavily on the similarity between the atlas and the image to be segmented. With significant advances made in computer vision, deep learning, as a branch of artificial intelligence, has attracted increasing attention in medical image automatic segmentation. In this article, we review deep learning based automatic segmentation techniques related to lung cancer and compare them with the atlas-based technique. At present, auto-segmentation of OARs with relatively large volume, such as the lung and heart, outperforms that of small organs such as the esophagus. The average Dice similarity coefficient (DSC) of the lung, heart, and liver is over 0.9, and the best DSC of the spinal cord reaches 0.9. However, the DSC of the esophagus ranges between 0.71 and 0.87, with uneven performance. For the gross tumor volume, the average DSC is below 0.8. Although deep learning based automatic segmentation techniques show significant superiority over manual segmentation in many aspects, various issues still need to be solved. We discuss potential issues in deep learning based automatic segmentation, including low contrast, dataset size, consensus guidelines, and network design, as well as clinical limitations and future research directions.
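The Dice similarity coefficient (DSC) quoted throughout this review is twice the overlap of two masks divided by the sum of their volumes; a minimal NumPy implementation of this standard metric:

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary masks of equal shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

# Two 4x4 masks of 8 voxels each, overlapping in 4 voxels.
a = np.zeros((4, 4), dtype=int); a[:2] = 1
b = np.zeros((4, 4), dtype=int); b[1:3] = 1
print(round(dice(a, b), 3))  # 0.5
```

DSC is 1.0 for identical masks and 0.0 for disjoint ones, which is why values above 0.9 (lung, heart, liver) indicate near-manual agreement while sub-0.8 values (gross tumor volume) do not.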
Collapse
Affiliation(s)
- Xi Liu
- School of Physics, Beihang University, Beijing, China
| | - Kai-Wen Li
- School of Physics, Beihang University, Beijing, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Key Laboratory of Big Data-Based Precision Medicine, Ministry of Industry and Information Technology, Beihang University, Beijing, China
| | - Ruijie Yang
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
| | - Li-Sheng Geng
- School of Physics, Beihang University, Beijing, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Medicine and Engineering, Key Laboratory of Big Data-Based Precision Medicine, Ministry of Industry and Information Technology, Beihang University, Beijing, China
- Beijing Key Laboratory of Advanced Nuclear Materials and Physics, Beihang University, Beijing, China
- School of Physics and Microelectronics, Zhengzhou University, Zhengzhou, China
| |
Collapse
|
115
|
Payette K, de Dumast P, Kebiri H, Ezhov I, Paetzold JC, Shit S, Iqbal A, Khan R, Kottke R, Grehten P, Ji H, Lanczi L, Nagy M, Beresova M, Nguyen TD, Natalucci G, Karayannis T, Menze B, Bach Cuadra M, Jakab A. An automatic multi-tissue human fetal brain segmentation benchmark using the Fetal Tissue Annotation Dataset. Sci Data 2021; 8:167. [PMID: 34230489 PMCID: PMC8260784 DOI: 10.1038/s41597-021-00946-3] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2020] [Accepted: 05/13/2021] [Indexed: 11/09/2022] Open
Abstract
It is critical to quantitatively analyse the developing human fetal brain in order to fully understand neurodevelopment in both normal fetuses and those with congenital disorders. To facilitate this analysis, automatic multi-tissue fetal brain segmentation algorithms are needed, which in turn require open datasets of segmented fetal brains. Here we introduce a publicly available dataset of 50 manually segmented pathological and non-pathological fetal magnetic resonance brain volume reconstructions across a range of gestational ages (20 to 33 weeks), segmented into 7 different tissue categories (external cerebrospinal fluid, grey matter, white matter, ventricles, cerebellum, deep grey matter, brainstem/spinal cord). In addition, we quantitatively evaluate the accuracy of several automatic multi-tissue segmentation algorithms for the developing human fetal brain. Four research groups participated, submitting a total of 10 algorithms, demonstrating the benefits of the dataset for the development of automatic algorithms.
Collapse
Affiliation(s)
- Kelly Payette
- Center for MR Research, University Children's Hospital Zurich, University of Zurich, Zurich, Switzerland.
- Neuroscience Center Zurich, University of Zurich/ETH Zurich, Zurich, Switzerland.
| | - Priscille de Dumast
- CIBM, Center for Biomedical Imaging, Lausanne, Switzerland
- Medical Image Analysis Laboratory, Department of Diagnostic and Interventional Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
| | - Hamza Kebiri
- CIBM, Center for Biomedical Imaging, Lausanne, Switzerland
- Medical Image Analysis Laboratory, Department of Diagnostic and Interventional Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
| | - Ivan Ezhov
- Image-Based Biomedical Imaging Group, Technical University of Munich, München, Germany
| | - Johannes C Paetzold
- Image-Based Biomedical Imaging Group, Technical University of Munich, München, Germany
| | - Suprosanna Shit
- Image-Based Biomedical Imaging Group, Technical University of Munich, München, Germany
| | - Asim Iqbal
- Neuroscience Center Zurich, University of Zurich/ETH Zurich, Zurich, Switzerland
- Brain Research Institute, University of Zurich, Zurich, Switzerland
- Center for Intelligent Systems & Brain Mind Institute, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland
| | - Romesa Khan
- Neuroscience Center Zurich, University of Zurich/ETH Zurich, Zurich, Switzerland
- Institute for Biomedical Engineering, UZH/ETH Zurich, Zurich, Switzerland
| | - Raimund Kottke
- Department of Diagnostic Imaging, University Children's Hospital Zurich, University of Zurich, Zurich, Switzerland
| | - Patrice Grehten
- Department of Diagnostic Imaging, University Children's Hospital Zurich, University of Zurich, Zurich, Switzerland
| | - Hui Ji
- Center for MR Research, University Children's Hospital Zurich, University of Zurich, Zurich, Switzerland
| | - Levente Lanczi
- Faculty of Medicine, Department of Medical Imaging, University of Debrecen, Debrecen, Hajdú-Bihar, Hungary
| | - Marianna Nagy
- Faculty of Medicine, Department of Medical Imaging, University of Debrecen, Debrecen, Hajdú-Bihar, Hungary
| | - Monika Beresova
- Faculty of Medicine, Department of Medical Imaging, University of Debrecen, Debrecen, Hajdú-Bihar, Hungary
| | - Thi Dao Nguyen
- Newborn Research, Department of Neonatology, University Hospital and University of Zurich, Zurich, Switzerland
| | - Giancarlo Natalucci
- Newborn Research, Department of Neonatology, University Hospital and University of Zurich, Zurich, Switzerland
- Larsson-Rosenquist Center for Neurodevelopment, Growth and Nutrition of the Newborn, Department of Neonatology, University Hospital and University of Zurich, Zurich, Switzerland
| | | | - Bjoern Menze
- Image-Based Biomedical Imaging Group, Technical University of Munich, München, Germany
| | - Meritxell Bach Cuadra
- CIBM, Center for Biomedical Imaging, Lausanne, Switzerland
- Medical Image Analysis Laboratory, Department of Diagnostic and Interventional Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
| | - Andras Jakab
- Center for MR Research, University Children's Hospital Zurich, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich/ETH Zurich, Zurich, Switzerland
| |
Collapse
|
116
|
Shi C, Xian M, Zhou X, Wang H, Cheng HD. Multi-slice low-rank tensor decomposition based multi-atlas segmentation: Application to automatic pathological liver CT segmentation. Med Image Anal 2021; 73:102152. [PMID: 34280669 DOI: 10.1016/j.media.2021.102152] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2021] [Revised: 06/02/2021] [Accepted: 06/27/2021] [Indexed: 12/24/2022]
Abstract
Liver segmentation from abdominal CT images is an essential step for liver cancer computer-aided diagnosis and surgical planning. However, both the accuracy and robustness of existing liver segmentation methods cannot meet the requirements of clinical applications. In particular, for the common clinical cases where the liver tissue contains major pathology, current segmentation methods show poor performance. In this paper, we propose a novel low-rank tensor decomposition (LRTD) based multi-atlas segmentation (MAS) framework that achieves accurate and robust pathological liver segmentation of CT images. Firstly, we propose a multi-slice LRTD scheme to recover the underlying low-rank structure embedded in 3D medical images. It performs the LRTD on small image segments consisting of multiple consecutive image slices. Then, we present an LRTD-based atlas construction method to generate tumor-free liver atlases that mitigates the performance degradation of liver segmentation due to the presence of tumors. Finally, we introduce an LRTD-based MAS algorithm to derive patient-specific liver atlases for each test image, and to achieve accurate pairwise image registration and label propagation. Extensive experiments on three public databases of pathological liver cases validate the effectiveness of the proposed method. Both qualitative and quantitative results demonstrate that, in the presence of major pathology, the proposed method is more accurate and robust than state-of-the-art methods.
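The multi-slice LRTD scheme itself is a tensor decomposition; as a rough single-step illustration of the underlying idea — recovering a low-rank structure from a stack of consecutive slices — a truncated-SVD sketch (not the authors' algorithm) might look like:

```python
import numpy as np

def low_rank_approx(slices, rank):
    """Low-rank approximation of a stack of consecutive image slices.
    slices: (n_slices, H, W); each slice is flattened into a matrix row."""
    n, h, w = slices.shape
    m = slices.reshape(n, h * w)
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    m_lr = (u[:, :rank] * s[:rank]) @ vt[:rank]  # keep leading singular components
    return m_lr.reshape(n, h, w)

# A rank-1 slice stack plus small noise: the low-rank part is recovered closely.
rng = np.random.default_rng(0)
base = np.outer(np.ones(5), rng.random(64)).reshape(5, 8, 8)
noisy = base + 0.001 * rng.random((5, 8, 8))
approx = low_rank_approx(noisy, rank=1)
print(np.abs(approx - base).max() < 0.05)  # True
```

Proper LRTD operates on the 3D tensor directly (e.g., Tucker or t-SVD factorizations) rather than a flattened matrix, but the goal — suppressing patient-specific perturbations such as tumors while keeping the shared anatomical structure — is the same.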
Collapse
Affiliation(s)
- Changfa Shi
- Mobile E-business Collaborative Innovation Center of Hunan Province, Hunan University of Technology and Business, Changsha 410205, China; Department of Computer Science, Utah State University, Logan, UT 84322, USA
| | - Min Xian
- Department of Computer Science, University of Idaho, Idaho Falls, ID 83402, USA.
| | - Xiancheng Zhou
- Mobile E-business Collaborative Innovation Center of Hunan Province, Hunan University of Technology and Business, Changsha 410205, China
| | - Haotian Wang
- Department of Computer Science, University of Idaho, Idaho Falls, ID 83402, USA
| | - Heng-Da Cheng
- Department of Computer Science, Utah State University, Logan, UT 84322, USA
| |
Collapse
|
117
|
Esophagus Segmentation in CT Images via Spatial Attention Network and STAPLE Algorithm. SENSORS 2021; 21:s21134556. [PMID: 34283090 PMCID: PMC8271959 DOI: 10.3390/s21134556] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/09/2021] [Revised: 06/24/2021] [Accepted: 06/28/2021] [Indexed: 11/16/2022]
Abstract
One essential step in radiotherapy treatment planning is the segmentation of organs at risk in Computed Tomography (CT) images. Many recent studies have focused on organs such as the lung, heart, esophagus, trachea, liver, aorta, kidney, and prostate. Among these, however, the esophagus is one of the most difficult organs to segment because of its small size, ambiguous boundary, and very low contrast in CT images. To address these challenges, we propose a fully automated framework for esophagus segmentation from CT images. The proposed method processes slice images from the original three-dimensional (3D) image, so it does not require large computational resources. We employ a spatial attention mechanism with an atrous spatial pyramid pooling module to locate the esophagus effectively, which enhances segmentation performance. To optimize our model, we use group normalization because its computation is independent of batch size and its performance is stable. We also use the simultaneous truth and performance level estimation (STAPLE) algorithm to obtain robust segmentation results: the model is first trained with k-fold cross-validation, and the candidate labels generated by each fold are then combined using the STAPLE algorithm, which improves both Dice and Hausdorff distance scores. Our method was evaluated on the SegTHOR and StructSeg 2019 datasets, and the experiments show that it outperforms state-of-the-art methods in esophagus segmentation, which remains challenging in medical image analysis.
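STAPLE estimates a consensus segmentation and per-rater performance parameters jointly via expectation-maximization. A simplified binary-label sketch (flat prior, no spatial model — the full algorithm is more elaborate):

```python
import numpy as np

def staple(decisions, prior=0.5, n_iter=50):
    """Simplified binary STAPLE. decisions: (n_raters, n_voxels) in {0,1}.
    Returns consensus foreground probabilities w and per-rater
    sensitivities p and specificities q."""
    d = decisions.astype(float)
    n_raters = d.shape[0]
    p = np.full(n_raters, 0.9)  # initial sensitivities
    q = np.full(n_raters, 0.9)  # initial specificities
    for _ in range(n_iter):
        # E-step: posterior probability that each voxel is foreground.
        a = prior * np.prod(np.where(d == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(d == 0, q[:, None], 1 - q[:, None]), axis=0)
        w = a / (a + b + 1e-12)
        # M-step: re-estimate each rater's sensitivity and specificity.
        p = (w * d).sum(axis=1) / (w.sum() + 1e-12)
        q = ((1 - w) * (1 - d)).sum(axis=1) / ((1 - w).sum() + 1e-12)
    return w, p, q

# Three "raters" (here: candidate labels from three folds) on 6 voxels;
# raters 1-2 agree, rater 3 is noisy.
votes = np.array([[1, 1, 1, 0, 0, 0],
                  [1, 1, 1, 0, 0, 0],
                  [1, 0, 1, 1, 0, 0]])
w, p, q = staple(votes)
print((w > 0.5).astype(int))  # [1 1 1 0 0 0]
```

Unlike simple majority voting, STAPLE down-weights the unreliable rater automatically, which is why fusing the per-fold predictions improves the final scores.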
Collapse
|
118
|
Zarvani M, Saberi S, Azmi R, Shojaedini SV. Residual Learning: A New Paradigm to Improve Deep Learning-Based Segmentation of the Left Ventricle in Magnetic Resonance Imaging Cardiac Images. JOURNAL OF MEDICAL SIGNALS & SENSORS 2021; 11:159-168. [PMID: 34466395 PMCID: PMC8382035 DOI: 10.4103/jmss.jmss_38_20] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2020] [Revised: 03/10/2021] [Accepted: 03/28/2021] [Indexed: 11/08/2022]
Abstract
BACKGROUND Recently, magnetic resonance imaging (MRI) has become a useful tool for the early detection of heart failure. A vital step of this process is valid measurement of the left ventricle's properties, which depends critically on accurate segmentation of the heart in captured images. Although various schemes have been tested for this segmentation so far, the latest proposed methods use the concept of deep learning to estimate the extent of the left ventricle in cardiac MRI images. While deep learning methods can lead to better results than their classical alternatives, gradient vanishing and exploding problems may hamper their efficiency for accurate segmentation of the left ventricle in MRI heart images. METHODS In this article, the concept of residual learning is utilized to improve the performance of deep learning schemes against gradient vanishing problems. For this purpose, the Residual Network of Residual Network (i.e., residual-of-residual) substructure is utilized inside the main deep learning architecture (e.g., U-Net), which provides more significant detection indexes. RESULTS AND CONCLUSION The performance of the proposed method and its alternatives was evaluated on the Sunnybrook Cardiac Data, a reliable benchmark for left ventricle segmentation. The results show that the detection parameters improve by at least 5%, 3.5%, 8.1%, and 11.4% compared to the deep alternatives in terms of the Jaccard, Dice, precision, and false-positive rate indexes, respectively. These improvements come at the cost of only a negligible reduction in recall (approximately 1%). Overall, the proposed method can be used as a suitable tool for more accurate detection of the left ventricle in MRI images.
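The core of residual learning is the identity shortcut, y = F(x) + x, which lets gradients bypass the transformation F; a residual-of-residual structure adds a second, outer shortcut around a group of such blocks. A minimal NumPy sketch of the idea (not the paper's architecture):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = relu(F(x) + x): the identity shortcut lets signal (and, in
    training, gradient) bypass the learned transform F."""
    return relu(x @ w1 @ w2 + x)  # simplified linear F, no normalization

def residual_of_residual(x, weights):
    """Stack residual blocks and add an outer (level-2) shortcut
    around the whole group."""
    out = x
    for w1, w2 in weights:
        out = residual_block(out, w1, w2)
    return out + x  # outer shortcut

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4))
weights = [(0.1 * rng.standard_normal((4, 4)),
            0.1 * rng.standard_normal((4, 4))) for _ in range(2)]
y = residual_of_residual(x, weights)
print(y.shape)  # (2, 4)
```

Because every shortcut contributes the identity term to the backward pass, the gradient of the loss with respect to x never has to pass solely through the (possibly near-zero) Jacobian of F, which is the mechanism that mitigates vanishing gradients.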
Collapse
Affiliation(s)
- Maral Zarvani
- Faculty of Computer Engineering, Alzahra University, Tehran, Iran
| | - Sara Saberi
- Faculty of Computer Engineering, Alzahra University, Tehran, Iran
| | - Reza Azmi
- Faculty of Computer Engineering, Alzahra University, Tehran, Iran
| | | |
Collapse
|
119
|
Ying Y, Wang H, Chen H, Cheng J, Gu H, Shao Y, Duan Y, Feng A, Feng W, Fu X, Quan H, Xu Z. A novel specific grading standard study of auto-segmentation of organs at risk in thorax: subjective-objective-combined grading standard. Biomed Eng Online 2021; 20:54. [PMID: 34082755 PMCID: PMC8173789 DOI: 10.1186/s12938-021-00890-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2020] [Accepted: 05/24/2021] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND To develop a novel subjective-objective-combined (SOC) grading standard for auto-segmentation of each organ at risk (OAR) in the thorax. METHODS A radiation oncologist manually delineated 13 thoracic OARs from computed tomography (CT) images of 40 patients. OAR auto-segmentation accuracy was graded by five geometric objective indexes, including the Dice similarity coefficient (DSC), the difference of the Euclidean distance between centers of mass (ΔCMD), the difference of volume (ΔV), maximum Hausdorff distance (MHD), and average Hausdorff distance (AHD). The grading results were compared with those of the corresponding geometric indexes obtained by geometric objective methods at the other two centers. OAR auto-segmentation accuracy was also graded by our subjective evaluation standard, and these grading results were compared with those of DSC. Based on the subjective evaluation standard and the five geometric indexes, the correspondence between the subjective evaluation level and the geometric index range was established for each OAR. RESULTS For ΔCMD, ΔV, and MHD, the grading results of the geometric objective evaluation methods at our center and the other two centers were inconsistent; for DSC and AHD, the grading results of the three centers were consistent. The grading results of seven OARs under the subjective evaluation standard were inconsistent with those of DSC, while those of the other six OARs were consistent. Finally, we propose a new evaluation method that combines the subjective evaluation level of each OAR with the range of the corresponding DSC to determine the grading standard: if the DSC ranges between adjacent levels do not overlap, the DSC range is used as the grading standard; otherwise, the mean value of DSC is used. CONCLUSIONS A novel OAR-specific SOC grading standard for the thorax was developed. The SOC grading standard provides a possible alternative for evaluating the auto-segmentation accuracy of thoracic OARs.
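The maximum and average Hausdorff distances used as grading indexes above can be computed directly from contour point sets; a small NumPy sketch:

```python
import numpy as np

def hausdorff(a, b):
    """Maximum and average (symmetric) Hausdorff distance between
    two point sets a (n, d) and b (m, d)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (n, m) pairwise
    d_ab = d.min(axis=1)  # each point in a to its nearest point in b
    d_ba = d.min(axis=0)  # each point in b to its nearest point in a
    mhd = max(d_ab.max(), d_ba.max())            # maximum Hausdorff distance
    ahd = (d_ab.mean() + d_ba.mean()) / 2.0      # average Hausdorff distance
    return mhd, ahd

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [1.0, 1.0]])
mhd, ahd = hausdorff(a, b)
print(mhd)  # 1.0
```

MHD is dominated by the single worst-matched point, which is why it complements overlap measures such as DSC: a segmentation can have high DSC yet a long stray spur that only MHD exposes.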
Collapse
Affiliation(s)
- Yanchen Ying
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, 200030, China
- Key Laboratory of Artificial Micro- and Nano-Structures of Ministry of Education and Center for Electronic Microscopy and Department of Physics, Wuhan University, Wuhan, 430070, China
| | - Hao Wang
- Institute of Modern Physics, Fudan University, Shanghai, China
| | - Hua Chen
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, 200030, China
| | - Jianfan Cheng
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, 200030, China
| | - Hengle Gu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, 200030, China
| | - Yan Shao
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, 200030, China
| | - Yanhua Duan
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, 200030, China
| | - Aihui Feng
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, 200030, China
| | - Wen Feng
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, 200030, China
| | - Xiaolong Fu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, 200030, China
| | - Hong Quan
- Key Laboratory of Artificial Micro- and Nano-Structures of Ministry of Education and Center for Electronic Microscopy and Department of Physics, Wuhan University, Wuhan, 430070, China
| | - Zhiyong Xu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, 200030, China.
| |
Collapse
|
120
|
Ultra-short echo-time magnetic resonance imaging lung segmentation with under-Annotations and domain shift. Med Image Anal 2021; 72:102107. [PMID: 34153626 DOI: 10.1016/j.media.2021.102107] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2020] [Revised: 03/22/2021] [Accepted: 05/19/2021] [Indexed: 12/12/2022]
Abstract
Ultra-short echo-time (UTE) magnetic resonance imaging (MRI) provides enhanced visualization of pulmonary structural and functional abnormalities and has shown promise in phenotyping lung disease. Here, we describe the development and evaluation of a lung segmentation approach to facilitate UTE MRI methods for patient-based imaging. The proposed approach employs a k-means algorithm in kernel space for pair-wise feature clustering and imposes continuous regularization in the image domain, coined continuous kernel k-means (CKKM). The high-order CKKM algorithm was simplified through upper bound relaxation and solved within an iterative continuous max-flow framework. We combined CKKM with U-net and atlas-based approaches and comprehensively evaluated performance on 100 images from 25 patients with asthma and bronchopulmonary dysplasia enrolled at Robarts Research Institute (Western University, London, Canada) and Centre Hospitalier Universitaire Sainte-Justine (Montreal, Canada). For U-net, we trained the network five times on a mixture of five different images with under-annotations and applied the model to 64 images from the two centres. We also trained a U-net on five images with full and brush annotations from one centre and tested the model on 32 images from the other centre. For the atlas-based approach, we employed three atlas images to segment 64 target images from the two centres through straightforward atlas registration and label fusion. We applied the CKKM algorithm to the baseline U-net and atlas outputs and refined the initial segmentation through multi-volume image fusion. The integration of CKKM substantially improved baseline results and yielded, with minimal computational cost, segmentation accuracy and precision greater than those of some state-of-the-art deep learning models and similar to experienced-observer manual segmentation. This suggests that deep learning and atlas-based approaches may be utilized to segment UTE MRI datasets using relatively small training datasets with under-annotations.
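The kernel and continuous max-flow components of CKKM are beyond a short sketch, but the k-means core — alternating assignment and centroid updates on voxel intensities — can be illustrated in a few lines of NumPy:

```python
import numpy as np

def kmeans_1d(values, k=2, n_iter=20):
    """Plain k-means on scalar intensities; returns labels and centroids.
    (The CKKM paper additionally works in kernel space and adds
    continuous spatial regularization, which this sketch omits.)"""
    centroids = np.linspace(values.min(), values.max(), k)
    for _ in range(n_iter):
        # Assignment step: nearest centroid per value.
        labels = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        # Update step: centroid = mean of its assigned values.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = values[labels == j].mean()
    return labels, centroids

# Two well-separated intensity populations (e.g., lung vs. background).
vals = np.array([0.1, 0.12, 0.09, 0.8, 0.85, 0.9])
labels, cents = kmeans_1d(vals, k=2)
print(labels)  # [0 0 0 1 1 1]
```

The kernel variant replaces the Euclidean distance with distances in an implicit feature space, and the continuous regularization penalizes label changes between neighbouring voxels; both refine, rather than replace, this alternating scheme.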
Collapse
|
121
|
Aganj I, Fischl B. Multi-Atlas Image Soft Segmentation via Computation of the Expected Label Value. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1702-1710. [PMID: 33687840 PMCID: PMC8202781 DOI: 10.1109/tmi.2021.3064661] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
The use of multiple atlases is common in medical image segmentation. This typically requires deformable registration of the atlases (or the average atlas) to the new image, which is computationally expensive and susceptible to entrapment in local optima. We propose to instead consider the probability of all possible atlas-to-image transformations and compute the expected label value (ELV), thereby not relying merely on the transformation deemed "optimal" by the registration method. Moreover, we do so without actually performing deformable registration, thus avoiding the associated computational costs. We evaluate our ELV computation approach by applying it to brain, liver, and pancreas segmentation on datasets of magnetic resonance and computed tomography images.
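The ELV idea replaces the single "optimal" registration with an expectation over candidate transformations. A toy one-dimensional sketch, where the candidate transformations are simple integer shifts with assumed probabilities (the paper handles full deformable transformations without enumerating them):

```python
import numpy as np

def expected_label_value(atlas_label, shifts, probs):
    """Soft label map as the expectation of the atlas label over a set
    of candidate translations with given probabilities (toy 1-D version)."""
    elv = np.zeros_like(atlas_label, dtype=float)
    for s, p in zip(shifts, probs):
        elv += p * np.roll(atlas_label, s)  # label under transformation s, weighted
    return elv

label = np.array([0, 0, 1, 1, 0, 0])
shifts = [-1, 0, 1]          # candidate atlas-to-image transformations
probs = [0.25, 0.5, 0.25]    # their assumed probabilities
elv = expected_label_value(label, shifts, probs)
print(elv)  # highest where most candidate transformations agree on foreground
```

The result is a soft segmentation that is confident where the label is stable under plausible transformations and uncertain near boundaries, without ever committing to one registration.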
Collapse
|
122
|
Doyle PW, Kavoussi NL. Machine learning applications to enhance patient specific care for urologic surgery. World J Urol 2021; 40:679-686. [PMID: 34047826 DOI: 10.1007/s00345-021-03738-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2021] [Accepted: 05/17/2021] [Indexed: 11/24/2022] Open
Abstract
PURPOSE As computational power has improved over the past 20 years, the application of machine learning methods has become more prevalent in daily life, and there is increasing interest in their clinical application. We sought to review the current literature regarding machine learning applications for patient-specific urologic surgical care. METHODS We performed a broad search of the current literature via the PubMed-Medline and Google Scholar databases through December 2020, using the search terms "urologic surgery" as well as "artificial intelligence", "machine learning", "neural network", and "automation". RESULTS Machine learning applications for patient counseling are disease-specific. For stone disease, multiple studies focused on predicting the stone-free rate from preoperative clinical and imaging data. For kidney cancer, many studies focused on advanced imaging analysis to predict renal mass pathology preoperatively. Machine learning applications in prostate cancer could support treatment counseling as well as prediction of disease-specific outcomes. For bladder cancer, the reviewed studies focus on staging via imaging to better counsel patients towards neoadjuvant chemotherapy. Additionally, there have been many efforts to automatically segment and match preoperative imaging with intraoperative anatomy. CONCLUSION Machine learning techniques can be implemented to assist patient-centered surgical care and increase patient engagement in decision-making. As data sets improve and expand, especially with the transition to large-scale electronic health record (EHR) usage, these tools will improve in efficacy and be utilized more frequently.
Collapse
Affiliation(s)
- Patrick W Doyle
- Department of Urology, Vanderbilt University Medical Center, 3823 The Vanderbilt Clinic, Nashville, Tennessee, 37232, USA
| | - Nicholas L Kavoussi
- Department of Urology, Vanderbilt University Medical Center, 3823 The Vanderbilt Clinic, Nashville, Tennessee, 37232, USA.
| |
Collapse
|
123
|
Xu X, Lian C, Wang S, Zhu T, Chen RC, Wang AZ, Royce TJ, Yap PT, Shen D, Lian J. Asymmetric multi-task attention network for prostate bed segmentation in computed tomography images. Med Image Anal 2021; 72:102116. [PMID: 34217953 DOI: 10.1016/j.media.2021.102116] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2020] [Revised: 05/18/2021] [Accepted: 05/21/2021] [Indexed: 10/21/2022]
Abstract
Post-prostatectomy radiotherapy requires accurate annotation of the prostate bed (PB), i.e., the residual tissue after the operative removal of the prostate gland, to minimize side effects on surrounding organs-at-risk (OARs). However, PB segmentation in computed tomography (CT) images is a challenging task, even for experienced physicians, because the PB is almost a "virtual" target with non-contrast boundaries and highly variable shapes depending on neighboring OARs. In this work, we propose an asymmetric multi-task attention network (AMTA-Net) for the concurrent segmentation of the PB and surrounding OARs. Our AMTA-Net mimics experts in delineating the non-contrast PB by explicitly leveraging its critical dependency on the neighboring OARs (i.e., the bladder and rectum), which are relatively easy to distinguish in CT images. Specifically, we first adopt a U-Net as the backbone network for the low-level (or prerequisite) task of OAR segmentation. Then, we build an attention sub-network upon the backbone U-Net with a series of cascaded attention modules, which can hierarchically transfer the OAR features and adaptively learn discriminative representations for the high-level (or primary) task of PB segmentation. We comprehensively evaluate the proposed AMTA-Net on a clinical dataset of 186 CT images. According to the experimental results, our AMTA-Net significantly outperforms the current clinical state of the art (i.e., atlas-based segmentation methods), indicating the value of our method in reducing time and labor in the clinical workflow. Our AMTA-Net also outperforms the technical state of the art (i.e., deep learning based segmentation methods), especially for the most indistinguishable and clinically critical parts of the PB boundaries. Source code is released at https://github.com/superxuang/amta-net.
Collapse
Affiliation(s)
- Xuanang Xu
- Department of Radiology and Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Chunfeng Lian
- Department of Radiology and Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
| | - Shuai Wang
- Department of Radiology and Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, Shandong 264209, China
| | - Tong Zhu
- Department of Radiation Oncology, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Ronald C Chen
- Department of Radiation Oncology, University of Kansas Medical Center, Kansas City, KS 66160, USA
| | - Andrew Z Wang
- Department of Radiation Oncology, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Trevor J Royce
- Department of Radiation Oncology, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200030, China; Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea.
| | - Jun Lian
- Department of Radiation Oncology, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA.
| |
Collapse
|
124
|
Liu L, Wolterink JM, Brune C, Veldhuis RNJ. Anatomy-aided deep learning for medical image segmentation: a review. Phys Med Biol 2021; 66. [PMID: 33906186 DOI: 10.1088/1361-6560/abfbf4] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2021] [Accepted: 04/27/2021] [Indexed: 01/17/2023]
Abstract
Deep learning (DL) has become widely used for medical image segmentation in recent years. However, despite these advances, there are still problems for which DL-based segmentation fails. Recently, some DL approaches have achieved breakthroughs by using anatomical information, which is a crucial cue in manual segmentation. In this paper, we provide a review of anatomy-aided DL for medical image segmentation that systematically summarizes the categories of anatomical information and their corresponding representation methods. We address known and potentially solvable challenges in anatomy-aided DL and present a categorized methodological overview, drawn from over 70 papers, of how anatomical information is used with DL. Finally, we discuss the strengths and limitations of current anatomy-aided DL approaches and suggest potential future work.
Affiliation(s)
- Lu Liu: Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands; Data Management and Biometrics, Department of Computer Science, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Jelmer M Wolterink: Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Christoph Brune: Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Raymond N J Veldhuis: Data Management and Biometrics, Department of Computer Science, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands

125
Wang T, Lei Y, Roper J, Ghavidel B, Beitler JJ, McDonald M, Curran WJ, Liu T, Yang X. Head and neck multi-organ segmentation on dual-energy CT using dual pyramid convolutional neural networks. Phys Med Biol 2021; 66. [PMID: 33915524] [PMCID: PMC11747937] [DOI: 10.1088/1361-6560/abfce2]
Abstract
Organ delineation is crucial to diagnosis and therapy, yet it is labor-intensive and observer-dependent. Dual-energy CT (DECT) provides additional image contrast compared with conventional single-energy CT (SECT), which may facilitate automatic organ segmentation. This work aims to develop an automatic multi-organ segmentation approach using deep learning for the head-and-neck region on DECT. We proposed a mask scoring regional convolutional neural network (R-CNN) in which comprehensive features are first learned from two independent pyramid networks and then combined via a deep attention strategy to highlight the informative features extracted from both the low- and high-energy CT channels. To perform multi-organ segmentation and avoid misclassification, a mask scoring subnetwork was integrated into the Mask R-CNN framework to build the correlation between the class of a potential detected organ's region-of-interest (ROI) and the shape of that organ's segmentation within that ROI. We evaluated our model on DECT images from 127 head-and-neck cancer patients (66 training, 61 testing) with manual contours of 19 organs as the training target and ground truth. For large- and mid-sized organs such as the brain and parotid glands, the proposed method achieved an average Dice similarity coefficient (DSC) larger than 0.8. For small organs with very low contrast, such as the chiasm, cochlea, lens and optic nerves, the DSCs ranged between approximately 0.5 and 0.8. With the proposed method, using DECT images outperformed using SECT in almost all 19 organs, with statistical significance in DSC (p < 0.05). Using DECT, the proposed method was also significantly superior to a recently developed FCN-based method in most organs in terms of DSC and the 95th-percentile Hausdorff distance. Quantitative results demonstrated the feasibility of the proposed method, the superiority of DECT over SECT, and the advantage of the proposed R-CNN over the FCN in this head-and-neck patient study. The proposed method has the potential to facilitate treatment planning in the current head-and-neck cancer radiation therapy workflow.
Affiliation(s)
- Tonghe Wang, Yang Lei, Justin Roper, Beth Ghavidel, Jonathan J Beitler, Mark McDonald, Walter J Curran, Tian Liu, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America

126
Zhang Z, Powell K, Yin C, Cao S, Gonzalez D, Hannawi Y, Zhang P. Brain Atlas Guided Attention U-Net for White Matter Hyperintensity Segmentation. AMIA Jt Summits Transl Sci Proc 2021; 2021:663-671. [PMID: 34457182] [PMCID: PMC8378613]
Abstract
White matter hyperintensities (WMH) are the most common manifestation of cerebral small vessel disease (cSVD) on brain MRI. Accurate WMH segmentation algorithms are important for determining cSVD burden and its clinical consequences. Most existing WMH segmentation algorithms require both fluid-attenuated inversion recovery (FLAIR) images and T1-weighted images as inputs. However, T1-weighted images are typically not part of the standard clinical scans acquired for patients with acute stroke. In this paper, we propose a novel brain atlas guided attention U-Net (BAGAU-Net) that leverages only FLAIR images together with a spatially registered white matter (WM) brain atlas to yield competitive WMH segmentation performance. Specifically, we designed a dual-path segmentation model with two novel connecting mechanisms, a multi-input attention module (MAM) and an attention fusion module (AFM), to fuse the information from the two paths for accurate results. Experiments on two publicly available datasets show the effectiveness of the proposed BAGAU-Net. With only FLAIR images and a WM brain atlas, BAGAU-Net outperforms the state-of-the-art method that uses T1-weighted images, paving the way for the effective development of WMH segmentation. Availability: https://github.com/Ericzhang1/BAGAU-Net.
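The general idea of gating one feature path (the atlas) against another (the FLAIR image) can be illustrated with a toy attention-fusion sketch; the shapes, weights, and sigmoid gating below are generic assumptions, not the published MAM/AFM definitions:

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_fuse(feat_img, feat_atlas, w=(0.5, 0.5)):
    """Gate atlas-path features by a sigmoid attention map computed from both
    paths, then add them to the image-path features (a generic fusion pattern)."""
    gate = 1.0 / (1.0 + np.exp(-(w[0] * feat_img + w[1] * feat_atlas)))  # in (0, 1)
    return feat_img + gate * feat_atlas

feat_img = rng.standard_normal((4, 4))    # features from the FLAIR path
feat_atlas = rng.standard_normal((4, 4))  # features from the WM-atlas path

fused = attention_fuse(feat_img, feat_atlas)
```

Because the gate is bounded in (0, 1), the atlas can only modulate, never overwhelm, the image features; this is the usual motivation for attention-based rather than additive fusion.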
Affiliation(s)
- Zicong Zhang: Computer Science and Engineering, The Ohio State University, Columbus, Ohio, USA
- Kimerly Powell: Biomedical Informatics, The Ohio State University, Columbus, Ohio, USA; Department of Radiology, The Ohio State University, Columbus, Ohio, USA
- Changchang Yin: Computer Science and Engineering, The Ohio State University, Columbus, Ohio, USA
- Shilei Cao: Tencent Jarvis Lab, Tencent, Shenzhen, China
- Dani Gonzalez: Biomedical Engineering, The Ohio State University, Columbus, Ohio, USA
- Yousef Hannawi (corresponding author): Department of Neurology, The Ohio State University, Columbus, Ohio, USA
- Ping Zhang (corresponding author): Computer Science and Engineering, The Ohio State University, Columbus, Ohio, USA; Biomedical Informatics, The Ohio State University, Columbus, Ohio, USA

127
Petit O, Thome N, Soler L. Iterative confidence relabeling with deep ConvNets for organ segmentation with partial labels. Comput Med Imaging Graph 2021; 91:101938. [PMID: 34153879] [DOI: 10.1016/j.compmedimag.2021.101938]
Abstract
Training deep ConvNets requires large labeled datasets. However, collecting pixel-level labels for medical image segmentation is very expensive and requires a high level of expertise. In addition, most existing segmentation masks provided by clinical experts focus on specific anatomical structures. In this paper, we propose a method dedicated to handling such partially labeled medical image datasets. We propose a strategy to identify pixels for which the labels are correct, and to train fully convolutional neural networks with a multi-label loss adapted to this context. In addition, we introduce an iterative confidence self-training approach, inspired by curriculum learning, to relabel missing pixel labels: the most confident predictions are selected using a specifically designed confidence network that learns an uncertainty measure, which is then leveraged in the relabeling process. Our approach, INERRANT (Iterative coNfidencE Relabeling of paRtial ANnoTations), is thoroughly evaluated on two public datasets (TCAI and LITS) and one internal dataset with seven abdominal organ classes. We show that INERRANT deals robustly with partial labels, performing similarly to a model trained on all labels even for large proportions of missing labels. We also highlight the importance of our iterative learning scheme and the proposed confidence measure for optimal performance. Finally, we show a practical use case in which a limited amount of completely labeled data is enriched with publicly available but partially labeled data.
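The core trick for partial labels, computing the loss only over pixels whose labels are known, can be sketched as a masked binary cross-entropy; this is a generic illustration of the idea, not the exact INERRANT loss:

```python
import numpy as np

def masked_bce(prob, target, known):
    """Binary cross-entropy averaged only over pixels flagged as reliably labeled;
    unannotated pixels contribute nothing to the gradient."""
    eps = 1e-7
    prob = np.clip(prob, eps, 1 - eps)
    ce = -(target * np.log(prob) + (1 - target) * np.log(1 - prob))
    return ce[known].mean()

prob = np.array([[0.9, 0.2], [0.6, 0.1]])         # predicted foreground probabilities
target = np.array([[1.0, 0.0], [1.0, 0.0]])       # partial ground truth
known = np.array([[True, True], [False, False]])  # only the first row is annotated

loss = masked_bce(prob, target, known)  # averages over 2 of the 4 pixels
```

Relabeling then amounts to gradually flipping entries of `known` to `True` where the confidence network deems the current prediction trustworthy.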
Affiliation(s)
- Olivier Petit: CEDRIC, Conservatoire National des Arts et Metiers, 292 rue Saint-Martin, Paris, 75003, France; Visible Patient, 8 rue Gustave Adolphe Hirn, Strasbourg, 67000, France
- Nicolas Thome: CEDRIC, Conservatoire National des Arts et Metiers, 292 rue Saint-Martin, Paris, 75003, France
- Luc Soler: Visible Patient, 8 rue Gustave Adolphe Hirn, Strasbourg, 67000, France

128
Tuzzi E, Balla DZ, Loureiro JRA, Neumann M, Laske C, Pohmann R, Preische O, Scheffler K, Hagberg GE. Ultra-High Field MRI in Alzheimer's Disease: Effective Transverse Relaxation Rate and Quantitative Susceptibility Mapping of Human Brain In Vivo and Ex Vivo compared to Histology. J Alzheimers Dis 2021; 73:1481-1499. [PMID: 31958079] [DOI: 10.3233/jad-190424]
Abstract
Alzheimer's disease (AD) is the most common cause of dementia worldwide. So far, a diagnosis of AD is only unequivocally established through postmortem histology. Amyloid plaques are a classical hallmark of AD, and amyloid load is currently quantified in vivo by positron emission tomography (PET). Ultra-high field magnetic resonance imaging (UHF-MRI) can potentially provide a non-invasive biomarker for AD by imaging pathological processes at very high spatial resolution. The first aim of this work was to reproduce the characteristic cortical pattern previously observed in vivo in AD patients using weighted imaging at 7T. We extended these findings using quantitative susceptibility mapping (QSM) and quantification of the effective transverse relaxation rate (R2*) at 9.4T. The second aim was to investigate the origin of the contrast patterns observed in vivo in the cortex of AD patients at 9.4T by comparing quantitative UHF-MRI (9.4T and 14.1T) of postmortem samples with histology. We observed a distinctive cortical pattern in vivo in patients compared to healthy controls (HC), and these findings were confirmed ex vivo. Specifically, we found a close link between the signal changes detected by QSM in the AD sample at 14.1T and the distribution pattern of amyloid plaques in histological sections of the same specimen. Our findings show that QSM and R2* maps can distinguish AD from HC at UHF by detecting cortical alterations directly related to amyloid plaques, and we provide a method to quantify amyloid plaque load in AD patients non-invasively at UHF.
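R2* quantification from a multi-echo acquisition reduces to fitting a mono-exponential decay, S(TE) = S0 * exp(-R2* * TE), which a log-linear least-squares fit recovers directly; a sketch on noise-free synthetic data (the echo times and parameter values below are made up for illustration):

```python
import numpy as np

def fit_r2star(te, signal):
    """Estimate R2* (1/s) and S0 by linear regression of log-signal vs echo time."""
    slope, intercept = np.polyfit(te, np.log(signal), 1)
    return -slope, np.exp(intercept)  # R2* is the negative slope

te = np.array([0.005, 0.010, 0.015, 0.020, 0.025])  # echo times in seconds
true_r2s, true_s0 = 40.0, 100.0                     # 40 1/s, arbitrary signal units
signal = true_s0 * np.exp(-true_r2s * te)           # synthetic mono-exponential decay

r2s, s0 = fit_r2star(te, signal)  # recovers 40.0 and 100.0 on noise-free data
```

In practice a magnitude-bias-aware or weighted fit is preferred at low SNR, but the log-linear version conveys what an R2* map encodes per voxel.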
Affiliation(s)
- Elisa Tuzzi: Department for High Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Department for Biomedical Magnetic Resonance, Eberhard Karl's University, Tübingen and University Hospital, Tübingen, Germany
- David Z Balla: Department for Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Joana R A Loureiro: Department for High Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Department for Biomedical Magnetic Resonance, Eberhard Karl's University, Tübingen and University Hospital, Tübingen, Germany; Ahmanson-Lovelace Brain Mapping Center, Department of Neurology, University of California, Los Angeles, CA, USA
- Manuela Neumann: Department of Neuropathology, University Hospital, Tübingen, Germany; German Center for Neurodegenerative Diseases (DZNE) Tübingen, Germany
- Christoph Laske: German Center for Neurodegenerative Diseases (DZNE) Tübingen, Germany; Section for Dementia Research, Hertie Institute for Clinical Brain Research and Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
- Rolf Pohmann: Department for High Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Oliver Preische: German Center for Neurodegenerative Diseases (DZNE) Tübingen, Germany; Section for Dementia Research, Hertie Institute for Clinical Brain Research and Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
- Klaus Scheffler: Department for High Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Department for Biomedical Magnetic Resonance, Eberhard Karl's University, Tübingen and University Hospital, Tübingen, Germany
- Gisela E Hagberg: Department for High Field Magnetic Resonance, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Department for Biomedical Magnetic Resonance, Eberhard Karl's University, Tübingen and University Hospital, Tübingen, Germany

129
Veiga C, Lim P, Anaya VM, Chandy E, Ahmad R, D'Souza D, Gaze M, Moinuddin S, Gains J. Atlas construction and spatial normalisation to facilitate radiation-induced late effects research in childhood cancer. Phys Med Biol 2021; 66. [PMID: 33735848] [PMCID: PMC8112163] [DOI: 10.1088/1361-6560/abf010]
Abstract
Reducing radiation-induced side effects is one of the most important challenges in paediatric cancer treatment. Recently, there has been growing interest in using spatial normalisation to enable voxel-based analysis of radiation-induced toxicities in a variety of patient groups. Considering the three-dimensional distribution of dose, rather than dose-volume histograms, is desirable but not yet explored in paediatric populations. In this paper, we investigate the feasibility of atlas construction and spatial normalisation in paediatric radiotherapy. We used planning computed tomography (CT) scans from twenty paediatric patients historically treated with craniospinal irradiation to generate a template CT suitable for spatial normalisation. This template, representative of a childhood cancer population, was constructed using groupwise image registration. An independent set of 53 subjects with a variety of childhood malignancies was then used to assess the quality of propagating new subjects to this common reference space using deformable image registration (i.e. spatial normalisation). The method was evaluated in terms of overall image similarity metrics, contour similarity and preservation of dose-volume properties. After spatial normalisation, we report Dice similarity coefficients of 0.95 ± 0.05, 0.85 ± 0.04, 0.96 ± 0.01, 0.91 ± 0.03, 0.83 ± 0.06 and 0.65 ± 0.16 for the brain and spinal canal, ocular globes, lungs, liver, kidneys and bladder, respectively. We then demonstrated the potential advantages of an atlas-based approach for studying the risk of second malignant neoplasms after radiotherapy. Our findings indicate satisfactory mapping between a heterogeneous group of patients and the template CT. Performance was poorest for organs in the abdominal and pelvic region, likely due to respiratory and physiological motion and the highly deformable nature of abdominal organs. More specialised algorithms should be explored in the future to improve mapping in these regions. This study is a first step toward voxel-based analysis of radiation-induced toxicities following paediatric radiotherapy.
Affiliation(s)
- Catarina Veiga: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Pei Lim: Department of Oncology, University College London Hospital NHS Foundation Trust, London, United Kingdom
- Virginia Marin Anaya: Radiotherapy Physics Services, University College London Hospital NHS Foundation Trust, London, United Kingdom
- Edward Chandy: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom; UCL Cancer Institute, University College London, London, United Kingdom
- Reem Ahmad: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Derek D'Souza: Radiotherapy Physics Services, University College London Hospital NHS Foundation Trust, London, United Kingdom
- Mark Gaze: Department of Oncology, University College London Hospital NHS Foundation Trust, London, United Kingdom
- Syed Moinuddin: Radiotherapy, University College London Hospital NHS Foundation Trust, London, United Kingdom
- Jennifer Gains: Department of Oncology, University College London Hospital NHS Foundation Trust, London, United Kingdom

130
Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. A review of deep learning based methods for medical image multi-organ segmentation. Phys Med 2021; 85:107-122. [PMID: 33992856] [PMCID: PMC8217246] [DOI: 10.1016/j.ejmp.2021.05.003]
Abstract
Deep learning has revolutionized image processing and achieved state-of-the-art performance in many medical image segmentation tasks. Many deep learning-based methods have been published to segment different parts of the body for different medical applications, and it is necessary to summarize the current state of development of deep learning in medical image segmentation. In this paper, we provide a comprehensive review with a focus on multi-organ image segmentation, which is crucial for radiotherapy, where the tumor and organs-at-risk need to be contoured for treatment planning. We grouped the surveyed methods into two broad categories, 'pixel-wise classification' and 'end-to-end segmentation', and divided each category into subgroups according to network design. For each type, we list the surveyed works, highlight important contributions and identify specific challenges. Following the detailed review, we discuss the achievements, shortcomings and future potential of each category. To enable direct comparison, we list the performance of the surveyed works that used the thoracic and head-and-neck benchmark datasets.
Affiliation(s)
- Yabo Fu, Yang Lei, Tonghe Wang, Walter J Curran, Tian Liu, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA

131
Amiri S, Akbarabadi M, Abdolali F, Nikoofar A, Esfahani AJ, Cheraghi S. Radiomics analysis on CT images for prediction of radiation-induced kidney damage by machine learning models. Comput Biol Med 2021; 133:104409. [PMID: 33940534] [DOI: 10.1016/j.compbiomed.2021.104409]
Abstract
INTRODUCTION: We aimed to assess the power of computed tomography-based radiomic features to predict the risk of chronic kidney disease (CKD) in patients undergoing radiation therapy for abdominal cancers. METHODS: Fifty patients were evaluated for CKD 12 months after completion of abdominal radiation therapy. First, the region of interest was automatically extracted from the computed tomography images using deep learning models. A combination of radiomic and clinical features was then extracted from the region of interest to build a radiomic signature. Finally, six popular classifiers (Bernoulli naive Bayes, decision tree, gradient boosting decision trees, k-nearest neighbors, random forest, and support vector machine) were used to predict CKD. Evaluation criteria were accuracy, sensitivity, specificity, and area under the ROC curve (AUC). RESULTS: Most of the patients (58%) developed CKD. A total of 140 radiomic features were extracted from the segmented area. Among the six classifiers, random forest performed best, with an accuracy of 94% and an AUC of 0.99. CONCLUSION: These quantitative results show that a combination of radiomic and clinical features can predict chronic radiation-induced kidney toxicity, and demonstrate the effect of factors such as renal radiation dose, irradiated renal volume, and 24-hour urine volume on CKD.
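The AUC reported as an evaluation criterion has a direct probabilistic reading: the chance that a randomly chosen positive case outscores a randomly chosen negative one. A small sketch via pairwise comparison, with invented risk scores rather than the study's data:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC as the probability that a random positive outscores a random negative,
    counting ties as 0.5 (the Mann-Whitney formulation)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0]            # 1 = developed chronic kidney disease
scores = [0.9, 0.8, 0.4, 0.5, 0.1]  # hypothetical classifier risk scores

auc = roc_auc(scores, labels)       # 5 of 6 positive/negative pairs ranked correctly
```

An AUC of 0.99, as reported for the random forest, means almost every affected patient was ranked above almost every unaffected one.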
Affiliation(s)
- Sepideh Amiri: Department of Information Technology, Faculty of Electrical and Computer Engineering, University of Tehran, Tehran, Iran
- Mina Akbarabadi: Department of Information Technology, Faculty of Industrial Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Fatemeh Abdolali: Department of Radiology and Diagnostic Imaging, Faculty of Medicine and Dentistry, Alberta University, Edmonton, AB, Canada
- Alireza Nikoofar: Department of Radiation Oncology, Faculty of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Azam Janati Esfahani: Department of Medical Biotechnology, School of Paramedical Sciences and Cellular and Molecular Research Center, Research Institute for Prevention of Non-Communicable Diseases, Qazvin University of Medical Sciences, Qazvin, Iran
- Susan Cheraghi: Radiation Biology Research Center, Iran University of Medical Sciences, Tehran, Iran; Department of Radiation Sciences, Faculty of Allied Medicine, Iran University of Medical Sciences, Tehran, Iran

132
Lei Y, Wang T, Tian S, Fu Y, Patel P, Jani AB, Curran WJ, Liu T, Yang X. Male pelvic CT multi-organ segmentation using synthetic MRI-aided dual pyramid networks. Phys Med Biol 2021; 66. [PMID: 33780918] [PMCID: PMC11755409] [DOI: 10.1088/1361-6560/abf2f9]
Abstract
The delineation of the prostate and organs-at-risk (OARs) is fundamental to prostate radiation treatment planning, but is currently labor-intensive and observer-dependent. We aimed to develop an automated computed tomography (CT)-based multi-organ (bladder, prostate, rectum, and left and right femoral heads) segmentation method for prostate radiation therapy treatment planning. The proposed method uses synthetic MRIs (sMRIs) to provide superior soft-tissue information for male pelvic CT images. Cycle-consistent adversarial networks (CycleGAN) were used to generate the CT-based sMRIs. Dual pyramid networks (DPNs) extracted features from both CTs and sMRIs, and a deep attention strategy was integrated into the DPNs to select the most relevant features from both modalities for identifying organ boundaries. The CT-based sMRI generated from our previously trained CycleGAN and its corresponding CT image were input to the proposed DPNs to provide complementary information for pelvic multi-organ segmentation. The proposed method was trained and evaluated using datasets from 140 patients with prostate cancer and compared against state-of-the-art methods. The Dice similarity coefficients and mean surface distances between our results and the ground truth were 0.95 ± 0.05, 1.16 ± 0.70 mm; 0.88 ± 0.08, 1.64 ± 1.26 mm; 0.90 ± 0.04, 1.27 ± 0.48 mm; 0.95 ± 0.04, 1.08 ± 1.29 mm; and 0.95 ± 0.04, 1.11 ± 1.49 mm for the bladder, prostate, rectum, and left and right femoral heads, respectively. Mean center-of-mass distances were within 3 mm for all organs. Our results were significantly better than those of competing methods on most evaluation metrics. We demonstrated the feasibility of sMRI-aided DPNs for multi-organ segmentation on pelvic CT images, and their superiority over other networks. The proposed method could be used in routine prostate cancer radiotherapy treatment planning to rapidly segment the prostate and standard OARs.
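The center-of-mass distance quoted above ("within 3 mm for all organs") is simple to compute from two binary masks and the voxel spacing; a toy 2D sketch with invented masks:

```python
import numpy as np

def center_of_mass(mask, spacing):
    """Center of mass of a binary mask, in physical units (e.g. mm)."""
    coords = np.argwhere(mask)  # (n_voxels, ndim) array of voxel indices
    return coords.mean(axis=0) * np.asarray(spacing)

def com_distance(mask_a, mask_b, spacing):
    """Euclidean distance between the centers of mass of two masks."""
    return float(np.linalg.norm(center_of_mass(mask_a, spacing)
                                - center_of_mass(mask_b, spacing)))

spacing = (1.0, 1.0)  # mm per voxel along each axis
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True  # COM at (3.5, 3.5) mm
b = np.zeros((10, 10), dtype=bool); b[4:8, 2:6] = True  # COM at (5.5, 3.5) mm

d = com_distance(a, b, spacing)  # masks are 2.0 mm apart along the first axis
```

Unlike Dice, this metric is insensitive to shape and size agreement, which is why it is reported alongside overlap and surface-distance measures rather than instead of them.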
Affiliation(s)
- Sibo Tian, Yabo Fu, Pretesh Patel, Ashesh B Jani, Walter J Curran, Tian Liu, Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America

133
Ding Z, Niethammer M. VoteNet++: Registration refinement for multi-atlas segmentation. Proc IEEE Int Symp Biomed Imaging 2021; 2021:275-279. [PMID: 39247161] [PMCID: PMC11378331] [DOI: 10.1109/isbi48211.2021.9434031]
Abstract
Multi-atlas segmentation (MAS) is a popular image segmentation technique for medical images. In this work, we improve the performance of MAS by correcting registration errors before label fusion. Specifically, we use a volumetric displacement field to refine registrations based on image anatomical appearance and predicted labels. We show the influence of the initial spatial alignment as well as the beneficial effect of using label information for MAS performance. Experiments demonstrate that the proposed refinement approach improves MAS performance on a 3D magnetic resonance dataset of the knee.
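The label-fusion step that follows registration in MAS can be illustrated with plain per-voxel majority voting over warped atlas segmentations; this is a deliberately simple fusion rule for illustration (the paper's contribution is refining the registrations that feed this step, not the voting itself):

```python
import numpy as np

def majority_vote(atlas_labels):
    """Fuse per-voxel labels from several registered atlases by majority vote.
    atlas_labels: (n_atlases, ...) integer array of warped segmentations."""
    atlas_labels = np.asarray(atlas_labels)
    n_classes = atlas_labels.max() + 1
    # Count votes per class at every voxel, then take the most-voted class.
    votes = np.stack([(atlas_labels == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

# Three toy warped atlas segmentations of a 1D "image" with labels {0, 1}.
atlases = np.array([
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [1, 1, 1, 0],
])

fused = majority_vote(atlases)  # per-voxel majority: [0, 1, 1, 0]
```

Because fusion simply aggregates the warped labels, any residual registration error propagates directly into the result, which motivates correcting registrations before voting.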
Affiliation(s)
- Zhipeng Ding: Department of Computer Science, UNC Chapel Hill, USA

134
Ogier AC, Hostin MA, Bellemare ME, Bendahan D. Overview of MR Image Segmentation Strategies in Neuromuscular Disorders. Front Neurol 2021; 12:625308. [PMID: 33841299] [PMCID: PMC8027248] [DOI: 10.3389/fneur.2021.625308]
Abstract
Neuromuscular disorders (NMD) are rare diseases for which few therapeutic strategies currently exist, and the assessment of their efficiency is limited by the lack of biomarkers sensitive to the slow progression of these diseases. Magnetic resonance imaging (MRI) has emerged as a tool of choice for the development of qualitative scores for the study of NMD, and the recent emergence of quantitative MRI has provided quantitative biomarkers more sensitive to pathological changes in muscle tissue. However, extracting these biomarkers from specific regions of interest requires muscle segmentation, and the time-consuming nature of manual segmentation has limited their evaluation on large cohorts. In recent years, several methods have been proposed to make the segmentation step automatic or semi-automatic. The purpose of this study was to review these methods and discuss their reliability, reproducibility, and limitations in the context of NMD. Particular attention is paid to recent deep learning methods, which have emerged as effective image segmentation tools in many other clinical contexts.
Affiliation(s)
- Augustin C Ogier: Aix Marseille Univ, Université de Toulon, CNRS, LIS, Marseille, France
- Marc-Adrien Hostin: Aix Marseille Univ, Université de Toulon, CNRS, LIS, Marseille, France; Aix Marseille Univ, CNRS, CRMBM, UMR 7339, Marseille, France
- David Bendahan: Aix Marseille Univ, CNRS, CRMBM, UMR 7339, Marseille, France

135
Brain tissues have single-voxel signatures in multi-spectral MRI. Neuroimage 2021; 234:117986. [PMID: 33757906] [DOI: 10.1016/j.neuroimage.2021.117986]
Abstract
Since the seminal works of Brodmann and contemporaries, it has been well known that different brain regions exhibit unique cytoarchitectonic and myeloarchitectonic features. Transferring this approach of classifying brain tissues - and other tissues - based on their intrinsic features to the realm of magnetic resonance (MR) is a longstanding endeavor. In the 1990s, atlas-based segmentation replaced earlier multi-spectral classification approaches because of the large overlap between the class distributions. Here, we explored the feasibility of global brain classification based on intrinsic MR features, exploiting several technological advances: ultra-high field MRI; q-space trajectory diffusion imaging, revealing voxel-intrinsic diffusion properties; chemical exchange saturation transfer and semi-solid magnetization transfer imaging as markers of myelination and neurochemistry; and current neural network architectures for data analysis. In particular, we also used the raw image data to increase the number of input features. We found that a global classification of roughly 97 brain regions was feasible, with a gross classification accuracy of 60%, showing that a mapping from voxel-intrinsic MR data to the brain region to which the data belong is possible. This indicates the presence of unique MR signals in different brain regions, similar to their cytoarchitectonic and myeloarchitectonic fingerprints.
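The idea of mapping a voxel-intrinsic multi-spectral signature to a brain region can be illustrated with a nearest-centroid classifier on synthetic feature vectors; the regions, feature channels, and classifier below are illustrative stand-ins for the paper's 97 regions and neural networks:

```python
import numpy as np

rng = np.random.default_rng(42)

def nearest_centroid(x, centroids):
    """Assign each feature vector to the class with the closest centroid."""
    dists = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# Three synthetic "regions", each with a distinct multi-contrast signature
# (imagine R2*, QSM, diffusion, and CEST channels collapsed to 4 features).
centroids = np.array([[0.0, 0.0, 0.0, 0.0],
                      [5.0, 5.0, 0.0, 0.0],
                      [0.0, 0.0, 5.0, 5.0]])
labels = rng.integers(0, 3, size=200)                      # true region per voxel
voxels = centroids[labels] + 0.3 * rng.standard_normal((200, 4))  # noisy signatures

pred = nearest_centroid(voxels, centroids)
accuracy = (pred == labels).mean()  # near-perfect on well-separated signatures
```

The paper's point is that real regional signatures overlap far more than these toy clusters, which is why a 60% accuracy over ~97 classes is already evidence of voxel-intrinsic fingerprints.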
Collapse
|
136
|
Hann E, Popescu IA, Zhang Q, Gonzales RA, Barutçu A, Neubauer S, Ferreira VM, Piechnik SK. Deep neural network ensemble for on-the-fly quality control-driven segmentation of cardiac MRI T1 mapping. Med Image Anal 2021; 71:102029. [PMID: 33831594 PMCID: PMC8204226 DOI: 10.1016/j.media.2021.102029] [Citation(s) in RCA: 49] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2019] [Revised: 02/22/2021] [Accepted: 03/01/2021] [Indexed: 02/07/2023]
Abstract
- Quality control-driven framework for cardiac segmentation and quality control.
- Exploiting variability within a deep neural network ensemble to estimate uncertainty.
- Novel on-the-fly selection mechanism for the final optimal segmentation.
- Accurate, reliable, and fully automated analysis of T1 maps with visualization.
- Highlighting a potential flaw of the Pearson correlation for evaluating quality scores.
Recent developments in artificial intelligence have generated increasing interest in deploying automated image analysis for diagnostic imaging and large-scale clinical applications. However, inaccuracy from automated methods could lead to incorrect conclusions, diagnoses, or even harm to patients. Manual inspection for potential inaccuracies is labor-intensive and time-consuming, hampering progress towards fast and accurate clinical reporting at high volumes. To promote reliable fully-automated image analysis, we propose a quality control-driven (QCD) segmentation framework. It is an ensemble of neural networks that integrates image analysis and quality control. The novelty of this framework is the on-the-fly selection of the optimal segmentation based on predicted segmentation accuracy. Additionally, the framework visualizes segmentation agreement to provide traceability of the quality control process. In this work, we demonstrated the utility of the framework in cardiovascular magnetic resonance T1-mapping - a quantitative technique for myocardial tissue characterization. The framework achieved near-perfect agreement with expert image analysts in estimating myocardial T1 values (r=0.987, p<.0005; mean absolute error (MAE)=11.3 ms), with accurate segmentation quality prediction (Dice coefficient prediction MAE=0.0339) and classification (accuracy=0.99), and a fast average processing time of 0.39 seconds per image. In summary, the QCD framework can generate high-throughput automated image analysis with speed and accuracy highly desirable for large-scale clinical applications.
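The on-the-fly selection step can be sketched as follows: an ensemble produces candidate masks, a quality estimate is attached to each, and the candidate with the best estimate is returned. In this toy version, inter-candidate Dice agreement stands in for the network-predicted quality score - an assumption for illustration, not the paper's predictor:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = np.zeros((32, 32), dtype=bool)
truth[8:24, 8:24] = True

def dice(a, b):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Ensemble of candidate segmentations of varying quality
candidates = []
for noise in (0.02, 0.10, 0.25):
    flip = rng.random(truth.shape) < noise
    candidates.append(np.logical_xor(truth, flip))

# The real framework *predicts* each candidate's quality with a network;
# here inter-candidate agreement stands in for that predicted score.
scores = [np.mean([dice(c, o) for o in candidates if o is not c])
          for c in candidates]
best = candidates[int(np.argmax(scores))]
print(f"selected candidate's Dice vs. ground truth: {dice(best, truth):.3f}")
```

Selecting by estimated quality rather than simple averaging is what lets the framework flag and avoid its own failure cases.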
Collapse
Affiliation(s)
- Evan Hann
- Oxford University Centre for Clinical Magnetic Resonance Research (OCMR), Level 0, John Radcliffe Hospital, Headington, Oxford OX3 9DU, United Kingdom.
| | - Iulia A Popescu
- Oxford University Centre for Clinical Magnetic Resonance Research (OCMR), Level 0, John Radcliffe Hospital, Headington, Oxford OX3 9DU, United Kingdom
| | - Qiang Zhang
- Oxford University Centre for Clinical Magnetic Resonance Research (OCMR), Level 0, John Radcliffe Hospital, Headington, Oxford OX3 9DU, United Kingdom
| | - Ricardo A Gonzales
- Oxford University Centre for Clinical Magnetic Resonance Research (OCMR), Level 0, John Radcliffe Hospital, Headington, Oxford OX3 9DU, United Kingdom
| | - Ahmet Barutçu
- Oxford University Centre for Clinical Magnetic Resonance Research (OCMR), Level 0, John Radcliffe Hospital, Headington, Oxford OX3 9DU, United Kingdom; Çanakkale Onsekiz Mart University, Barbaros, 17100 Kepez/Çanakkale Merkez/Çanakkale, Turkey
| | - Stefan Neubauer
- Oxford University Centre for Clinical Magnetic Resonance Research (OCMR), Level 0, John Radcliffe Hospital, Headington, Oxford OX3 9DU, United Kingdom
| | - Vanessa M Ferreira
- Oxford University Centre for Clinical Magnetic Resonance Research (OCMR), Level 0, John Radcliffe Hospital, Headington, Oxford OX3 9DU, United Kingdom
| | - Stefan K Piechnik
- Oxford University Centre for Clinical Magnetic Resonance Research (OCMR), Level 0, John Radcliffe Hospital, Headington, Oxford OX3 9DU, United Kingdom
| |
Collapse
|
137
|
Rocchi F, Oya H, Balezeau F, Billig AJ, Kocsis Z, Jenison RL, Nourski KV, Kovach CK, Steinschneider M, Kikuchi Y, Rhone AE, Dlouhy BJ, Kawasaki H, Adolphs R, Greenlee JDW, Griffiths TD, Howard MA, Petkov CI. Common fronto-temporal effective connectivity in humans and monkeys. Neuron 2021; 109:852-868.e8. [PMID: 33482086 PMCID: PMC7927917 DOI: 10.1016/j.neuron.2020.12.026] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2020] [Revised: 10/02/2020] [Accepted: 12/30/2020] [Indexed: 01/24/2023]
Abstract
Human brain pathways supporting language and declarative memory are thought to have differentiated substantially during evolution. However, cross-species comparisons of site-specific effective connectivity between regions important for cognition are missing. We harnessed functional imaging to visualize the effects of direct electrical brain stimulation in macaque monkeys and human neurosurgery patients. We discovered comparable effective connectivity between caudal auditory cortex and both ventro-lateral prefrontal cortex (VLPFC, including area 44) and parahippocampal cortex in both species. Human-specific differences were clearest in the form of stronger hemispheric lateralization effects. In humans, electrical tractography revealed remarkably rapid evoked potentials in VLPFC following auditory cortex stimulation, and speech sounds drove VLPFC responses, consistent with prior evidence in monkeys of direct auditory cortex projections to homologous vocalization-responsive regions. The results identify a common effective connectivity signature in human and nonhuman primates, which from auditory cortex appears equally direct to VLPFC and indirect to the hippocampus. VIDEO ABSTRACT.
Collapse
Affiliation(s)
- Francesca Rocchi
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK.
| | - Hiroyuki Oya
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA.
| | - Fabien Balezeau
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
| | | | - Zsuzsanna Kocsis
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK; Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
| | - Rick L Jenison
- Department of Neuroscience, University of Wisconsin - Madison, Madison, WI, USA
| | - Kirill V Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA
| | | | - Mitchell Steinschneider
- Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
| | - Yukiko Kikuchi
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
| | - Ariane E Rhone
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
| | - Brian J Dlouhy
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA
| | - Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
| | - Ralph Adolphs
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
| | - Jeremy D W Greenlee
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA
| | - Timothy D Griffiths
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK; Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Wellcome Centre for Human Neuroimaging, University College London, London, UK
| | - Matthew A Howard
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA; Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA, USA
| | - Christopher I Petkov
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK.
| |
Collapse
|
138
|
Li J, Udupa JK, Tong Y, Odhner D, Torigian DA. Anatomy Recognition in CT Images of Head & Neck Region via Precision Atlases. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2021; 11596:1159633. [PMID: 34887608 PMCID: PMC8653545 DOI: 10.1117/12.2581234] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Multi-atlas segmentation methods benefit from atlases covering the complete spectrum of population patterns, but the difficulty of generating sufficiently large datasets and the computational burden of the segmentation procedure reduce their practicality in clinical applications. In this work, we start from the viewpoint that different parts of the target object can be recognized by different atlases and propose a precision atlas selection strategy. By comparing regional similarity between the target image and the atlases, precision atlases are ranked and selected by their frequency of regional best match; they need not be globally similar to the target subject at either image level or object level, largely increasing the implicit patterns contained in the atlas set. In the proposed anatomy recognition method, atlas building is first achieved by all-to-template registration, where a minimum spanning tree (MST) strategy is used to select a registration template from a subset of radiologically near-normal images. Then, a two-stage recognition process is conducted: in rough recognition, sub-image-level similarity is calculated between the test image and each image of the whole atlas set, and only the atlas with the highest similarity contributes to the recognition map regionally; in refined recognition, the atlases with the highest frequencies of best match are selected as the precision atlases and are utilized to further increase the accuracy of boundary matching. The proposed method is demonstrated on 298 computed tomography (CT) images and 9 organs in the Head & Neck (H&N) body region. Experimental results illustrate that our method is effective for organs posing different segmentation challenges and for samples of varying image quality; refined recognition yields remarkable improvement in boundary interpretation, and most objects achieve a localization error within 2 voxels.
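The "frequency of regional best match" ranking can be sketched on synthetic patch features; the real method compares registered image regions, so the data, dimensions, and similarity measure below are stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)
n_atlases, n_patches, patch_size = 6, 16, 25

# Synthetic patch features for the target image and each atlas
target = rng.normal(size=(n_patches, patch_size))
atlases = rng.normal(size=(n_atlases, n_patches, patch_size))

# Rough recognition: per region (patch), find the single best-matching atlas
ssd = ((atlases - target[None]) ** 2).sum(axis=2)   # (n_atlases, n_patches)
best_per_patch = ssd.argmin(axis=0)

# Precision atlases: ranked by frequency of regional best match
freq = np.bincount(best_per_patch, minlength=n_atlases)
precision_atlases = np.argsort(freq)[::-1][:3]
print("best-match counts per atlas:", freq)
print("selected precision atlases:", precision_atlases)
```

The point of the ranking is that an atlas can win many regional matches, and hence be selected, without being the globally most similar image.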
Collapse
Affiliation(s)
- Jieyu Li
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania
| | - Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania
| | - Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania
| | - Dewey Odhner
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania
| | - Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania
| |
Collapse
|
139
|
Ekström S, Pilia M, Kullberg J, Ahlström H, Strand R, Malmberg F. Faster dense deformable image registration by utilizing both CPU and GPU. J Med Imaging (Bellingham) 2021; 8:014002. [PMID: 33542943 PMCID: PMC7849043 DOI: 10.1117/1.jmi.8.1.014002] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2020] [Accepted: 12/31/2020] [Indexed: 11/14/2022] Open
Abstract
Purpose: Image registration is an important aspect of medical image analysis and a key component in many analysis concepts. Applications include fusion of multimodal images, multi-atlas segmentation, and whole-body analysis. Deformable image registration is often computationally expensive, and the need for efficient registration methods is highlighted by the emergence of large-scale image databases, e.g., the UK Biobank, providing imaging from 100,000 participants. Approach: We present a heterogeneous computing approach, utilizing both the CPU and the graphics processing unit (GPU), to accelerate a previously proposed image registration method. The parallelizable task of computing the matching criterion is offloaded to the GPU, where it can be computed efficiently, while the more complex optimization task is performed on the CPU. To lessen the impact of data synchronization between the CPU and GPU, we propose a pipeline model, effectively overlapping computational tasks with data synchronization. The performance is evaluated on a brain labeling task and compared with a CPU implementation of the same method and the popular advanced normalization tools (ANTs) software. Results: The proposed method achieves speed-ups by factors of 4 and 8 over the CPU implementation and the ANTs software, respectively. A significant improvement in labeling quality was also observed, with measured mean Dice overlaps of 0.712 and 0.701 for our method and ANTs, respectively. Conclusions: We showed that the proposed method compares favorably to the ANTs software, yielding both a significant speed-up and an improvement in labeling quality. The registration method together with the proposed parallelization strategy is implemented as an open-source software package, deform.
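The pipeline idea - overlapping the CPU-side optimization with the GPU-side criterion computation - can be sketched with two Python threads and queues. This is a stand-in for the actual CUDA implementation; the step cost and the squaring "criterion" are purely illustrative:

```python
import threading
import queue
import time

# Pipeline sketch: one worker thread stands in for the GPU computing the
# matching criterion, while the main thread stands in for the CPU optimizer.
# Enqueueing the next step before consuming the previous result overlaps
# the two stages and hides transfer/compute latency.
tasks, results = queue.Queue(), queue.Queue()

def gpu_worker():
    while True:
        item = tasks.get()
        if item is None:          # sentinel: shut the pipeline down
            break
        time.sleep(0.01)          # stand-in for GPU criterion computation
        results.put(item * item)  # "matching criterion" for this step

t = threading.Thread(target=gpu_worker)
t.start()

tasks.put(0)                      # prime the pipeline
costs = []
for step in range(1, 5):
    tasks.put(step)               # enqueue next step before waiting
    costs.append(results.get())   # consume the previous step's criterion
tasks.put(None)
costs.append(results.get())       # drain the final in-flight result
t.join()
print(costs)
```

Because the worker is fed one step ahead, criterion computation for step k+1 proceeds while the "optimizer" digests the result of step k - the same overlap the paper exploits between CPU and GPU.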
Collapse
Affiliation(s)
- Simon Ekström
- Uppsala University, Section of Radiology, Department of Surgical Sciences, Uppsala, Sweden.,Antaros Medical, Mölndal, Sweden
| | - Martino Pilia
- Uppsala University, Section of Radiology, Department of Surgical Sciences, Uppsala, Sweden
| | - Joel Kullberg
- Uppsala University, Section of Radiology, Department of Surgical Sciences, Uppsala, Sweden.,Antaros Medical, Mölndal, Sweden
| | - Håkan Ahlström
- Uppsala University, Section of Radiology, Department of Surgical Sciences, Uppsala, Sweden.,Antaros Medical, Mölndal, Sweden
| | - Robin Strand
- Uppsala University, Section of Radiology, Department of Surgical Sciences, Uppsala, Sweden.,Uppsala University, Centre for Image Analysis, Division of Visual Information and Interaction, Department of Information Technology, Uppsala, Sweden
| | - Filip Malmberg
- Uppsala University, Section of Radiology, Department of Surgical Sciences, Uppsala, Sweden.,Uppsala University, Centre for Image Analysis, Division of Visual Information and Interaction, Department of Information Technology, Uppsala, Sweden
| |
Collapse
|
140
|
Carmo D, Silva B, Yasuda C, Rittner L, Lotufo R. Hippocampus segmentation on epilepsy and Alzheimer's disease studies with multiple convolutional neural networks. Heliyon 2021; 7:e06226. [PMID: 33659748 PMCID: PMC7892928 DOI: 10.1016/j.heliyon.2021.e06226] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2020] [Revised: 12/06/2020] [Accepted: 02/03/2021] [Indexed: 12/26/2022] Open
Abstract
Background: Hippocampus segmentation on magnetic resonance imaging is of key importance for the diagnosis, treatment decisions, and investigation of neuropsychiatric disorders. Automatic segmentation is an active research field, with many recent models using deep learning. Most current state-of-the-art hippocampus segmentation methods are trained on healthy or Alzheimer's disease patients from public datasets. This raises the question of whether these methods are capable of recognizing the hippocampus in a different domain, that of epilepsy patients with hippocampus resection. New Method: In this paper we present a state-of-the-art, open source, ready-to-use, deep learning based hippocampus segmentation method. It uses an extended 2D multi-orientation approach, with automatic pre-processing and orientation alignment. The methodology was developed and validated using HarP, a public Alzheimer's disease hippocampus segmentation dataset. Results and Comparisons: We test this methodology alongside other recent deep learning methods in two domains: the HarP test set and an in-house epilepsy dataset containing hippocampus resections, named HCUnicamp. We show that our method, while trained only on HarP, surpasses other methods from the literature in Dice score on both the HarP test set and HCUnicamp. Additionally, results from training and testing on HCUnicamp volumes are reported separately, alongside comparisons between training on epilepsy data and testing on Alzheimer's data and vice versa. Conclusion: Although current state-of-the-art methods, including our own, achieve upwards of 0.9 Dice on HarP, all tested methods produced false positives in HCUnicamp resection regions, showing that there is still room for improvement for hippocampus segmentation methods when resection is involved.
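The multi-orientation consensus idea can be sketched as averaging per-orientation probability maps before thresholding. The noisy stand-in predictions below are illustrative, not the paper's networks:

```python
import numpy as np

rng = np.random.default_rng(3)
truth = np.zeros((16, 16, 16), dtype=bool)
truth[4:12, 4:12, 4:12] = True

def orientation_prediction(mask, noise):
    """Stand-in for one per-orientation CNN: truth plus noise, as probabilities."""
    p = mask.astype(float) + noise * rng.normal(size=mask.shape)
    return np.clip(p, 0.0, 1.0)

# One probability map per orientation (sagittal, coronal, axial)
probs = [orientation_prediction(truth, 0.3) for _ in range(3)]

# Consensus: average the per-orientation maps, then threshold
consensus = np.mean(probs, axis=0) > 0.5

inter = np.logical_and(consensus, truth).sum()
dice = 2.0 * inter / (consensus.sum() + truth.sum())
print(f"consensus Dice vs. ground truth: {dice:.3f}")
```

Averaging before thresholding suppresses errors that individual orientations make independently, which is the motivation for the 2D multi-orientation design.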
Collapse
Affiliation(s)
- Diedre Carmo
- School of Electrical and Computer Engineering, UNICAMP, Campinas, São Paulo, Brazil
| | - Bruna Silva
- Faculty of Medical Sciences, UNICAMP, Campinas, São Paulo, Brazil
| | | | - Clarissa Yasuda
- Faculty of Medical Sciences, UNICAMP, Campinas, São Paulo, Brazil
| | - Letícia Rittner
- School of Electrical and Computer Engineering, UNICAMP, Campinas, São Paulo, Brazil
| | - Roberto Lotufo
- School of Electrical and Computer Engineering, UNICAMP, Campinas, São Paulo, Brazil
| |
Collapse
|
141
|
Jeong H, Ntolkeras G, Alhilani M, Atefi SR, Zöllei L, Fujimoto K, Pourvaziri A, Lev MH, Grant PE, Bonmassar G. Development, validation, and pilot MRI safety study of a high-resolution, open source, whole body pediatric numerical simulation model. PLoS One 2021; 16:e0241682. [PMID: 33439896 PMCID: PMC7806143 DOI: 10.1371/journal.pone.0241682] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2020] [Accepted: 10/19/2020] [Indexed: 11/30/2022] Open
Abstract
Numerical body models of children are used for designing medical devices, including but not limited to optical imaging, ultrasound, CT, EEG/MEG, and MRI. These models are used in many clinical and neuroscience research applications, such as radiation safety dosimetric studies and source localization. Although several such adult models have been reported, there are few reports of full-body pediatric models, and those described have several limitations: some, for example, are either morphed from older children or do not have detailed segmentations. Here, we introduce a 29-month-old male whole-body native numerical model, "MARTIN", that includes 28 head and 86 body tissue compartments, segmented directly from high-spatial-resolution MRI and CT images. An advanced auto-segmentation tool was used for the deep-brain structures, whereas 3D Slicer was used to segment the non-brain structures and to refine the segmentation of all tissue compartments. Our MARTIN model was developed and validated using three separate approaches, through an iterative process, as follows. First, the calculated volumes, weights, and dimensions of selected structures were adjusted and confirmed to be within 6% of the literature values for the 2-3-year-old age range. Second, all structural segmentations were adjusted and confirmed by two experienced, sub-specialty certified neuro-radiologists, also through an iterative process. Third, an additional validation was performed with a Bloch simulator to create a synthetic MR image from our MARTIN model and compare the image contrast of the resulting synthetic image with that of the original MRI data; this resulted in a "structural resemblance" index of 0.97. Finally, we used our model to perform pilot MRI safety simulations of an Active Implantable Medical Device (AIMD) using a commercially available software platform (Sim4Life), incorporating the latest International Standards Organization guidelines.
This model will be made available on the Athinoula A. Martinos Center for Biomedical Imaging website.
Collapse
Affiliation(s)
- Hongbae Jeong
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
| | - Georgios Ntolkeras
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children’s Hospital, Harvard Medical School, Boston, MA, United States of America
| | - Michel Alhilani
- Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children’s Hospital, Harvard Medical School, Boston, MA, United States of America
- Department of Medicine, Charing Cross Hospital, Imperial College Healthcare NHS Trust, London, United Kingdom
| | - Seyed Reza Atefi
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
| | - Lilla Zöllei
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
| | - Kyoko Fujimoto
- Center for Devices and Radiological Health, U. S. Food and Drug Administration, Silver Spring, MD, United States of America
| | - Ali Pourvaziri
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
| | - Michael H. Lev
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
| | - P. Ellen Grant
- Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children’s Hospital, Harvard Medical School, Boston, MA, United States of America
| | - Giorgio Bonmassar
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States of America
| |
Collapse
|
142
|
Dorent R, Booth T, Li W, Sudre CH, Kafiabadi S, Cardoso J, Ourselin S, Vercauteren T. Learning joint segmentation of tissues and brain lesions from task-specific hetero-modal domain-shifted datasets. Med Image Anal 2021; 67:101862. [PMID: 33129151 PMCID: PMC7116853 DOI: 10.1016/j.media.2020.101862] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2019] [Revised: 09/09/2020] [Accepted: 09/25/2020] [Indexed: 12/14/2022]
Abstract
Brain tissue segmentation from multimodal MRI is a key building block of many neuroimaging analysis pipelines. Established tissue segmentation approaches have, however, not been developed to cope with large anatomical changes resulting from pathology, such as white matter lesions or tumours, and often fail in these cases. In the meantime, with the advent of deep neural networks (DNNs), segmentation of brain lesions has matured significantly. However, few existing approaches allow for the joint segmentation of normal tissue and brain lesions. Developing a DNN for such a joint task is currently hampered by the fact that annotated datasets typically address only one specific task and rely on task-specific imaging protocols including a task-specific set of imaging modalities. In this work, we propose a novel approach to build a joint tissue and lesion segmentation model from aggregated task-specific hetero-modal domain-shifted and partially-annotated datasets. Starting from a variational formulation of the joint problem, we show how the expected risk can be decomposed and optimised empirically. We exploit an upper bound of the risk to deal with heterogeneous imaging modalities across datasets. To deal with potential domain shift, we integrated and tested three conventional techniques based on data augmentation, adversarial learning and pseudo-healthy generation. For each individual task, our joint approach reaches comparable performance to task-specific and fully-supervised models. The proposed framework is assessed on two different types of brain lesions: white matter lesions and gliomas. In the latter case, lacking a joint ground-truth for quantitative assessment purposes, we propose and use a novel clinically-relevant qualitative assessment methodology.
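One common building block for learning from task-specific, partially annotated datasets is a loss restricted to the label classes each dataset actually provides. This is a simplified stand-in for illustration, not the paper's variational formulation; the class layout and probabilities are invented:

```python
import numpy as np

def masked_nll(probs, labels, annotated):
    """Negative log-likelihood restricted to voxels whose ground-truth
    class is annotated in this dataset; other voxels contribute nothing."""
    mask = np.isin(labels, annotated)
    p = probs[np.arange(labels.size), labels]
    return -np.log(p[mask] + 1e-12).mean()

# Toy setup: 4 classes (background, GM, WM, lesion) over 6 voxels
probs = np.full((6, 4), 0.1)
labels = np.array([0, 1, 2, 3, 3, 1])
probs[np.arange(6), labels] = 0.7   # confident, correct predictions

# A tissue dataset annotates classes {0, 1, 2}; a lesion dataset {0, 3}
loss_tissue = masked_nll(probs, labels, annotated=[0, 1, 2])
loss_lesion = masked_nll(probs, labels, annotated=[0, 3])
print(f"tissue loss: {loss_tissue:.3f}, lesion loss: {loss_lesion:.3f}")
```

Because each dataset only penalises the classes it labels, a single joint model can be trained on their aggregate without requiring any one dataset to annotate everything.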
Collapse
Affiliation(s)
- Reuben Dorent
- King's College London, School of Biomedical Engineering & Imaging Sciences, St. Thomas' Hospital, London, United Kingdom.
| | - Thomas Booth
- King's College London, School of Biomedical Engineering & Imaging Sciences, St. Thomas' Hospital, London, United Kingdom; Department of Neuroradiology, King's College Hospital NHS Foundation Trust, London, United Kingdom
| | - Wenqi Li
- King's College London, School of Biomedical Engineering & Imaging Sciences, St. Thomas' Hospital, London, United Kingdom; NVIDIA, Cambridge, United Kingdom
| | - Carole H Sudre
- King's College London, School of Biomedical Engineering & Imaging Sciences, St. Thomas' Hospital, London, United Kingdom; Dementia Research Centre, UCL Institute of Neurology, UCL, London, United Kingdom; Department of Medical Physics, UCL, London, United Kingdom
| | - Sina Kafiabadi
- Department of Neuroradiology, King's College Hospital NHS Foundation Trust, London, United Kingdom
| | - Jorge Cardoso
- King's College London, School of Biomedical Engineering & Imaging Sciences, St. Thomas' Hospital, London, United Kingdom
| | - Sebastien Ourselin
- King's College London, School of Biomedical Engineering & Imaging Sciences, St. Thomas' Hospital, London, United Kingdom
| | - Tom Vercauteren
- King's College London, School of Biomedical Engineering & Imaging Sciences, St. Thomas' Hospital, London, United Kingdom
| |
Collapse
|
143
|
Singh MK, Singh KK. A Review of Publicly Available Automatic Brain Segmentation Methodologies, Machine Learning Models, Recent Advancements, and Their Comparison. Ann Neurosci 2021; 28:82-93. [PMID: 34733059 PMCID: PMC8558983 DOI: 10.1177/0972753121990175] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2020] [Accepted: 01/04/2021] [Indexed: 01/20/2023] Open
Abstract
BACKGROUND The noninvasive study of the structure and functions of the brain using neuroimaging techniques is increasingly used for both clinical and research purposes. Morphological and volumetric changes in several regions and structures of the brain are associated with the prognosis of neurological disorders such as Alzheimer's disease, epilepsy, and schizophrenia, and the early identification of such changes can have huge clinical significance. The accurate segmentation of three-dimensional brain magnetic resonance images into tissue types (i.e., grey matter, white matter, cerebrospinal fluid) and brain structures is thus of huge importance, as these measures can act as early biomarkers. Manual segmentation, though considered the "gold standard", is time-consuming, subjective, and not suitable for larger neuroimaging studies. Several automatic segmentation tools and algorithms have been developed over the years; machine learning models, particularly those using deep convolutional neural network (CNN) architectures, are increasingly being applied to improve the accuracy of automatic methods. PURPOSE The purpose of this study is to understand the current and emerging state of automatic segmentation tools, their comparison, machine learning models, their reliability, and their shortcomings, with an intent to focus on the development of improved methods and algorithms. METHODS The study reviews publicly available neuroimaging tools, their comparison, and emerging machine learning models, particularly those based on CNN architectures, developed and published during the last five years. CONCLUSION Several software tools developed by various research groups and made publicly available for automatic segmentation of the brain show variability in their results in several comparison studies and have not attained the level of reliability required for clinical studies. Machine learning models, particularly three-dimensional fully convolutional network models, can provide a robust and efficient alternative relative to publicly available tools but perform poorly on unseen datasets. The challenges related to training, computation cost, reproducibility, and validation across distinct scanning modalities for machine learning models need to be addressed.
Collapse
Affiliation(s)
| | - Krishna Kumar Singh
- Symbiosis Centre for Information Technology, Hinjawadi, Pune, Maharashtra, India
| |
Collapse
|
144
|
Cao R, Pei X, Ge N, Zheng C. Clinical Target Volume Auto-Segmentation of Esophageal Cancer for Radiotherapy After Radical Surgery Based on Deep Learning. Technol Cancer Res Treat 2021; 20:15330338211034284. [PMID: 34387104 PMCID: PMC8366129 DOI: 10.1177/15330338211034284] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
Radiotherapy plays an important role in controlling the local recurrence of esophageal cancer after radical surgery. Segmentation of the clinical target volume is a key step in radiotherapy treatment planning, but it is time-consuming and operator-dependent. This paper introduces a deep dilated convolutional U-network to achieve fast and accurate clinical target volume auto-segmentation of esophageal cancer after radical surgery. The deep dilated convolutional U-network, which integrates the advantages of dilated convolution and the U-network, is an end-to-end architecture that enables rapid training and testing. A dilated convolution module for extracting multiscale context features containing the original information on fine texture and boundaries is integrated into the U-network architecture to avoid the information loss due to down-sampling and improve the segmentation accuracy. In addition, batch normalization is added to the deep dilated convolutional U-network for fast and stable convergence. In the present study, the training and validation loss tended to be stable after 40 training epochs. This deep dilated convolutional U-network model was able to segment the clinical target volume with an overall mean Dice similarity coefficient of 86.7% and a 95% Hausdorff distance of 37.4 mm, indicating reasonable volume overlap of the auto-segmented and manual contours. The mean Cohen kappa coefficient was 0.863, indicating that the deep dilated convolutional U-network was robust. Comparisons with the U-network and attention U-network showed that the deep dilated convolutional U-network performed best overall on the Dice similarity coefficient, 95% Hausdorff distance, and Cohen kappa coefficient. The test time for segmentation of the clinical target volume was approximately 25 seconds per patient. This deep dilated convolutional U-network could be applied in the clinical setting to save delineation time and improve the consistency of contouring.
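The effect of dilation - enlarging the receptive field without extra parameters or down-sampling - can be shown with a naive 1-D dilated convolution (an illustrative sketch, not the paper's 2D module):

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """'Valid' 1-D convolution with a dilated kernel: taps are spaced
    `dilation` samples apart, enlarging the receptive field without
    adding parameters or down-sampling."""
    k = len(w)
    span = (k - 1) * dilation + 1          # effective receptive field
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(w[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(10, dtype=float)
w = np.array([1.0, 1.0, 1.0])
y1 = dilated_conv1d(x, w, dilation=1)      # receptive field of 3 samples
y2 = dilated_conv1d(x, w, dilation=2)      # same 3 weights, field of 5
print(y1)
print(y2)
```

The same three weights cover a wider context at dilation 2, which is why stacking dilated layers captures multiscale context while preserving resolution.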
Affiliation(s)
- Ruifen Cao
- College of Computer Science and Technology, Anhui University, Hefei, Anhui, China
- Engineering Research Center of Big Data Application in Private Health Medicine, Fujian Province University, Putian, Fujian, China
| | - Xi Pei
- University of Science and Technology of China, Hefei, Anhui, China
| | - Ning Ge
- The First Affiliated Hospital of USTC West District, Anhui Provincial Cancer Hospital, Hefei, Anhui, China
| | - Chunhou Zheng
- College of Computer Science and Technology, Anhui University, Hefei, Anhui, China
- Engineering Research Center of Big Data Application in Private Health Medicine, Fujian Province University, Putian, Fujian, China
| |
|
145
|
Bogovic JA, Otsuna H, Heinrich L, Ito M, Jeter J, Meissner G, Nern A, Colonell J, Malkesman O, Ito K, Saalfeld S. An unbiased template of the Drosophila brain and ventral nerve cord. PLoS One 2020; 15:e0236495. [PMID: 33382698 PMCID: PMC7774840 DOI: 10.1371/journal.pone.0236495] [Citation(s) in RCA: 51] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2019] [Accepted: 07/07/2020] [Indexed: 12/03/2022] Open
Abstract
The fruit fly Drosophila melanogaster is an important model organism for neuroscience, with a wide array of genetic tools that enable the mapping of individual neurons and neural subtypes. Brain templates are essential for comparative biological studies because they enable the analysis of many individuals in a common reference space. Several central brain templates exist for Drosophila, but each is either biased, uses sub-optimal tissue preparation, is imaged at low resolution, or does not account for artifacts. No publicly available Drosophila ventral nerve cord template currently exists. In this work, we created high-resolution templates of the Drosophila brain and ventral nerve cord using the best available technologies for imaging, artifact correction, stitching, and template construction via groupwise registration. We evaluated our central brain template against the four most competitive publicly available brain templates and demonstrated that ours enables more accurate registration with fewer local deformations in less time.
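The groupwise-registration template construction mentioned here follows a familiar iterate-and-average loop: align every subject to the current template, re-average, repeat. A toy 1D sketch, with integer circular shifts standing in for the real deformable registration (not the authors' pipeline):

```python
def best_shift(signal, template, max_shift=3):
    """Integer circular shift of `signal` minimizing squared difference to `template`."""
    n = len(template)
    def ssd(s):
        return sum((signal[(i + s) % n] - template[i]) ** 2 for i in range(n))
    return min(range(-max_shift, max_shift + 1), key=ssd)

def groupwise_template(signals, iters=3):
    """Iteratively align all signals to their mean and re-average (toy groupwise registration)."""
    n = len(signals[0])
    template = [sum(col) / len(signals) for col in zip(*signals)]
    for _ in range(iters):
        aligned = [[sig[(i + best_shift(sig, template)) % n] for i in range(n)]
                   for sig in signals]
        template = [sum(col) / len(aligned) for col in zip(*aligned)]
    return template
```

Shifted copies of one pattern converge to that shared pattern, which is the sense in which a groupwise template is "unbiased": no single subject is privileged as the reference.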
Affiliation(s)
- John A. Bogovic
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America
| | - Hideo Otsuna
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America
| | - Larissa Heinrich
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America
| | - Masayoshi Ito
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America
| | - Jennifer Jeter
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America
| | - Geoffrey Meissner
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America
| | - Aljoscha Nern
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America
| | - Jennifer Colonell
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America
| | - Oz Malkesman
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America
| | - Kei Ito
- Institute of Zoology, University of Cologne, Germany
| | - Stephan Saalfeld
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, Virginia, United States of America
| |
|
146
|
Lee M, Kim J, Kim RE, Kim HG, Oh SW, Lee MK, Wang SM, Kim NY, Kang DW, Rieu Z, Yong JH, Kim D, Lim HK. Split-Attention U-Net: A Fully Convolutional Network for Robust Multi-Label Segmentation from Brain MRI. Brain Sci 2020; 10:E974. [PMID: 33322640 PMCID: PMC7764312 DOI: 10.3390/brainsci10120974] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2020] [Revised: 11/30/2020] [Accepted: 12/07/2020] [Indexed: 02/03/2023] Open
Abstract
Multi-label brain segmentation from brain magnetic resonance imaging (MRI) provides valuable structural information for most neurological analyses, but the complexity of brain segmentation algorithms can delay the delivery of neuroimaging findings. We therefore introduce Split-Attention U-Net (SAU-Net), a convolutional neural network with skip pathways and a split-attention module for segmenting brain MRI scans. The proposed architecture employs split-attention blocks, skip pathways with pyramid levels, and evolving normalization layers. For efficient training, we performed pre-training and fine-tuning with the original and manually modified FreeSurfer labels, respectively. This learning strategy allows heterogeneous neuroimaging data to be included in training without requiring many manual annotations. Using nine evaluation datasets, we demonstrated that SAU-Net achieved segmentation accuracy and reliability surpassing those of state-of-the-art methods. We believe SAU-Net has excellent potential: its robustness to neuroanatomical variability and its swift runtime compared with the other methods investigated would enable almost instantaneous access to accurate neuroimaging biomarkers.
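The split-attention idea recombines several feature "splits" with softmax-normalized gate weights. A minimal 1D sketch, in which the gate is a single hypothetical weight per split (far simpler than the real block, and not the SAU-Net code):

```python
from math import exp

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [exp(x - m) for x in xs]
    total = sum(es)
    return [e / total for e in es]

def split_attention(splits, gate_weights):
    """Recombine feature `splits` using softmax attention over per-split gate scores."""
    pooled = [sum(s) / len(s) for s in splits]            # global average pooling
    scores = [p * w for p, w in zip(pooled, gate_weights)]
    attn = softmax(scores)                                # attention over splits
    n = len(splits[0])
    return [sum(a * s[i] for a, s in zip(attn, splits)) for i in range(n)]
```

Because the attention weights always sum to 1, the output stays on the same scale as the inputs; the gate simply learns which split to emphasize at each location.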
Affiliation(s)
- Minho Lee
- Research Institute, NEUROPHET Inc., Seoul 06247, Korea; (M.L.); (R.E.K.); (Z.R.); (J.H.Y.)
| | - JeeYoung Kim
- Department of Radiology, Eunpyeong St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 03312, Korea; (J.K.); (H.G.K.); (S.W.O.)
| | - Regina EY Kim
- Research Institute, NEUROPHET Inc., Seoul 06247, Korea; (M.L.); (R.E.K.); (Z.R.); (J.H.Y.)
- Institute of Human Genomic Study, College of Medicine, Korea University, Ansan 15355, Korea
- Department of Psychiatry, University of Iowa, Iowa City, IA 52242, USA
| | - Hyun Gi Kim
- Department of Radiology, Eunpyeong St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 03312, Korea; (J.K.); (H.G.K.); (S.W.O.)
| | - Se Won Oh
- Department of Radiology, Eunpyeong St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 03312, Korea; (J.K.); (H.G.K.); (S.W.O.)
| | - Min Kyoung Lee
- Department of Radiology, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 07345, Korea;
| | - Sheng-Min Wang
- Department of Psychiatry, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 07345, Korea; (S.-M.W.); (N.-Y.K.)
| | - Nak-Young Kim
- Department of Psychiatry, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 07345, Korea; (S.-M.W.); (N.-Y.K.)
| | - Dong Woo Kang
- Department of Psychiatry, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea;
| | - ZunHyan Rieu
- Research Institute, NEUROPHET Inc., Seoul 06247, Korea; (M.L.); (R.E.K.); (Z.R.); (J.H.Y.)
| | - Jung Hyun Yong
- Research Institute, NEUROPHET Inc., Seoul 06247, Korea; (M.L.); (R.E.K.); (Z.R.); (J.H.Y.)
| | - Donghyeon Kim
- Research Institute, NEUROPHET Inc., Seoul 06247, Korea; (M.L.); (R.E.K.); (Z.R.); (J.H.Y.)
| | - Hyun Kook Lim
- Department of Psychiatry, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 07345, Korea; (S.-M.W.); (N.-Y.K.)
| |
|
147
|
Zhang YD, Dong Z, Wang SH, Yu X, Yao X, Zhou Q, Hu H, Li M, Jiménez-Mesa C, Ramirez J, Martinez FJ, Gorriz JM. Advances in multimodal data fusion in neuroimaging: Overview, challenges, and novel orientation. INFORMATION FUSION 2020; 64:149-187. [PMID: 32834795 PMCID: PMC7366126 DOI: 10.1016/j.inffus.2020.07.006] [Citation(s) in RCA: 132] [Impact Index Per Article: 26.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/30/2020] [Revised: 07/06/2020] [Accepted: 07/14/2020] [Indexed: 05/13/2023]
Abstract
Multimodal fusion in neuroimaging combines data from multiple imaging modalities to overcome the fundamental limitations of individual modalities. Neuroimaging fusion can achieve higher temporal and spatial resolution, enhance contrast, correct imaging distortions, and bridge physiological and cognitive information. In this study, we analyzed over 450 references from PubMed, Google Scholar, IEEE, ScienceDirect, Web of Science, and other sources published from 1978 to 2020. We provide a review that encompasses (1) an overview of current challenges in multimodal fusion, (2) the current medical applications of fusion for specific neurological diseases, (3) strengths and limitations of available imaging modalities, (4) fundamental fusion rules, (5) fusion quality assessment methods, and (6) the applications of fusion for atlas-based segmentation and quantification. Overall, multimodal fusion shows significant benefits in clinical diagnosis and neuroscience research. Widespread education and further research amongst engineers, researchers, and clinicians will benefit the field of multimodal neuroimaging.
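Among the fundamental fusion rules such reviews cover, the simplest are pixel-wise maximum selection and weighted averaging. A minimal sketch on images represented as 2D lists (illustrative only, not from the review):

```python
def fuse_max(img_a, img_b):
    """Pixel-wise maximum fusion: keep the stronger response from either modality."""
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(img_a, img_b)]

def fuse_weighted(img_a, img_b, alpha=0.5):
    """Weighted-average fusion: blend modalities with weight `alpha` on the first."""
    return [[alpha * a + (1 - alpha) * b for a, b in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]
```

Maximum selection preserves salient features (e.g., lesions bright in one modality), while weighted averaging trades contrast for noise suppression; more elaborate rules operate in transform domains but reduce to per-coefficient versions of these choices.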
Affiliation(s)
- Yu-Dong Zhang
- School of Informatics, University of Leicester, Leicester, LE1 7RH, Leicestershire, UK
- Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
| | - Zhengchao Dong
- Department of Psychiatry, Columbia University, USA
- New York State Psychiatric Institute, New York, NY 10032, USA
| | - Shui-Hua Wang
- Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- School of Architecture Building and Civil engineering, Loughborough University, Loughborough, LE11 3TU, UK
- School of Mathematics and Actuarial Science, University of Leicester, LE1 7RH, UK
| | - Xiang Yu
- School of Informatics, University of Leicester, Leicester, LE1 7RH, Leicestershire, UK
| | - Xujing Yao
- School of Informatics, University of Leicester, Leicester, LE1 7RH, Leicestershire, UK
| | - Qinghua Zhou
- School of Informatics, University of Leicester, Leicester, LE1 7RH, Leicestershire, UK
| | - Hua Hu
- Department of Psychiatry, Columbia University, USA
- Department of Neurology, The Second Affiliated Hospital of Soochow University, China
| | - Min Li
- Department of Psychiatry, Columbia University, USA
- School of Internet of Things, Hohai University, Changzhou, China
| | - Carmen Jiménez-Mesa
- Department of Signal Theory, Networking and Communications, University of Granada, Granada, Spain
| | - Javier Ramirez
- Department of Signal Theory, Networking and Communications, University of Granada, Granada, Spain
| | - Francisco J Martinez
- Department of Signal Theory, Networking and Communications, University of Granada, Granada, Spain
| | - Juan Manuel Gorriz
- Department of Signal Theory, Networking and Communications, University of Granada, Granada, Spain
- Department of Psychiatry, University of Cambridge, Cambridge CB21TN, UK
| |
|
148
|
Chi W, Ma L, Wu J, Chen M, Lu W, Gu X. Deep learning-based medical image segmentation with limited labels. Phys Med Biol 2020; 65:10.1088/1361-6560/abc363. [PMID: 33086205 PMCID: PMC8058113 DOI: 10.1088/1361-6560/abc363] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2020] [Accepted: 10/21/2020] [Indexed: 12/18/2022]
Abstract
Deep learning (DL)-based auto-segmentation has the potential for accurate organ delineation in radiotherapy applications but requires large amounts of clean labeled data to train a robust model. However, annotating medical images is extremely time-consuming and requires clinical expertise, especially for segmentation, which demands voxel-wise labels. On the other hand, medical images without annotations are abundant and highly accessible. To alleviate the influence of the limited number of clean labels, we propose a weakly supervised DL training approach that uses deformable image registration (DIR)-based annotations, leveraging the abundance of unlabeled data. We generate pseudo-contours by using DIR to propagate atlas contours onto abundant unlabeled images, and train a robust DL-based segmentation model. With 10 labeled cases from the TCIA dataset and 50 unlabeled CT scans from our institution, our model achieved Dice similarity coefficients of 87.9%, 73.4%, 73.4%, 63.2%, and 61.0% on the mandible, left and right parotid glands, and left and right submandibular glands of the TCIA test set, and competitive performance on our institutional clinical dataset and a third-party (PDDCA) dataset. Experimental results demonstrated that the proposed method outperformed traditional multi-atlas DIR methods and fully supervised training on limited data, and is promising for DL-based medical image segmentation applications with limited annotated data.
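The pseudo-contour step rests on propagating atlas labels through a registration-derived displacement field. A toy 1D nearest-neighbour sketch with integer displacements (actual DIR produces dense sub-voxel fields; this is not the authors' code):

```python
def propagate_labels(atlas_labels, displacement):
    """Warp integer atlas labels with a per-voxel integer displacement field.

    Each output voxel i pulls the label from atlas voxel i + displacement[i]
    (nearest-neighbour); out-of-range lookups fall back to background (0).
    """
    n = len(atlas_labels)
    warped = [0] * n
    for i in range(n):
        src = i + displacement[i]
        if 0 <= src < n:
            warped[i] = atlas_labels[src]
    return warped
```

Labels warped this way inherit any registration error, which is why the abstract treats them as noisy pseudo-annotations for weak supervision rather than as ground truth.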
Affiliation(s)
- Weicheng Chi
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- School of Software Engineering, South China University of Technology, Guangzhou, Guangdong 510006, People's Republic of China
| | - Lin Ma
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
| | - Junjie Wu
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
| | - Mingli Chen
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
| | - Weiguo Lu
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
| | - Xuejun Gu
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
| |
|
149
|
Abstract
Segmentation of medical images using multiple atlases has recently gained immense attention due to its robustness against variability across subjects. These atlas-based methods typically comprise three steps: atlas selection, image registration, and finally label fusion. Image registration is one of the core steps in this process, and its accuracy directly affects the final labeling performance. However, due to inter-subject anatomical variations, registration errors are inevitable. The aim of this paper is to develop a deep learning-based confidence estimation method to alleviate the potential effects of registration errors. We first propose a fully convolutional network (FCN) with residual connections to learn the relationship between an image patch pair (i.e., patches from the target subject and the atlas) and the related label confidence patch. With the obtained label confidence patch, we can identify potential errors in the warped atlas labels and correct them. Then, we use two label fusion methods to fuse the corrected atlas labels. The proposed methods are validated on a publicly available dataset for hippocampus segmentation. Experimental results demonstrate that our proposed methods outperform state-of-the-art segmentation methods.
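The baseline label fusion rule in multi-atlas pipelines is per-voxel majority voting over the warped atlas labels. A minimal sketch (illustrative only; the paper proposes more sophisticated fusion than this):

```python
from collections import Counter

def majority_vote(label_maps):
    """Fuse per-voxel labels from multiple warped atlases by majority vote."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*label_maps)]
```

Confidence-weighted schemes like the one in this paper refine exactly this step: instead of counting every atlas equally, each vote is scaled by the estimated registration confidence at that voxel.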
Affiliation(s)
- Hancan Zhu
- School of Mathematics Physics and Information, Shaoxing University, Shaoxing, 312000, China
| | - Ehsan Adeli
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, 94305, CA, USA
| | - Feng Shi
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, 27599, North Carolina, USA.
- Department of Brain and Cognitive Engineering, Korea University, Seoul, 02841, Republic of Korea.
| |
|
150
|
Küstner T, Hepp T, Fischer M, Schwartz M, Fritsche A, Häring HU, Nikolaou K, Bamberg F, Yang B, Schick F, Gatidis S, Machann J. Fully Automated and Standardized Segmentation of Adipose Tissue Compartments via Deep Learning in 3D Whole-Body MRI of Epidemiologic Cohort Studies. Radiol Artif Intell 2020; 2:e200010. [PMID: 33937847 PMCID: PMC8082356 DOI: 10.1148/ryai.2020200010] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2020] [Revised: 06/02/2020] [Accepted: 06/26/2020] [Indexed: 04/28/2023]
Abstract
PURPOSE To enable fast and reliable assessment of subcutaneous and visceral adipose tissue compartments derived from whole-body MRI. MATERIALS AND METHODS Quantification and localization of different adipose tissue compartments derived from whole-body MR images are of high interest in research concerning metabolic conditions. For correct identification and phenotyping of individuals at increased risk for metabolic diseases, a reliable automated segmentation of adipose tissue into subcutaneous and visceral adipose tissue is required. In this work, a three-dimensional (3D) densely connected convolutional neural network (DCNet) is proposed to provide robust and objective segmentation. In this retrospective study, 1000 cases (average age, 66 years ± 13 [standard deviation]; 523 women) from the Tuebingen Family Study database and the German Center for Diabetes research database and 300 cases (average age, 53 years ± 11; 152 women) from the German National Cohort (NAKO) database were collected for model training, validation, and testing, with transfer learning between the cohorts. These datasets included variable imaging sequences, imaging contrasts, receiver coil arrangements, scanners, and imaging field strengths. The proposed DCNet was compared to a similar 3D U-Net segmentation in terms of sensitivity, specificity, precision, accuracy, and Dice overlap. RESULTS Fast (range, 5-7 seconds) and reliable adipose tissue segmentation can be performed with high Dice overlap (0.94), sensitivity (96.6%), specificity (95.1%), precision (92.1%), and accuracy (98.4%) from 3D whole-body MRI datasets (field of view coverage, 450 × 450 × 2000 mm). Segmentation masks and adipose tissue profiles are automatically reported back to the referring physician.
CONCLUSION Automated adipose tissue segmentation is feasible in 3D whole-body MRI datasets and is generalizable to different epidemiologic cohort studies with the proposed DCNet. Supplemental material is available for this article. © RSNA, 2020.
|