1. Fontana C, Cappetti N. A novel procedure for medial axis reconstruction of vessels from Medical Imaging segmentation. Heliyon 2024;10:e31769. PMID: 38845885; PMCID: PMC11153195; DOI: 10.1016/j.heliyon.2024.e31769.
Abstract
A procedure for reconstructing the central axis of vessels from diagnostic image processing is presented here. It addresses the widespread stepped-shape effect that characterizes the most common algorithmic tools for computing the central axis in diagnostic imaging applications, through an algorithm that corrects the spatial coordinates of each axis point obtained from a common discrete image skeleton algorithm. The procedure is applied to the central axis traversing the vascular branch of the cerebral system, reconstructed from the processing of diagnostic images, using the local intensity values identified in adjacent voxels. The percentage intensity expressing the degree of adherence to a specific anatomical tissue acts as an attraction pole in identifying the spatial center on which to place each skeleton point crossing the investigated anatomical structure. The results are reported in terms of the number of vessels identified overall compared with the original reference model. The procedure demonstrates a high margin of accuracy in correcting the local coordinates of the central points, permitting precise dimensional measurement of the anatomy under examination. The reconstruction of a central axis effectively centered in the region under examination represents a fundamental starting point for deducing, with a high margin of accuracy, key geometric and dimensional information that favours the recognition of shape alterations ascribable to the presence of clinical pathologies.
Affiliation(s)
- C. Fontana: Department of Industrial Engineering, University of Salerno, Fisciano, SA, 84084, Italy
- N. Cappetti: Department of Industrial Engineering, University of Salerno, Fisciano, SA, 84084, Italy
2. Jin R, Cai Y, Zhang S, Yang T, Feng H, Jiang H, Zhang X, Hu Y, Liu J. Computational approaches for the reconstruction of optic nerve fibers along the visual pathway from medical images: a comprehensive review. Front Neurosci 2023;17:1191999. PMID: 37304011; PMCID: PMC10250625; DOI: 10.3389/fnins.2023.1191999.
Abstract
Optic nerve fibers in the visual pathway play significant roles in vision formation. Damage to optic nerve fibers is a biomarker for the diagnosis of various ophthalmological and neurological diseases; there is also a need to protect the optic nerve fibers from damage during neurosurgery and radiation therapy. Reconstruction of optic nerve fibers from medical images can facilitate all these clinical applications. Although many computational methods have been developed for the reconstruction of optic nerve fibers, a comprehensive review of these methods is still lacking. This paper describes the two strategies for optic nerve fiber reconstruction applied in existing studies, i.e., image segmentation and fiber tracking. Compared with image segmentation, fiber tracking can delineate more detailed structures of optic nerve fibers. For each strategy, both conventional and AI-based approaches are introduced, with the latter usually demonstrating better performance than the former. From the review, we conclude that AI-based methods are the trend for optic nerve fiber reconstruction and that new techniques such as generative AI can help address the current challenges in optic nerve fiber reconstruction.
Affiliation(s)
- Richu Jin: Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen, China; Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Yongning Cai: Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen, China
- Shiyang Zhang: Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Ting Yang: Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Haibo Feng: Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Hongyang Jiang: Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Xiaoqing Zhang: Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Yan Hu: Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
- Jiang Liu: Research Institute of Trustworthy Autonomous Systems, Southern University of Science and Technology, Shenzhen, China; Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China; Guangdong Provincial Key Laboratory of Brain-inspired Intelligent Computation, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
3. Xie L, Huang J, Yu J, Zeng Q, Hu Q, Chen Z, Xie G, Feng Y. CNTSeg: A multimodal deep-learning-based network for cranial nerves tract segmentation. Med Image Anal 2023;86:102766. PMID: 36812693; DOI: 10.1016/j.media.2023.102766.
Abstract
The segmentation of cranial nerve (CN) tracts based on diffusion magnetic resonance imaging (dMRI) provides a valuable quantitative tool for analyzing the morphology and course of individual CNs. Tractography-based approaches can describe and analyze the anatomical area of CNs by selecting reference streamlines in combination with region-of-interest (ROI)-based or clustering-based methods. However, due to the slender structure of CNs and the complex anatomical environment, single-modality dMRI data cannot provide a complete and accurate description, resulting in low accuracy or even failure of current algorithms in performing individualized CN segmentation. In this work, we propose CNTSeg, a novel multimodal deep-learning-based multi-class network for automated cranial nerve tract segmentation without using tractography, ROI placement, or clustering. Specifically, we introduce T1w images, fractional anisotropy (FA) images, and fiber orientation distribution function (fODF) peaks into the training data set, and design a back-end fusion module that uses the complementary information of interphase feature fusion to improve segmentation performance. CNTSeg achieves the segmentation of five pairs of CNs (i.e., the optic nerve CN II, oculomotor nerve CN III, trigeminal nerve CN V, and facial-vestibulocochlear nerve CN VII/VIII). Extensive comparisons and ablation experiments show promising results that are anatomically convincing even for difficult tracts. The code will be openly available at https://github.com/IPIS-XieLei/CNTSeg.
Affiliation(s)
- Lei Xie: Institute of Information Processing and Automation, College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
- Jiahao Huang: Institute of Information Processing and Automation, College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
- Jiangli Yu: Institute of Information Processing and Automation, College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
- Qingrun Zeng: Institute of Information Processing and Automation, College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
- Qiming Hu: Institute of Information Processing and Automation, College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
- Zan Chen: Institute of Information Processing and Automation, College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China; Zhejiang Provincial United Key Laboratory of Embedded Systems, Hangzhou 310023, China
- Guoqiang Xie: Nuclear Industry 215 Hospital of Shaanxi Province, Xianyang, 712000, China
- Yuanjing Feng: Institute of Information Processing and Automation, College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China; Zhejiang Provincial United Key Laboratory of Embedded Systems, Hangzhou 310023, China
4. Feng Y, Chow LS, Gowdh NM, Ramli N, Tan LK, Abdullah S, Tiang SS. Gradient-based edge detection with skeletonization (GES) segmentation for magnetic resonance optic nerve images. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104342.
5. Yang Y, Huang R, Lv G, Hu Z, Shan G, Zhang J, Bai X, Liu P, Li H, Chen M. Automatic segmentation of the clinical target volume and organs at risk for rectal cancer radiotherapy using structure-contextual representations based on 3D high-resolution network. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2021.103362.
6. Wang H, Xian M, Vakanski A. TA-Net: Topology-Aware Network for Gland Segmentation. IEEE Winter Conference on Applications of Computer Vision (WACV) 2022;2022:3241-3249. PMID: 35509894; PMCID: PMC9063467; DOI: 10.1109/wacv51458.2022.00330.
Abstract
Gland segmentation is a critical step to quantitatively assess the morphology of glands in histopathology image analysis. However, it is challenging to separate densely clustered glands accurately. Existing deep learning-based approaches attempted to use contour-based techniques to alleviate this issue but only achieved limited success. To address this challenge, we propose a novel topology-aware network (TA-Net) to accurately separate densely clustered and severely deformed glands. The proposed TA-Net has a multitask learning architecture and enhances the generalization of gland segmentation by learning shared representation from two tasks: instance segmentation and gland topology estimation. The proposed topology loss computes gland topology using gland skeletons and markers. It drives the network to generate segmentation results that comply with the true gland topology. We validate the proposed approach on the GlaS and CRAG datasets using three quantitative metrics, F1-score, object-level Dice coefficient, and object-level Hausdorff distance. Extensive experiments demonstrate that TA-Net achieves state-of-the-art performance on the two datasets. TA-Net outperforms other approaches in the presence of densely clustered glands.
7. Bosler NSI, Ashton D, Neely AJ, Lueck CJ. Variation in the Anatomy of the Normal Human Optic Chiasm: An MRI Study. J Neuroophthalmol 2021;41:194-199. PMID: 32141976; DOI: 10.1097/wno.0000000000000907.
Abstract
BACKGROUND Compression of the optic chiasm typically leads to bitemporal hemianopia. This implies that decussating nasal fibers are selectively affected, but the precise mechanism is unclear. Stress on nasal fibers has been investigated using finite element modeling but requires accurate anatomical data to generate a meaningful output. The precise shape of the chiasm is unclear: A recent photomicrographic study suggested that nasal fibers decussate paracentrally and run parallel to each other in the central arm of an "H." This study aimed to determine the population variation in chiasmal shape to inform future models. METHODS Sequential MRI scans of 68 healthy individuals were selected. 2D images of each chiasm were created and analyzed to determine the angle of elevation of the chiasm, the width of the chiasm, and the offset between the points of intersection of lines drawn down the centers of the optic nerves and contralateral optic tracts. RESULTS The mean width of the chiasm was 12.0 ± 1.5 mm (SD), and the mean offset was 4.7 ± 1.4 mm generating a mean offset:width ratio of 0.38 ± 0.09. No chiasm had an offset of zero. The mean incident angle of optic nerves was 56 ± 7°, and for optic tracts, it was 51 ± 7°. CONCLUSIONS The human optic chiasm is "H" shaped, not "X" shaped. The findings are consistent with nasal fibers decussating an average of 2.4 mm lateral to the midline before travelling in parallel across the midline. This information will inform future models of chiasmal compression.
Affiliation(s)
- Nicholas S I Bosler: Australian National University Medical School (NSIB, DA, CJL), Canberra, Australia; Departments of Neurology (NSIB, CJL) and Radiology (DA), The Canberra Hospital, Canberra, Australia; School of Engineering and Information Technology (AJN), University of New South Wales, Canberra, Australia
8. Chow LS, Paley MNJ. Recent advances on optic nerve magnetic resonance imaging and post-processing. Magn Reson Imaging 2021;79:76-84. PMID: 33753137; DOI: 10.1016/j.mri.2021.03.014.
Abstract
The optic nerve is one of the largest nerve bundles in the human central nervous system. Many studies of optic nerve imaging and post-processing have provided insights into the pathophysiology of optic neuritis related to multiple sclerosis and neuromyelitis optica spectrum disorder, glaucoma, and Leber's hereditary optic neuropathy. Optic nerve imaging poses many challenges, due to the morphology of the nerve along its course to the optic chiasm, its mobility with eye movements, and the high signal from cerebrospinal fluid and orbital fat surrounding the nerve. Recently, many advanced and fast imaging sequences have been combined with post-processing techniques in attempts to produce higher-resolution images of the optic nerve for evaluating various diseases. Magnetic resonance imaging (MRI) is one of the most common imaging methodologies for the optic nerve. This review focuses on recent MRI advances in optic nerve imaging and explains several post-processing techniques used for the analysis of optic nerve images. Finally, challenges and potential directions for future optic nerve studies are discussed.
Affiliation(s)
- Li Sze Chow: Department of Electrical and Electronic Engineering, Faculty of Engineering and Built Environment, UCSI University, 1, Jalan Puncak Menara Gading, Taman Connaught, 56000 Cheras, Kuala Lumpur, Malaysia
- Martyn N J Paley: Department of Infection, Immunity and Cardiovascular Disease, The Medical School, The University of Sheffield, Beech Hill Road, Sheffield S10 2RX, UK
9. Liu Y, Gu X. Evaluation and comparison of global-feature-based and local-feature-based segmentation algorithms in intracranial visual pathway delineation. Annu Int Conf IEEE Eng Med Biol Soc 2020;2020:1766-1769. PMID: 33018340; DOI: 10.1109/embc44109.2020.9175937.
Abstract
The intracranial visual pathway supports the effective transmission of visual signals to the brain. It is not only a target organ of disease but also an organ at risk in radiotherapy, so its delineation plays an important role in both diagnosis and treatment planning. Traditional manual segmentation is time- and labor-consuming and subject to intra- and inter-observer variability. To overcome these problems, state-of-the-art segmentation models have been designed and various features extracted and utilized, but it is hard to judge their effectiveness for intracranial visual pathway delineation, because these methods were evaluated on different datasets and with different training tricks. This study investigated the contributions of global and local features in delineating the intracranial visual pathway from MRI scans. Two typical segmentation models, 3D U-Net and DeepMedic, were chosen because they focus on global and local features, respectively. We constructed a hybrid model by serially connecting the two models to validate the performance of combining global and local features. Validation results showed that the hybrid model outperformed the individual ones, demonstrating that multi-scale feature fusion is important for improving segmentation performance.
10. Ai D, Zhao Z, Fan J, Song H, Qu X, Xian J, Yang J. Spatial probabilistic distribution map-based two-channel 3D U-net for visual pathway segmentation. Pattern Recognit Lett 2020. DOI: 10.1016/j.patrec.2020.09.003.
11. Tor-Diez C, Porras AR, Packer RJ, Avery RA, Linguraru MG. Unsupervised MRI Homogenization: Application to Pediatric Anterior Visual Pathway Segmentation. Lect Notes Comput Sci 2020;12436:180-188. PMID: 34327515; DOI: 10.1007/978-3-030-59861-7_19.
Abstract
Deep learning strategies have become ubiquitous optimization tools for medical image analysis. With an appropriate amount of data, these approaches outperform classic methodologies in a variety of image processing tasks. However, rare diseases and pediatric imaging often lack extensive data; in particular, MRI scans are uncommon because they require sedation in young children. Moreover, the lack of standardization in MRI protocols introduces strong variability between datasets. In this paper, we present a general deep learning architecture for MRI homogenization that also provides the segmentation map of an anatomical region of interest. Homogenization is achieved using an unsupervised architecture based on a variational autoencoder with cycle generative adversarial networks, which learns a common space (i.e., a representation of the optimal imaging protocol) using an unpaired image-to-image translation network. The segmentation is simultaneously generated by a supervised learning strategy. We evaluated our method by segmenting the challenging anterior visual pathway using three brain T1-weighted MRI datasets (variable protocols and vendors). Our method significantly outperformed a non-homogenized multi-protocol U-Net.
Affiliation(s)
- Carlos Tor-Diez: Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC 20010, USA
- Antonio R Porras: Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC 20010, USA
- Roger J Packer: Center for Neuroscience & Behavioral Health, Children's National Hospital, Washington, DC 20010, USA; Gilbert Neurofibromatosis Institute, Children's National Hospital, Washington, DC 20010, USA
- Robert A Avery: Division of Pediatric Ophthalmology, Children's Hospital of Philadelphia, Philadelphia, PA 19104, USA
- Marius George Linguraru: Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC 20010, USA; School of Medicine and Health Sciences, George Washington University, Washington, DC 20037, USA
12. Vrtovec T, Močnik D, Strojan P, Pernuš F, Ibragimov B. Auto-segmentation of organs at risk for head and neck radiotherapy planning: From atlas-based to deep learning methods. Med Phys 2020;47:e929-e950. PMID: 32510603; DOI: 10.1002/mp.14320.
Abstract
Radiotherapy (RT) is one of the basic treatment modalities for cancer of the head and neck (H&N), which requires a precise spatial description of the target volumes and organs at risk (OARs) to deliver a highly conformal radiation dose to the tumor cells while sparing the healthy tissues. For this purpose, target volumes and OARs have to be delineated and segmented from medical images. As manual delineation is a tedious and time-consuming task subject to intra/interobserver variability, computerized auto-segmentation has been developed as an alternative. The field of medical imaging and RT planning has experienced increased interest in the past decade, with new emerging trends that shifted the field of H&N OAR auto-segmentation from atlas-based to deep learning-based approaches. In this review, we systematically analyzed 78 relevant publications on auto-segmentation of OARs in the H&N region from 2008 to date, and provide critical discussions and recommendations from various perspectives: image modality (both computed tomography and magnetic resonance imaging are being exploited, but the potential of the latter should be explored more in the future); OARs (the spinal cord, brainstem, and major salivary glands are the most studied, but additional experiments should be conducted for several less studied soft tissue structures); image databases (several image databases with corresponding ground truth are currently available for methodology evaluation, but should be augmented with data from multiple observers and multiple institutions); methodology (current methods have shifted from atlas-based to deep learning auto-segmentation, which is expected to become even more sophisticated); ground truth (delineation guidelines should be followed, and participation of multiple experts from multiple institutions is recommended); performance metrics (the Dice coefficient, the standard volumetric overlap metric, should be accompanied by at least one distance metric and combined with clinical acceptability scores and risk assessments); and segmentation performance (the best performing methods achieve clinically acceptable auto-segmentation for several OARs, but the dosimetric impact should also be studied to provide clinically relevant endpoints for RT planning).
Affiliation(s)
- Tomaž Vrtovec: Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Domen Močnik: Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Primož Strojan: Institute of Oncology Ljubljana, Zaloška cesta 2, Ljubljana, SI-1000, Slovenia
- Franjo Pernuš: Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia
- Bulat Ibragimov: Faculty of Electrical Engineering, University of Ljubljana, Tržaška cesta 25, Ljubljana, SI-1000, Slovenia; Department of Computer Science, University of Copenhagen, Universitetsparken 1, Copenhagen, D-2100, Denmark
13. Zhong T, Huang X, Tang F, Liang S, Deng X, Zhang Y. Boosting-based Cascaded Convolutional Neural Networks for the Segmentation of CT Organs-at-risk in Nasopharyngeal Carcinoma. Med Phys 2019;46:5602-5611. PMID: 31529501; DOI: 10.1002/mp.13825.
Abstract
PURPOSE Accurately segmenting organs-at-risk (OARs) is a key step in the effective planning of radiation therapy for nasopharyngeal carcinoma (NPC). In OAR segmentation of head and neck CT, the low contrast and adhering surrounding tissues of the parotids, thyroids, and optic nerves make these organs more difficult to segment automatically, and with lower accuracy, than other organs. In this paper, we propose a cascaded network structure to delineate these three OARs for NPC radiotherapy by combining deep learning with a boosting algorithm. MATERIALS AND METHODS CT images of 140 NPC patients treated with radiotherapy were collected, and each of the three OARs was delineated by an experienced rater and reviewed by a professional radiologist (with 10 years of experience). The dataset (140 patients) was divided into a training set (100 patients), a validation set (20 patients), and a test set (20 patients). Following the boosting method for combining multiple classifiers, three CNNs were cascaded for segmentation. The first network was trained with the traditional approach. The second was trained on patterns (pixels) filtered by the first net; that is, the second machine saw a mix of patterns (pixels), 50% of which had been accurately identified by the first net. Finally, the third net was trained on new patterns (pixels) screened jointly by the first and second networks. During testing, the outputs of the three nets were combined to obtain the final output. The Dice similarity coefficient (DSC), the 95th percentile of the Hausdorff distance (95% HD), and the volume overlap error (VOE) were used to assess performance. RESULTS The mean DSC (%) values were above 92.26 for the parotids, above 92.29 for the thyroids, and above 89.37 for the optic nerves. The mean 95% HDs (mm) were approximately 3.08 for the parotids, 2.64 for the thyroids, and 2.03 for the optic nerves. The mean VOE (%) values were approximately 14.16 for the parotids, 14.94 for the thyroids, and 19.07 for the optic nerves. CONCLUSION The proposed cascaded deep learning structure achieved high performance compared with existing single-network and other segmentation algorithms.
Affiliation(s)
- Tao Zhong: School of Biomedical Engineering, Southern Medical University, 1838 North Guangzhou Avenue, Guangzhou, 510515, Guangdong, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1838 North Guangzhou Avenue, Guangzhou, 510515, Guangdong, China
- Xia Huang: School of Biomedical Engineering, Southern Medical University, 1838 North Guangzhou Avenue, Guangzhou, 510515, Guangdong, China; Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, 1838 North Guangzhou Avenue, Guangzhou, 510515, Guangdong, China
- Fan Tang: School of Biomedical Engineering, Southern Medical University, 1838 North Guangzhou Avenue, Guangzhou, 510515, Guangdong, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1838 North Guangzhou Avenue, Guangzhou, 510515, Guangdong, China; Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, 1838 North Guangzhou Avenue, Guangzhou, 510515, Guangdong, China
- Shujun Liang: School of Biomedical Engineering, Southern Medical University, 1838 North Guangzhou Avenue, Guangzhou, 510515, Guangdong, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1838 North Guangzhou Avenue, Guangzhou, 510515, Guangdong, China
- Xiaogang Deng: Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, 1838 North Guangzhou Avenue, Guangzhou, 510515, Guangdong, China
- Yu Zhang: School of Biomedical Engineering, Southern Medical University, 1838 North Guangzhou Avenue, Guangzhou, 510515, Guangdong, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1838 North Guangzhou Avenue, Guangzhou, 510515, Guangdong, China
14. Agn M, Munck Af Rosenschöld P, Puonti O, Lundemann MJ, Mancini L, Papadaki A, Thust S, Ashburner J, Law I, Van Leemput K. A modality-adaptive method for segmenting brain tumors and organs-at-risk in radiation therapy planning. Med Image Anal 2019;54:220-237. PMID: 30952038; PMCID: PMC6554451; DOI: 10.1016/j.media.2019.03.005.
Abstract
In this paper we present a method for simultaneously segmenting brain tumors and an extensive set of organs-at-risk for radiation therapy planning of glioblastomas. The method combines a contrast-adaptive generative model for whole-brain segmentation with a new spatial regularization model of tumor shape using convolutional restricted Boltzmann machines. We demonstrate experimentally that the method is able to adapt to image acquisitions that differ substantially from any available training data, ensuring its applicability across treatment sites; that its tumor segmentation accuracy is comparable to that of the current state of the art; and that it captures most organs-at-risk sufficiently well for radiation therapy planning purposes. The proposed method may be a valuable step towards automating the delineation of brain tumors and organs-at-risk in glioblastoma patients undergoing radiation therapy.
Affiliation(s)
- Mikael Agn: Department of Applied Mathematics and Computer Science, Technical University of Denmark, Denmark
- Per Munck Af Rosenschöld: Radiation Physics, Department of Hematology, Oncology and Radiation Physics, Skåne University Hospital, Lund, Sweden
- Oula Puonti: Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital Hvidovre, Denmark
- Michael J Lundemann: Department of Oncology, Copenhagen University Hospital Rigshospitalet, Denmark
- Laura Mancini: Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Institute of Neurology, University College London, UK; Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, UCLH NHS Foundation Trust, UK
- Anastasia Papadaki: Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Institute of Neurology, University College London, UK; Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, UCLH NHS Foundation Trust, UK
- Steffi Thust: Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Institute of Neurology, University College London, UK; Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, UCLH NHS Foundation Trust, UK
- John Ashburner: Wellcome Centre for Human Neuroimaging, UCL Institute of Neurology, University College London, UK
- Ian Law: Department of Clinical Physiology, Nuclear Medicine and PET, Copenhagen University Hospital Rigshospitalet, Denmark
- Koen Van Leemput: Department of Applied Mathematics and Computer Science, Technical University of Denmark, Denmark; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA
15
Toward an automatic preoperative pipeline for image-guided temporal bone surgery. Int J Comput Assist Radiol Surg 2019; 14:967-976. [DOI: 10.1007/s11548-019-01937-x] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2019] [Accepted: 03/05/2019] [Indexed: 11/26/2022]
16
Sultana S, Blatt JE, Gilles B, Rashid T, Audette MA. MRI-Based Medial Axis Extraction and Boundary Segmentation of Cranial Nerves Through Discrete Deformable 3D Contour and Surface Models. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:1711-1721. [PMID: 28422682 DOI: 10.1109/tmi.2017.2693182] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
This paper presents a segmentation technique to identify the medial axis and the boundary of cranial nerves. We utilize a 3-D deformable one-simplex discrete contour model to extract the medial axis of each cranial nerve. This contour model represents a collection of two-connected vertices linked by edges, where vertex position is determined by a Newtonian expression for vertex kinematics featuring internal and external forces, the latter of which include attractive forces toward the nerve medial axis. We exploit multiscale vesselness filtering and minimal path techniques in the medial axis extraction method, which also computes a radius estimate along the path. Once we have the medial axis and the radius function of a nerve, we identify the nerve surface using a two-simplex deformable model, which expands radially and can accommodate any nerve shape. As a result, the method proposed here combines the benefits of explicit contour and surface models, while also achieving a cornerstone for future work that will emphasize shape statistics, static collision with other critical structures, and tree-shape analysis.
17
Aghdasi N, Li Y, Berens A, Harbison RA, Moe KS, Hannaford B. Efficient orbital structures segmentation with prior anatomical knowledge. J Med Imaging (Bellingham) 2017; 4:034501. [PMID: 28744478 DOI: 10.1117/1.jmi.4.3.034501] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2017] [Accepted: 06/22/2017] [Indexed: 11/14/2022] Open
Abstract
We present a fully automatic method for segmenting orbital structures (globes, optic nerves, and extraocular muscles) in CT images. Prior anatomical knowledge, such as shape, intensity, and spatial relationships of organs and landmarks, was utilized to define a volume of interest (VOI) that contains the desired structures. The VOI was then used for fast localization and successful segmentation of each structure using predefined rules. Testing our method on 30 publicly available datasets, we achieved average Dice similarity coefficients (right, left) of [0.81, 0.79] for eye globes, [0.72, 0.79] for optic nerves, and [0.73, 0.76] for extraocular muscles. The proposed method is accurate, efficient, does not require training data, and its intuitive pipeline allows the user to modify or extend it to other structures.
Affiliation(s)
- Nava Aghdasi
- University of Washington, Department of Electrical Engineering, Seattle, Washington, United States
- Yangming Li
- University of Washington, Department of Electrical Engineering, Seattle, Washington, United States
- Angelique Berens
- University of Washington, Department of Otolaryngology, Head and Neck Surgery, Seattle, Washington, United States
- Richard A Harbison
- University of Washington, Department of Otolaryngology, Head and Neck Surgery, Seattle, Washington, United States
- Kris S Moe
- University of Washington, Department of Otolaryngology, Head and Neck Surgery, Seattle, Washington, United States
- Blake Hannaford
- University of Washington, Department of Electrical Engineering, Seattle, Washington, United States
18
Pirlich M, Tittmann M, Franz D, Dietz A, Hofer M. An observational, prospective study to evaluate the preoperative planning tool “CI-Wizard” for cochlear implant surgery. Eur Arch Otorhinolaryngol 2016; 274:685-694. [DOI: 10.1007/s00405-016-4286-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2016] [Accepted: 08/24/2016] [Indexed: 10/21/2022]
19
Mansoor A, Cerrolaza JJ, Idrees R, Biggs E, Alsharid MA, Avery RA, Linguraru MG. Deep Learning Guided Partitioned Shape Model for Anterior Visual Pathway Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1856-65. [PMID: 26930677 DOI: 10.1109/tmi.2016.2535222] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
Analysis of cranial nerve systems, such as the anterior visual pathway (AVP), from MRI sequences is challenging due to their thin, elongated architecture, structural variations along the path, and low contrast with adjacent anatomic structures. Segmentation of a pathologic AVP (e.g., with low-grade gliomas) poses additional challenges. In this work, we propose a fully automated partitioned shape model segmentation mechanism for the AVP steered by multiple MRI sequences and deep learning features. Employing deep learning feature representation, this framework presents a joint partitioned statistical shape model able to deal with healthy and pathological AVP. The deep learning assistance is particularly useful in poor-contrast regions, such as optic tracts and pathological areas. Our main contributions are: 1) a fast and robust shape localization method using conditional space deep learning, 2) a volumetric multiscale curvelet transform-based intensity normalization method for a robust statistical model, and 3) optimally partitioned statistical shape and appearance models based on regional shape variations for greater local flexibility. Our method was evaluated on MRI sequences obtained from 165 pediatric subjects. A mean Dice similarity coefficient of 0.779 was obtained for the segmentation of the entire AVP (optic nerve only = 0.791) using leave-one-out validation. Results demonstrated that the proposed localized shape and sparse appearance-based learning approach significantly outperforms current state-of-the-art segmentation approaches and is as robust as manual segmentation.
20
Harrigan RL, Plassard AJ, Bryan FW, Caires G, Mawn LA, Dethrage LM, Pawate S, Galloway RL, Smith SA, Landman BA. Disambiguating the optic nerve from the surrounding cerebrospinal fluid: Application to MS-related atrophy. Magn Reson Med 2015; 75:414-22. [PMID: 25754412 DOI: 10.1002/mrm.25613] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2014] [Revised: 12/19/2014] [Accepted: 12/19/2014] [Indexed: 12/14/2022]
Abstract
PURPOSE Our goal is to develop an accurate, automated tool to characterize the optic nerve (ON) and cerebrospinal fluid (CSF) to better understand ON changes in disease. METHODS Multi-atlas segmentation is used to localize the ON and sheath on T2-weighted MRI (0.6 mm³ resolution). A sum of Gaussian distributions is fit to coronal slice-wise intensities to extract six descriptive parameters, and a regression forest is used to map the model space to radii. The model is validated for consistency using tenfold cross-validation and for accuracy using a high-resolution (0.4 mm² reconstructed to 0.15 mm²) in vivo sequence. We evaluated this model on 6 controls and 6 patients with multiple sclerosis (MS) and a history of optic neuritis. RESULTS In simulation, the model was found to have an explanatory R-squared for both ON and sheath radii greater than 0.95. The accuracy of the method was within the measurement error on the highest possible in vivo resolution. Comparing healthy controls and patients with MS, significant structural differences were found near the ON head and the chiasm, and structural trends agreed with the literature. CONCLUSION This is a first demonstration that the ON can be exclusively, quantitatively measured and separated from the surrounding CSF using MRI.
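The slice-wise intensity model described in this abstract (a sum of Gaussian distributions yielding six descriptive parameters) can be sketched as a simple curve fit. This is an illustrative reconstruction, not the authors' code; the profile and parameter values below are invented toy data:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussians: six descriptive parameters
    (amplitude, mean, and width for each component)."""
    g1 = a1 * np.exp(-((x - mu1) ** 2) / (2.0 * s1 ** 2))
    g2 = a2 * np.exp(-((x - mu2) ** 2) / (2.0 * s2 ** 2))
    return g1 + g2

# Synthetic coronal-slice intensity profile (hypothetical values).
x = np.linspace(-5.0, 5.0, 200)
true_params = (1.0, -1.0, 0.5, 0.6, 1.5, 0.8)
y = two_gaussians(x, *true_params)

# Recover the six parameters from the profile.
fitted, _ = curve_fit(two_gaussians, x, y, p0=(0.8, -0.8, 0.6, 0.5, 1.2, 1.0))
```

In the paper these fitted parameters are then mapped to nerve and sheath radii by a regression forest; here the fit alone is shown.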
Affiliation(s)
- Robert L Harrigan
- Department of Electrical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Andrew J Plassard
- Department of Computer Science, Vanderbilt University, Nashville, Tennessee, USA
- Frederick W Bryan
- Department of Electrical Engineering, Vanderbilt University, Nashville, Tennessee, USA; Institute of Imaging Science, Vanderbilt University, Nashville, Tennessee, USA
- Gabriela Caires
- Biomedical Engineering, Federal University of Rio Grande do Norte, Natal, RN, Brazil
- Louise A Mawn
- Vanderbilt Eye Institute, Vanderbilt University, Nashville, Tennessee, USA
- Lindsey M Dethrage
- Institute of Imaging Science, Vanderbilt University, Nashville, Tennessee, USA
- Siddharama Pawate
- Department of Neurology, Vanderbilt University, Nashville, Tennessee, USA
- Robert L Galloway
- Department of Biomedical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Seth A Smith
- Institute of Imaging Science, Vanderbilt University, Nashville, Tennessee, USA; Department of Biomedical Engineering, Vanderbilt University, Nashville, Tennessee, USA; Department of Radiology, Vanderbilt University, Nashville, Tennessee, USA
- Bennett A Landman
- Department of Electrical Engineering, Vanderbilt University, Nashville, Tennessee, USA; Department of Computer Science, Vanderbilt University, Nashville, Tennessee, USA; Institute of Imaging Science, Vanderbilt University, Nashville, Tennessee, USA; Department of Biomedical Engineering, Vanderbilt University, Nashville, Tennessee, USA; Department of Radiology, Vanderbilt University, Nashville, Tennessee, USA
21
Harrigan RL, Panda S, Asman AJ, Nelson KM, Chaganti S, DeLisi MP, Yvernault BCW, Smith SA, Galloway RL, Mawn LA, Landman BA. Robust optic nerve segmentation on clinically acquired computed tomography. J Med Imaging (Bellingham) 2014; 1:034006. [PMID: 26158064 PMCID: PMC4478967 DOI: 10.1117/1.jmi.1.3.034006] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2014] [Accepted: 11/17/2014] [Indexed: 11/14/2022] Open
Abstract
The optic nerve (ON) plays a critical role in many devastating pathological conditions. Segmentation of the ON has the ability to provide understanding of anatomical development and progression of diseases of the ON. Recently, methods have been proposed to segment the ON but progress toward full automation has been limited. We optimize registration and fusion methods for a new multi-atlas framework for automated segmentation of the ONs, eye globes, and muscles on clinically acquired computed tomography (CT) data. Briefly, the multi-atlas approach consists of determining a region of interest within each scan using affine registration, followed by nonrigid registration on reduced field of view atlases, and performing statistical fusion on the results. We evaluate the robustness of the approach by segmenting the ON structure in 501 clinically acquired CT scan volumes obtained from 183 subjects from a thyroid eye disease patient population. A subset of 30 scan volumes was manually labeled to assess accuracy and guide method choice. Of the 18 compared methods, the ANTS Symmetric Normalization registration and nonlocal spatial simultaneous truth and performance level estimation statistical fusion resulted in the best overall performance, resulting in a median Dice similarity coefficient of 0.77, which is comparable with inter-rater (human) reproducibility at 0.73.
Affiliation(s)
- Robert L. Harrigan
- Vanderbilt University, Department of Electrical Engineering, Nashville, Tennessee 37235, United States
- Swetasudha Panda
- Vanderbilt University, Department of Electrical Engineering, Nashville, Tennessee 37235, United States
- Andrew J. Asman
- Vanderbilt University, Department of Electrical Engineering, Nashville, Tennessee 37235, United States
- Katrina M. Nelson
- Vanderbilt University, Department of Electrical Engineering, Nashville, Tennessee 37235, United States
- Shikha Chaganti
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee 37235, United States
- Michael P. DeLisi
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee 37235, United States
- Benjamin C. W. Yvernault
- Vanderbilt University, Department of Electrical Engineering, Nashville, Tennessee 37235, United States
- Seth A. Smith
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee 37235, United States; Vanderbilt University, Department of Radiology and Radiological Sciences, Nashville, Tennessee 37235, United States
- Robert L. Galloway
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee 37235, United States
- Louise A. Mawn
- Vanderbilt University, Department of Ophthalmology and Neurological Surgery, Nashville, Tennessee 37235, United States
- Bennett A. Landman
- Vanderbilt University, Department of Electrical Engineering, Nashville, Tennessee 37235, United States; Vanderbilt University, Department of Computer Science, Nashville, Tennessee 37235, United States; Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee 37235, United States; Vanderbilt University, Department of Radiology and Radiological Sciences, Nashville, Tennessee 37235, United States
22
DeLisi MP, Mawn LA, Galloway RL. Image-guided transorbital procedures with endoscopic video augmentation. Med Phys 2014; 41:091901. [PMID: 25186388 PMCID: PMC4137863 DOI: 10.1118/1.4892181] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2014] [Revised: 06/17/2014] [Accepted: 07/20/2014] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Surgical interventions to the orbital space behind the eyeball are limited to highly invasive procedures due to the confined nature of the region along with the presence of several intricate soft tissue structures. A minimally invasive approach to orbital surgery would enable several therapeutic options, particularly new treatment protocols for optic neuropathies such as glaucoma. The authors have developed an image-guided system for the purpose of navigating a thin flexible endoscope to a specified target region behind the eyeball. Navigation within the orbit is particularly challenging despite its small volume, as the presence of fat tissue occludes the endoscopic visual field while the surgeon must constantly be aware of optic nerve position. This research investigates the impact of endoscopic video augmentation on targeted image-guided navigation in a series of anthropomorphic phantom experiments. METHODS A group of 16 surgeons performed a target identification task within the orbits of four skull phantoms. The task consisted of identifying the correct target, indicated by the augmented video and the preoperative imaging frames, out of four possibilities. For each skull, one orbital intervention was performed with video augmentation, while the other was done with the standard image guidance technique, in random order. RESULTS The authors measured target identification accuracies of 95.3% and 85.9% for the augmented and standard cases, respectively, with statistically significant improvement in procedure time (Z = -2.044, p = 0.041) and intraoperator mean procedure time (Z = 2.456, p = 0.014) when augmentation was used. CONCLUSIONS Improvements in both target identification accuracy and interventional procedure time suggest that endoscopic video augmentation provides valuable additional orientation and trajectory information in an image-guided procedure. Utilization of video augmentation in transorbital interventions could further minimize complication risk and enhance surgeon comfort and confidence in the procedure.
Affiliation(s)
- Michael P DeLisi
- Department of Biomedical Engineering, Vanderbilt University, Nashville, Tennessee 37235
- Louise A Mawn
- Department of Neurological Surgery, Vanderbilt University, Nashville, Tennessee 37235 and Department of Ophthalmology and Visual Sciences, Vanderbilt University, Nashville, Tennessee 37235
- Robert L Galloway
- Department of Biomedical Engineering, Vanderbilt University, Nashville, Tennessee 37235 and Department of Neurological Surgery, Vanderbilt University, Nashville, Tennessee 37235
23
Panda S, Asman AJ, Khare SP, Thompson L, Mawn LA, Smith SA, Landman BA. Evaluation of Multi-Atlas Label Fusion for In Vivo MRI Orbital Segmentation. J Med Imaging (Bellingham) 2014; 1:024002. [PMID: 25558466 PMCID: PMC4280790 DOI: 10.1117/1.jmi.1.2.024002] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2013] [Revised: 05/15/2014] [Accepted: 06/24/2014] [Indexed: 11/14/2022] Open
Abstract
Multi-atlas methods have been successful for brain segmentation, but their application to smaller anatomies remains relatively unexplored. We evaluate 7 statistical and voting-based label fusion algorithms (and 6 additional variants) to segment the optic nerves, eye globes, and chiasm. For non-local STAPLE, we evaluate different intensity similarity measures (including mean square difference, locally normalized cross correlation, and a hybrid approach). Each algorithm is evaluated in terms of the Dice overlap and symmetric surface distance metrics. Finally, we evaluate refinement of label fusion results using a learning-based correction method for consistent bias correction and Markov random field regularization. The multi-atlas labeling pipelines were evaluated on a cohort of 35 subjects including both healthy controls and patients. Across all three structures, NLSS with a mixed weighting type provided the most consistent results; for the optic nerves, NLSS resulted in a median Dice similarity coefficient of 0.81, mean surface distance of 0.41 mm, and Hausdorff distance of 2.18 mm. Joint label fusion resulted in slightly superior median performance for the optic nerves (0.82, 0.39 mm and 2.15 mm), but slightly worse on the globes. The fully automated multi-atlas labeling approach provides robust segmentations of orbital structures on MRI even in patients for whom significant atrophy (optic nerve head drusen) or inflammation (multiple sclerosis) is present.
Affiliation(s)
- Swetasudha Panda
- Vanderbilt University, Department of Electrical Engineering, Nashville, Tennessee 37235, United States
- Andrew J. Asman
- Vanderbilt University, Department of Electrical Engineering, Nashville, Tennessee 37235, United States
- Shweta P. Khare
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee 37235, United States
- Lindsey Thompson
- Vanderbilt University, Institute of Imaging Science, Nashville, Tennessee 37235, United States
- Louise A. Mawn
- Vanderbilt University, Department of Ophthalmology and Neurological Surgery, Nashville, Tennessee 37232, United States
- Seth A. Smith
- Vanderbilt University, Institute of Imaging Science, Nashville, Tennessee 37235, United States; Vanderbilt University, Department of Radiology and Radiological Sciences, Nashville, Tennessee 37235, United States
- Bennett A. Landman
- Vanderbilt University, Department of Electrical Engineering, Nashville, Tennessee 37235, United States; Vanderbilt University, Department of Computer Science, Nashville, Tennessee 37235, United States; Vanderbilt University, Institute of Imaging Science, Nashville, Tennessee 37235, United States; Vanderbilt University, Department of Radiology and Radiological Sciences, Nashville, Tennessee 37235, United States
24
Roy S, Carass A, Jog A, Prince JL, Lee J. MR to CT Registration of Brains using Image Synthesis. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2014; 9034. [PMID: 25057341 PMCID: PMC4104818 DOI: 10.1117/12.2043954] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/02/2023]
Abstract
Computed tomography (CT) is the standard imaging modality for patient dose calculation for radiation therapy. Magnetic resonance (MR) imaging (MRI) is used along with CT to identify brain structures due to its superior soft tissue contrast. Registration of MR and CT is necessary for accurate delineation of the tumor and other structures, and is critical in radiotherapy planning. Mutual information (MI) or its variants are typically used as a similarity metric to register MRI to CT. However, unlike CT, MRI intensity does not have an accepted calibrated intensity scale. Therefore, MI-based MR-CT registration may vary from scan to scan as MI depends on the joint histogram of the images. In this paper, we propose a fully automatic framework for MR-CT registration by synthesizing a synthetic CT image from MRI using a co-registered pair of MR and CT images as an atlas. Patches of the subject MRI are matched to the atlas and the synthetic CT patches are estimated in a probabilistic framework. The synthetic CT is registered to the original CT using a deformable registration and the computed deformation is applied to the MRI. In contrast to most existing methods, we do not need any manual intervention such as picking landmarks or regions of interests. The proposed method was validated on ten brain cancer patient cases, showing 25% improvement in MI and correlation between MR and CT images after registration compared to state-of-the-art registration methods.
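The patch-matching step this abstract describes can be illustrated with a nearest-neighbor simplification (the paper uses a probabilistic framework; the function name and toy intensities below are hypothetical, not the authors' implementation):

```python
import numpy as np

def synthesize_ct(subject_mr, atlas_mr, atlas_ct):
    """Nearest-neighbor patch matching: for each subject MR patch, find the
    most similar atlas MR patch (squared L2 distance) and take the
    co-registered atlas CT value at that patch as the synthetic CT estimate."""
    # subject_mr: (n, d) patches; atlas_mr: (m, d) patches; atlas_ct: (m,)
    d2 = ((subject_mr[:, None, :] - atlas_mr[None, :, :]) ** 2).sum(axis=-1)
    return atlas_ct[np.argmin(d2, axis=1)]

# Toy 1-D patches with invented intensities and CT values (Hounsfield-like).
atlas_mr = np.array([[0.0, 0.1], [0.9, 1.0], [0.4, 0.5]])
atlas_ct = np.array([-1000.0, 60.0, 30.0])
subject = np.array([[0.85, 0.95], [0.05, 0.1]])
print(synthesize_ct(subject, atlas_mr, atlas_ct))
```

The resulting synthetic CT, once assembled into a volume, can then be deformably registered to the real CT with standard mono-modal metrics, which is the point of the method.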
Affiliation(s)
- Snehashis Roy
- Center for Neuroscience and Regenerative Medicine, Henry M. Jackson Foundation
- Aaron Carass
- Image Analysis and Communications Laboratory, The Johns Hopkins University
- Amod Jog
- Image Analysis and Communications Laboratory, The Johns Hopkins University
- Jerry L. Prince
- Image Analysis and Communications Laboratory, The Johns Hopkins University
- Junghoon Lee
- Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins School of Medicine
25
Deeley MA, Chen A, Datteri RD, Noble J, Cmelak A, Donnelly E, Malcolm A, Moretti L, Jaboin J, Niermann K, Yang ES, Yu DS, Dawant BM. Segmentation editing improves efficiency while reducing inter-expert variation and maintaining accuracy for normal brain tissues in the presence of space-occupying lesions. Phys Med Biol 2013; 58:4071-97. [PMID: 23685866 DOI: 10.1088/0031-9155/58/12/4071] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Image segmentation has become a vital and often rate-limiting step in modern radiotherapy treatment planning. In recent years, the pace and scope of algorithm development, and even introduction into the clinic, have far exceeded evaluative studies. In this work we build upon our previous evaluation of a registration driven segmentation algorithm in the context of 8 expert raters and 20 patients who underwent radiotherapy for large space-occupying tumours in the brain. In this work we tested four hypotheses concerning the impact of manual segmentation editing in a randomized single-blinded study. We tested these hypotheses on the normal structures of the brainstem, optic chiasm, eyes and optic nerves using the Dice similarity coefficient, volume, and signed Euclidean distance error to evaluate the impact of editing on inter-rater variance and accuracy. Accuracy analyses relied on two simulated ground truth estimation methods: simultaneous truth and performance level estimation and a novel implementation of probability maps. The experts were presented with automatic, their own, and their peers' segmentations from our previous study to edit. We found, independent of source, editing reduced inter-rater variance while maintaining or improving accuracy and improving efficiency with at least 60% reduction in contouring time. In areas where raters performed poorly contouring from scratch, editing of the automatic segmentations reduced the prevalence of total anatomical miss from approximately 16% to 8% of the total slices contained within the ground truth estimations. These findings suggest that contour editing could be useful for consensus building such as in developing delineation standards, and that both automated methods and even perhaps less sophisticated atlases could improve efficiency, inter-rater variance, and accuracy.
Affiliation(s)
- M A Deeley
- Department of Radiology and Radiation Oncology, University of Vermont, Burlington, VT, USA.
26
Asman AJ, Delisi MP, Mawn LA, Galloway RL, Landman BA. Robust Non-Local Multi-Atlas Segmentation of the Optic Nerve. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2013; 8669:86691L. [PMID: 24478826 DOI: 10.1117/12.2007015] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Labeling or segmentation of structures of interest on medical images plays an essential role in both clinical and scientific understanding of the biological etiology, progression, and recurrence of pathological disorders. Here, we focus on the optic nerve, a structure that plays a critical role in many devastating pathological conditions - including glaucoma, ischemic neuropathy, optic neuritis and multiple-sclerosis. Ideally, existing fully automated procedures would result in accurate and robust segmentation of the optic nerve anatomy. However, current segmentation procedures often require manual intervention due to anatomical and imaging variability. Herein, we propose a framework for robust and fully-automated segmentation of the optic nerve anatomy. First, we provide a robust registration procedure that results in consistent registrations, despite highly varying data in terms of voxel resolution and image field-of-view. Additionally, we demonstrate the efficacy of a recently proposed non-local label fusion algorithm that accounts for small scale errors in registration correspondence. On a dataset consisting of 31 highly varying computed tomography (CT) images of the human brain, we demonstrate that the proposed framework consistently results in accurate segmentations. In particular, we show (1) that the proposed registration procedure results in robust registrations of the optic nerve anatomy, and (2) that the non-local statistical fusion algorithm significantly outperforms several of the state-of-the-art label fusion algorithms.
Affiliation(s)
- Andrew J Asman
- Electrical Engineering, Vanderbilt University, Nashville, TN, USA 37235
- Michael P Delisi
- Biomedical Engineering, Vanderbilt University, Nashville, TN, USA 37235
- Louise A Mawn
- Ophthalmology and Neurological Surgery, Vanderbilt University, Nashville, TN, USA 37235
- Robert L Galloway
- Biomedical Engineering, Vanderbilt University, Nashville, TN, USA 37235
- Bennett A Landman
- Electrical Engineering, Vanderbilt University, Nashville, TN, USA 37235; Biomedical Engineering, Vanderbilt University, Nashville, TN, USA 37235
27
Asman AJ, Landman BA. Non-local statistical label fusion for multi-atlas segmentation. Med Image Anal 2012; 17:194-208. [PMID: 23265798 DOI: 10.1016/j.media.2012.10.002] [Citation(s) in RCA: 172] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2012] [Revised: 10/19/2012] [Accepted: 10/29/2012] [Indexed: 11/19/2022]
Abstract
Multi-atlas segmentation provides a general purpose, fully-automated approach for transferring spatial information from an existing dataset ("atlases") to a previously unseen context ("target") through image registration. The method to resolve voxelwise label conflicts between the registered atlases ("label fusion") has a substantial impact on segmentation quality. Ideally, statistical fusion algorithms (e.g., STAPLE) would result in accurate segmentations as they provide a framework to elegantly integrate models of rater performance. The accuracy of statistical fusion hinges upon accurately modeling the underlying process of how raters err. Despite success on human raters, current approaches inaccurately model multi-atlas behavior as they fail to seamlessly incorporate exogenous intensity information into the estimation process. As a result, locally weighted voting algorithms represent the de facto standard fusion approach in clinical applications. Moreover, regardless of the approach, fusion algorithms are generally dependent upon large atlas sets and highly accurate registration as they implicitly assume that the registered atlases form a collectively unbiased representation of the target. Herein, we propose a novel statistical fusion algorithm, Non-Local STAPLE (NLS). NLS reformulates the STAPLE framework from a non-local means perspective in order to learn what label an atlas would have observed, given perfect correspondence. Through this reformulation, NLS (1) seamlessly integrates intensity into the estimation process, (2) provides a theoretically consistent model of multi-atlas observation error, and (3) largely diminishes the need for large atlas sets and very high-quality registrations. We assess the sensitivity and optimality of the approach and demonstrate significant improvement in two empirical multi-atlas experiments.
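The locally weighted voting baseline that this abstract cites as the de facto clinical fusion approach can be sketched in a few lines of NumPy. This is an illustrative simplification with invented toy data, not the Non-Local STAPLE method the paper proposes:

```python
import numpy as np

def locally_weighted_vote(atlas_intensities, atlas_labels, target, beta=1.0):
    """Fuse registered atlas labels by weighting each atlas's vote at every
    voxel with a Gaussian similarity between atlas and target intensity."""
    # atlas_intensities, atlas_labels: (n_atlases, n_voxels); target: (n_voxels,)
    weights = np.exp(-beta * (atlas_intensities - target) ** 2)
    labels = np.unique(atlas_labels)
    # Accumulate intensity-weighted votes per candidate label, then pick the max.
    votes = np.stack([(weights * (atlas_labels == l)).sum(axis=0) for l in labels])
    return labels[np.argmax(votes, axis=0)]

# Toy example: two registered atlases, three voxels.
ai = np.array([[0.1, 0.9, 0.5], [0.2, 0.8, 0.9]])
al = np.array([[0, 1, 0], [0, 1, 1]])
tgt = np.array([0.15, 0.85, 0.88])
print(locally_weighted_vote(ai, al, tgt))
```

Statistical fusion methods such as STAPLE and the proposed NLS replace these fixed weights with estimated rater-performance models; the sketch only shows why intensity information matters at voxels where the atlases disagree.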
Affiliation(s)
- Andrew J Asman
- Electrical Engineering, Vanderbilt University, Nashville, TN 37235-1679, USA.
28
McRackan TR, Reda FA, Rivas A, Noble JH, Dietrich MS, Dawant BM, Labadie RF. Comparison of cochlear implant relevant anatomy in children versus adults. Otol Neurotol 2012; 33:328-34. [PMID: 22377644 PMCID: PMC3321365 DOI: 10.1097/mao.0b013e318245cc9f] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
HYPOTHESIS To test whether there are significant differences in pediatric and adult temporal bone anatomy as related to cochlear implant (CI) surgery. BACKGROUND Surgeons rely upon anatomic landmarks including the round window (RW) and facial recess (FR) to place CI electrodes within the scala tympani. Anecdotally, clinicians report differences in the orientation of such structures in children versus adults. METHODS Institutional review board approval was obtained. High-resolution computed tomographic scans of 24 pediatric patients (46 ears) and 20 adult patients (40 ears) were evaluated using software consisting of a model-based segmentation algorithm that automatically localizes and segments temporal bone anatomy (e.g., facial nerve, chorda tympani, external auditory canal [EAC], and cochlea). On these scans, angles between pertinent anatomic structures were manually delineated and measured, blinded to the age of the patient. RESULTS The EAC and FR were more parallel to the basal turn (BT) of the cochlea in children versus adults (∠EAC:BT 20.55 degrees versus 24.28 degrees, p = 0.003; ∠FR:BT 5.15 degrees versus 6.88 degrees, p = 0.009). The RW was more closely aligned with the FR in children versus adults (∠FR:RW 30.43 degrees versus 36.67 degrees, p = 0.009). Comparing the lateral portion of the EAC (using LatEAC as a marker) to the most medial portion (using ∠TM as a marker), the measured angle was 136.57 degrees in children and 172.20 degrees in adults (p < 0.001). CONCLUSION There are significant differences in the temporal bone anatomy of children versus adults pertinent to CI electrode insertion.
Affiliation(s)
- Theodore R McRackan
- Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee 37232-8606, USA
|
29
|
Chen X, Udupa JK, Bagci U, Zhuge Y, Yao J. Medical image segmentation by combining graph cuts and oriented active appearance models. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2012; 21:2035-46. [PMID: 22311862 PMCID: PMC5548181 DOI: 10.1109/tip.2012.2186306] [Citation(s) in RCA: 110] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
In this paper, we propose a novel method based on a strategic combination of the active appearance model (AAM), live wire (LW), and graph cuts (GCs) for abdominal 3-D organ segmentation. The proposed method consists of three main parts: model building, object recognition, and delineation. In the model building part, we construct the AAM and train the LW cost function and GC parameters. In the recognition part, a novel algorithm is proposed for improving the conventional AAM matching method, which effectively combines the AAM and LW methods, resulting in the oriented AAM (OAAM). A multiobject strategy is utilized to help in object initialization. We employ a pseudo-3-D initialization strategy and segment the organs slice by slice via a multiobject OAAM method. For the object delineation part, a 3-D shape-constrained GC method is proposed. The object shape generated from the initialization step is integrated into the GC cost computation, and an iterative GC-OAAM method is used for object delineation. The proposed method was tested in segmenting the liver, kidneys, and spleen on a clinical CT data set and also on the MICCAI 2007 Grand Challenge liver data set. The results show the following: 1) an overall segmentation accuracy of true positive volume fraction (TPVF) > 94.3% and a low false positive volume fraction (FPVF) can be achieved; 2) the initialization performance can be improved by combining the AAM and LW; 3) the multiobject strategy greatly facilitates initialization; 4) compared with the traditional 3-D AAM method, the pseudo-3-D OAAM method achieves comparable performance while running 12 times faster; and 5) the performance of the proposed method is comparable to state-of-the-art liver segmentation algorithms. The executable version of the 3-D shape-constrained GC method with a user interface can be downloaded from http://xinjianchen.wordpress.com/research/.
Affiliation(s)
- Xinjian Chen
- Department of Radiology and Imaging Sciences, Clinical Center, National Institute of Health, Bethesda, MD 20814, USA.
|
30
|
Reda FA, Noble JH, Rivas A, McRackan TR, Labadie RF, Dawant BM. Automatic segmentation of the facial nerve and chorda tympani in pediatric CT scans. Med Phys 2011; 38:5590-600. [PMID: 21992377 DOI: 10.1118/1.3634048] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Cochlear implant surgery is used to implant an electrode array in the cochlea to treat hearing loss. The authors recently introduced a minimally invasive image-guided technique termed percutaneous cochlear implantation. This approach achieves access to the cochlea by drilling a single linear channel from the outer skull into the cochlea via the facial recess, a region bounded by the facial nerve and chorda tympani. To exploit existing methods for computing automatically safe drilling trajectories, the facial nerve and chorda tympani need to be segmented. The goal of this work is to automatically segment the facial nerve and chorda tympani in pediatric CT scans. METHODS The authors have proposed an automatic technique to achieve the segmentation task in adult patients that relies on statistical models of the structures. These models contain intensity and shape information along the central axes of both structures. In this work, the authors attempted to use the same method to segment the structures in pediatric scans. However, the authors learned that substantial differences exist between the anatomy of children and that of adults, which led to poor segmentation results when an adult model is used to segment a pediatric volume. Therefore, the authors built a new model for pediatric cases and used it to segment pediatric scans. Once this new model was built, the authors employed the same segmentation method used for adults with algorithm parameters that were optimized for pediatric anatomy. RESULTS A validation experiment was conducted on 10 CT scans in which manually segmented structures were compared to automatically segmented structures. The mean, standard deviation, median, and maximum segmentation errors were 0.23, 0.17, 0.18, and 1.27 mm, respectively. 
CONCLUSIONS The results indicate that accurate segmentation of the facial nerve and chorda tympani in pediatric scans is achievable, thus suggesting that safe drilling trajectories can also be computed automatically.
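Error statistics of the kind reported above (mean/max distance between automatic and manual delineations) can be computed from contour point sets. A minimal sketch with hypothetical point coordinates, not the validation code used in the study:

```python
import numpy as np

def segmentation_errors(auto_pts, manual_pts):
    """Mean and max of closest-point distances from automatic contour
    points to manual contour points (a simple surface-distance error)."""
    auto_pts = np.asarray(auto_pts, float)
    manual_pts = np.asarray(manual_pts, float)
    # Pairwise distances, then the nearest manual point for each automatic point.
    d = np.linalg.norm(auto_pts[:, None, :] - manual_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return nearest.mean(), nearest.max()

# Toy 3-D point sets, coordinates in mm.
auto = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]
manual = [[0.0, 0.0, 0.2], [1.0, 0.0, 0.0]]
mean_err, max_err = segmentation_errors(auto, manual)
print(round(mean_err, 2), round(max_err, 2))  # → 0.1 0.2
```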
Affiliation(s)
- Fitsum A Reda
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
|
31
|
A new approach for tubular structure modeling and segmentation using graph-based techniques. ACTA ACUST UNITED AC 2011. [PMID: 22003713 DOI: 10.1007/978-3-642-23626-6_38] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register]
Abstract
In this work, a new approach for tubular structure segmentation is presented. This approach consists of two parts: (1) automatic model construction from manually segmented exemplars and (2) segmentation of structures in unknown images using these models. The segmentation problem is solved by finding an optimal path in a high-dimensional graph. The graph is designed with novel structures that permit the incorporation of prior information from the model into the optimization process and account for several weaknesses of traditional graph-based approaches. The generality of the approach is demonstrated by testing it on four challenging segmentation tasks: the optic pathways, the facial nerve, the chorda tympani, and the carotid artery. In all four cases, excellent agreement between automatic and manual segmentations is achieved.
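The "optimal path in a graph" formulation above is, at its core, the classical shortest-path problem. A minimal Dijkstra sketch over a toy adjacency structure (illustrative node names and costs; the paper's high-dimensional graph construction and model-driven cost terms are not reproduced here):

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm. graph maps node -> [(neighbor, cost), ...];
    returns (total_cost, path) for the minimum-cost path."""
    pq = [(0.0, start, [start])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(pq, (cost + w, nbr, path + [nbr]))
    return float("inf"), []

# Toy graph: low edge costs along the "tubular structure", high cost off it.
g = {"a": [("b", 1.0), ("x", 5.0)], "b": [("c", 1.0)], "x": [("c", 1.0)]}
print(shortest_path(g, "a", "c"))  # → (2.0, ['a', 'b', 'c'])
```

In the segmentation setting, edge costs would be derived from image intensities and the learned tubular-structure model, so that the cheapest path tracks the structure's centerline.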
|
32
|
Deeley MA, Chen A, Datteri R, Noble JH, Cmelak AJ, Donnelly EF, Malcolm AW, Moretti L, Jaboin J, Niermann K, Yang ES, Yu DS, Yei F, Koyama T, Ding GX, Dawant BM. Comparison of manual and automatic segmentation methods for brain structures in the presence of space-occupying lesions: a multi-expert study. Phys Med Biol 2011; 56:4557-77. [PMID: 21725140 DOI: 10.1088/0031-9155/56/14/021] [Citation(s) in RCA: 86] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
The purpose of this work was to characterize expert variation in segmentation of intracranial structures pertinent to radiation therapy, and to assess a registration-driven atlas-based segmentation algorithm in that context. Eight experts were recruited to segment the brainstem, optic chiasm, optic nerves, and eyes, of 20 patients who underwent therapy for large space-occupying tumors. Performance variability was assessed through three geometric measures: volume, Dice similarity coefficient, and Euclidean distance. In addition, two simulated ground truth segmentations were calculated via the simultaneous truth and performance level estimation algorithm and a novel application of probability maps. The experts and automatic system were found to generate structures of similar volume, though the experts exhibited higher variation with respect to tubular structures. No difference was found between the mean Dice similarity coefficient (DSC) of the automatic and expert delineations as a group at a 5% significance level over all cases and organs. The larger structures of the brainstem and eyes exhibited mean DSC of approximately 0.8-0.9, whereas the tubular chiasm and nerves were lower, approximately 0.4-0.5. Similarly low DSCs have been reported previously without the context of several experts and patient volumes. This study, however, provides evidence that experts are similarly challenged. The average maximum distances (maximum inside, maximum outside) from a simulated ground truth ranged from (-4.3, +5.4) mm for the automatic system to (-3.9, +7.5) mm for the experts considered as a group. Over all the structures in a rank of true positive rates at a 2 mm threshold from the simulated ground truth, the automatic system ranked second of the nine raters. This work underscores the need for large scale studies utilizing statistically robust numbers of patients and experts in evaluating quality of automatic algorithms.
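The Dice similarity coefficient used throughout this comparison is computed directly from binary masks. A minimal sketch with hypothetical 1-D masks (real evaluations operate on 3-D label volumes):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Two "segmentations" overlapping in 2 voxels; each labels 3 voxels.
auto = [1, 1, 1, 0, 0]
manual = [0, 1, 1, 1, 0]
print(dice(auto, manual))  # → 2*2/(3+3) ≈ 0.667
```

The low DSC reported for the chiasm and optic nerves is consistent with this formula's behavior: for thin tubular structures the denominator is small, so a few voxels of disagreement sharply reduce the score.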
Affiliation(s)
- M A Deeley
- Department of Radiation Oncology, Vanderbilt University, Nashville, TN, USA.
|
33
|
Chen X, Udupa JK, Alavi A, Torigian DA. Automatic anatomy recognition via multiobject oriented active shape models. Med Phys 2010; 37:6390-401. [PMID: 21302796 PMCID: PMC3003721 DOI: 10.1118/1.3515751] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2010] [Revised: 10/15/2010] [Accepted: 10/25/2010] [Indexed: 11/07/2022] Open
Abstract
PURPOSE This paper studies the feasibility of developing an automatic anatomy recognition (AAR) system in clinical radiology and demonstrates its operation on clinical 2D images. METHODS The anatomy recognition method described here consists of two main components: (a) a multiobject generalization of the oriented active shape model (OASM) and (b) object recognition strategies. The OASM algorithm is generalized to multiple objects by including a model for each object and assigning a cost structure specific to each object in the spirit of live wire. The delineation of multiobject boundaries is done in MOASM via a three-level dynamic programming algorithm: the first level, at the pixel level, aims to find optimal oriented boundary segments between successive landmarks; the second level, at the landmark level, aims to find the optimal locations for the landmarks; and the third level, at the object level, aims to find the optimal arrangement of object boundaries over all objects. The object recognition strategy attempts to find the pose vector (consisting of translation, rotation, and scale components) for the multiobject model that yields the smallest total boundary cost for all objects. The delineation and recognition accuracies were evaluated separately utilizing routine clinical chest CT, abdominal CT, and foot MRI data sets. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF and FPVF). The recognition accuracy was assessed (1) in terms of the size of the space of the pose vectors for the model assembly that yielded high delineation accuracy, (2) as a function of the number of objects and the objects' distribution and size in the model, (3) in terms of the interdependence between delineation and recognition, and (4) in terms of the closeness of the optimum recognition result to the global optimum. 
RESULTS When multiple objects are included in the model, the delineation accuracy in terms of TPVF can be improved to 97%-98% with a low FPVF of 0.1%-0.2%. Typically, a recognition accuracy of ≥ 90% yielded a TPVF ≥ 95% and FPVF ≤ 0.5%. Over the three data sets and over all tested objects, in 97% of the cases, the optimal solutions found by the proposed method constituted the true global optimum. CONCLUSIONS The experimental results showed the feasibility and efficacy of the proposed automatic anatomy recognition system. Increasing the number of objects in the model can significantly improve both recognition and delineation accuracy. A more spread-out arrangement of objects in the model can lead to improved recognition and delineation accuracy. Including larger objects in the model also improved recognition and delineation. The proposed method almost always finds globally optimum solutions.
Affiliation(s)
- Xinjian Chen
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Room 1C515, Building 10, Bethesda, Maryland 20892-1182, USA
|