51
Walluscheck S, Canalini L, Strohm H, Diekmann S, Klein J, Heldmann S. MR-CT multi-atlas registration guided by fully automated brain structure segmentation with CNNs. Int J Comput Assist Radiol Surg 2023; 18:483-491. [PMID: 36334164] [PMCID: PMC9939492] [DOI: 10.1007/s11548-022-02786-x]
Abstract
PURPOSE Computed tomography (CT) is widely used to identify anomalies in brain tissues because their localization is important for diagnosis and therapy planning. Due to the insufficient soft tissue contrast of CT, dividing the brain into anatomically meaningful regions is challenging and is commonly done with magnetic resonance imaging (MRI). METHODS We propose a multi-atlas registration approach to propagate anatomical information from a standard MRI brain atlas to CT scans. This translation will enable detailed automated reporting of brain CT exams. We utilize masks of the lateral ventricles and the brain volume of CT images as adjuvant input to guide the registration process. After first testing the registration with manual annotations, we verify that convolutional neural networks (CNNs) are a reliable solution for automatically segmenting structures to enhance the registration process. RESULTS The registration method obtains mean Dice values of 0.92 and 0.99 in brain ventricles and parenchyma on 22 healthy test cases when using manually segmented structures as guidance. When guiding with automatically segmented structures, the mean Dice values are 0.87 and 0.98, respectively. CONCLUSION Our registration approach is a fully automated solution to register MRI atlas images to CT scans and thus obtain detailed anatomical information. The proposed CNN segmentation method can be used to obtain masks of the ventricles and brain volume which guide the registration.
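The Dice values reported in entries like this one quantify voxel overlap between two binary segmentation masks. A minimal sketch of the metric (function name and toy masks are illustrative, not taken from any cited implementation):

```python
import numpy as np

def dice_score(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(2.0 * np.logical_and(a, b).sum() / denom)

# Toy 2D example: two partially overlapping squares
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True   # 16 voxels
b = np.zeros((10, 10), dtype=bool); b[4:8, 4:8] = True   # 16 voxels
print(round(dice_score(a, b), 3))  # 2*4/(16+16) = 0.25
```

The same formula applies unchanged to 3D volumes, which is how the per-structure values (e.g. 0.92 for ventricles) are computed.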
Affiliation(s)
- Sina Walluscheck
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Luca Canalini
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Hannah Strohm
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Susanne Diekmann
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Jan Klein
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Stefan Heldmann
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
52
Bustamante M, Viola F, Engvall J, Carlhäll C, Ebbers T. Automatic Time-Resolved Cardiovascular Segmentation of 4D Flow MRI Using Deep Learning. J Magn Reson Imaging 2023; 57:191-203. [PMID: 35506525] [PMCID: PMC10946960] [DOI: 10.1002/jmri.28221]
Abstract
BACKGROUND Segmenting the whole heart over the cardiac cycle in 4D flow MRI is a challenging and time-consuming process, as there is considerable motion and limited contrast between blood and tissue. PURPOSE To develop and evaluate a deep learning-based segmentation method to automatically segment the cardiac chambers and great thoracic vessels from 4D flow MRI. STUDY TYPE Retrospective. SUBJECTS A total of 205 subjects, including 40 healthy volunteers and 165 patients with a variety of cardiac disorders were included. Data were randomly divided into training (n = 144), validation (n = 20), and testing (n = 41) sets. FIELD STRENGTH/SEQUENCE A 3 T/time-resolved velocity encoded 3D gradient echo sequence (4D flow MRI). ASSESSMENT A 3D neural network based on the U-net architecture was trained to segment the four cardiac chambers, aorta, and pulmonary artery. The segmentations generated were compared to manually corrected atlas-based segmentations. End-diastolic (ED) and end-systolic (ES) volumes of the four cardiac chambers were calculated for both segmentations. STATISTICAL TESTS Dice score, Hausdorff distance, average surface distance, sensitivity, precision, and miss rate were used to measure segmentation accuracy. Bland-Altman analysis was used to evaluate agreement between volumetric parameters. RESULTS The following evaluation metrics were computed: mean Dice score (0.908 ± 0.023) (mean ± SD), Hausdorff distance (1.253 ± 0.293 mm), average surface distance (0.466 ± 0.136 mm), sensitivity (0.907 ± 0.032), precision (0.913 ± 0.028), and miss rate (0.093 ± 0.032). Bland-Altman analyses showed good agreement between volumetric parameters for all chambers. Limits of agreement as percentage of mean chamber volume (LoA%), left ventricular: 9.3%, 13.5%, left atrial: 12.4%, 16.9%, right ventricular: 9.9%, 15.6%, and right atrial: 18.7%, 14.4%; for ED and ES, respectively. 
DATA CONCLUSION The addition of this technique to the 4D flow MRI assessment pipeline could expedite and improve the utility of this type of acquisition in the clinical setting. EVIDENCE LEVEL 4 TECHNICAL EFFICACY: Stage 1.
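The Bland-Altman analysis above derives a bias and limits of agreement (LoA) from paired volume measurements. A hedged sketch of the conventional computation, mean difference ± 1.96 SD (the volumes below are made up for illustration):

```python
import numpy as np

def bland_altman_limits(x: np.ndarray, y: np.ndarray):
    """Return (bias, lower LoA, upper LoA) for paired measurements,
    using the conventional mean difference +/- 1.96 * SD."""
    d = np.asarray(x, float) - np.asarray(y, float)
    bias = d.mean()
    sd = d.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# e.g. manual vs automated end-diastolic volumes (mL, invented numbers)
manual = np.array([150.0, 120.0, 180.0, 160.0, 140.0])
auto = np.array([148.0, 125.0, 175.0, 158.0, 145.0])
bias, lo, hi = bland_altman_limits(manual, auto)
print(f"bias={bias:.1f} mL, LoA=[{lo:.1f}, {hi:.1f}] mL")
```

Dividing the LoA width by the mean chamber volume yields the LoA% figures quoted in the abstract.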
Affiliation(s)
- Mariana Bustamante
- Division of Diagnostics and Specialist Medicine, Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
- Federica Viola
- Division of Diagnostics and Specialist Medicine, Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Jan Engvall
- Division of Diagnostics and Specialist Medicine, Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Department of Clinical Physiology in Linköping, Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Carl‐Johan Carlhäll
- Division of Diagnostics and Specialist Medicine, Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
- Department of Clinical Physiology in Linköping, Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Tino Ebbers
- Division of Diagnostics and Specialist Medicine, Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
53
VilasBoas-Ribeiro I, Franckena M, van Rhoon GC, Hernández-Tamames JA, Paulides MM. Using MRI to measure position and anatomy changes and assess their impact on the accuracy of hyperthermia treatment planning for cervical cancer. Int J Hyperthermia 2022; 40:2151648. [PMID: 36535922] [DOI: 10.1080/02656736.2022.2151648]
Abstract
PURPOSE We studied the differences between planning and treatment position, their impact on the accuracy of hyperthermia treatment planning (HTP) predictions, and the relevance of including the true treatment anatomy and position in HTP based on magnetic resonance (MR) images. MATERIALS AND METHODS All volunteers were scanned with an MR-compatible hyperthermia device, including a filled waterbolus, to replicate the treatment setup. In the planning setup, the volunteers were scanned without the device to reproduce the imaging used in current HTP. First, we used rigid registration to investigate patient position displacements between the planning and treatment setups. Second, we performed HTP for the planning anatomy at both positions and for the treatment-mimicking anatomy to study the effects of positioning and anatomy on the quality of the simulated hyperthermia treatment. Treatment quality was evaluated using SAR-based parameters. RESULTS We found an average displacement of 2 cm between planning and treatment positions. These displacements caused average absolute differences of ∼12% in TC25 and 10.4%-15.9% in THQ. Furthermore, including the accurate treatment position and anatomy in treatment planning led to an improvement of 2% in TC25 and 4.6%-10.6% in THQ. CONCLUSIONS This study showed that precise patient position and anatomy are relevant since they affect the accuracy of HTP predictions. Most of the improvement in accuracy comes from implementing the correct position of the patient in the applicator. Hence, our study provides a clear incentive to accurately match the patient position in HTP with the actual treatment.
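The 2 cm average displacement above comes from rigid registration between the planning and treatment setups. As a generic illustration (not the authors' pipeline), the displacement a rigid transform induces on a point cloud can be computed as:

```python
import numpy as np

def mean_displacement(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> float:
    """Mean Euclidean displacement (in the points' units) induced by the
    rigid transform x -> R @ x + t on a point cloud."""
    moved = points @ R.T + t
    return float(np.linalg.norm(moved - points, axis=1).mean())

# A pure 2 cm translation along z with identity rotation; the points are
# random stand-ins for anatomy surface samples (units: cm)
pts = np.random.default_rng(0).normal(size=(100, 3))
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])
print(mean_displacement(pts, R, t))  # 2.0
```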
Affiliation(s)
- Iva VilasBoas-Ribeiro
- Department of Radiotherapy, Erasmus MC Cancer Institute, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Martine Franckena
- Department of Radiotherapy, Erasmus MC Cancer Institute, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Gerard C van Rhoon
- Department of Radiotherapy, Erasmus MC Cancer Institute, University Medical Center Rotterdam, Rotterdam, The Netherlands; Department of Applied Radiation and Isotopes, Reactor Institute Delft, Delft University of Technology, Delft, The Netherlands
- Juan A Hernández-Tamames
- Department of Radiology and Nuclear Medicine, Erasmus MC Cancer Institute, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Margarethus M Paulides
- Department of Radiotherapy, Erasmus MC Cancer Institute, University Medical Center Rotterdam, Rotterdam, The Netherlands; Care and Cure research lab (EM-4C&C) of the Electromagnetics Group, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
54
Zhang C, Porto A, Rolfe S, Kocatulum A, Maga AM. Automated landmarking via multiple templates. PLoS One 2022; 17:e0278035. [PMID: 36454982] [PMCID: PMC9714854] [DOI: 10.1371/journal.pone.0278035]
Abstract
Manually collecting landmarks for quantifying complex morphological phenotypes can be laborious and subject to intra- and interobserver errors. Automated landmarking methods offer efficiency and consistency, but most fall short on highly variable samples due to the bias introduced by the use of a single template. We introduce a fast and open-source automated landmarking pipeline (MALPACA) that utilizes multiple templates to accommodate large-scale variations. We also introduce a K-means method of choosing the templates that can be used in conjunction with MALPACA when no prior information for selecting templates is available. Our results confirm that MALPACA significantly outperforms single-template methods in landmarking both single- and multi-species samples. K-means-based template selection also avoids choosing the worst set of templates, compared to random template selection. We further offer an example of a post hoc quality check for each individual template for further refinement. In summary, MALPACA is an efficient and reproducible method that can accommodate large morphological variability, such as that commonly found in evolutionary studies. To support the research community, we have developed open-source and user-friendly software tools for performing K-means multi-template selection and MALPACA.
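The K-means template selection described above amounts to clustering the specimens and taking the sample nearest each centroid as a template. A self-contained sketch with a basic k-means (deterministic farthest-point initialization; this is an illustration of the idea, not MALPACA's actual code):

```python
import numpy as np

def kmeans_templates(X: np.ndarray, k: int, n_iter: int = 50):
    """Pick k 'template' samples: run a basic k-means over the rows of X,
    then return the index of the sample closest to each final centroid."""
    # deterministic farthest-point initialization
    centroids = [X[0]]
    for _ in range(k - 1):
        dists = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[dists.argmax()])
    centroids = np.array(centroids)
    for _ in range(n_iter):  # Lloyd iterations
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return sorted({int(d[:, j].argmin()) for j in range(k)})

# Two well-separated groups of "specimens" -> one template from each group
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 5)), rng.normal(5.0, 0.1, (20, 5))])
templates = kmeans_templates(X, k=2)
print(templates)
```

In practice the rows of X would be flattened shape descriptors of each specimen; the returned indices identify the specimens to use as landmarking templates.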
Affiliation(s)
- Chi Zhang
- Center for Development Biology and Regenerative Medicine, Seattle Children’s Research Institute, Seattle, Washington, United States of America
- Arthur Porto
- Department of Biological Sciences, Louisiana State University, Baton Rouge, Louisiana, United States of America
- Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana, United States of America
- Sara Rolfe
- Center for Development Biology and Regenerative Medicine, Seattle Children’s Research Institute, Seattle, Washington, United States of America
- Friday Harbor Laboratories, University of Washington, San Juan Island, Washington, United States of America
- Altan Kocatulum
- Alfred University, Alfred, New York, United States of America
- A. Murat Maga
- Center for Development Biology and Regenerative Medicine, Seattle Children’s Research Institute, Seattle, Washington, United States of America
- Division of Craniofacial Medicine, Department of Pediatrics, University of Washington, Seattle, Washington, United States of America
55
Chadoulos CG, Tsaopoulos DE, Moustakidis S, Tsakiridis NL, Theocharis JB. A novel multi-atlas segmentation approach under the semi-supervised learning framework: Application to knee cartilage segmentation. Comput Methods Programs Biomed 2022; 227:107208. [PMID: 36384059] [DOI: 10.1016/j.cmpb.2022.107208]
Abstract
BACKGROUND AND OBJECTIVE Multi-atlas segmentation techniques, which rely on an atlas library comprised of training images labeled by an expert, have proven their effectiveness in many automatic segmentation applications. However, the use of exhaustive patch libraries combined with voxel-wise labeling incurs a large computational cost in terms of memory requirements and execution time. METHODS To confront this shortcoming, we propose a novel two-stage multi-atlas approach designed under the Semi-Supervised Learning (SSL) framework. The main properties of our method are as follows. First, instead of the voxel-wise labeling approach, the labeling of target voxels is accomplished by exploiting the spectral content of globally sampled datasets from the target image, along with their spatially correspondent data collected from the atlases. Following SSL, voxel classification is boosted by incorporating unlabeled data from the target image in addition to the labeled data from the atlas library. Our scheme constructively integrates fruitful concepts, including sparse reconstructions of voxels from linear neighborhoods, HOG feature descriptors of patches/regions, and label propagation via sparse graph constructions. Segmentation of the target image is carried out in two stages: stage 1 focuses on the sampling and labeling of global data, while stage 2 undertakes the same tasks for the out-of-sample data. Finally, we propose different graph-based methods for the labeling of global data and extend these methods to deal with the out-of-sample voxels. RESULTS A thorough experimental investigation was conducted on 76 subjects provided by the publicly accessible Osteoarthritis Initiative (OAI) repository. Comparative results and statistical analysis demonstrate that the suggested methodology exhibits superior segmentation performance compared to existing patch-based methods across all evaluation metrics (DSC: 88.89%, Precision: 89.86%, Recall: 88.12%), while requiring a considerably reduced computational load (>70% reduction in average execution time with respect to other patch-based methods). In addition, our approach compares favorably against non-patch-based and deep learning methods in terms of accuracy (on the 3-class problem). A final experiment on a 5-class setting of the problem demonstrates that our approach achieves performance comparable to existing state-of-the-art knee cartilage segmentation methods (DSC: 88.22% and 85.84% for femoral and tibial cartilage, respectively).
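The graph-based label propagation mentioned above belongs to a well-known family of SSL methods. As a hedged, generic illustration of the idea (dense Gaussian affinities on toy 2D data, not the paper's sparse graph constructions over voxel features):

```python
import numpy as np

def propagate_labels(X, y, alpha=0.9, sigma=0.5, n_iter=200):
    """Graph-based semi-supervised label propagation: build a Gaussian
    affinity graph over all samples, symmetrically normalize it, then
    iterate F <- alpha * S @ F + (1 - alpha) * Y.  y uses -1 for unlabeled."""
    n = len(X)
    classes = sorted(c for c in set(y) if c >= 0)
    Y = np.zeros((n, len(classes)))
    for i, c in enumerate(y):
        if c >= 0:
            Y[i, classes.index(c)] = 1.0
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    Dinv = 1.0 / np.sqrt(W.sum(1))
    S = W * Dinv[:, None] * Dinv[None, :]   # D^{-1/2} W D^{-1/2}
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y
    return np.array(classes)[F.argmax(1)]

# Two blobs with one labeled point each; unlabeled points inherit the
# label of their blob through the graph
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.2, (15, 2)), rng.normal(3.0, 0.2, (15, 2))])
y = np.full(30, -1); y[0] = 0; y[15] = 1
pred = propagate_labels(X, y)
print(pred)
```

In the multi-atlas setting the "labeled" rows would come from atlas voxels and the "unlabeled" rows from the target image.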
Affiliation(s)
- Christos G Chadoulos
- Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, 54124, Greece
- Dimitrios E Tsaopoulos
- Institute for Bio-Economy and Agri-Technology, Centre for Research and Technology Hellas, Volos, 38333, Greece
- Nikolaos L Tsakiridis
- Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, 54124, Greece
- John B Theocharis
- Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, 54124, Greece
56
Barzegar Z, Jamzad M. An Efficient Optimization Approach for Glioma Tumor Segmentation in Brain MRI. J Digit Imaging 2022; 35:1634-1647. [PMID: 35995900] [PMCID: PMC9712883] [DOI: 10.1007/s10278-022-00655-2]
Abstract
Glioma is an aggressive type of cancer that develops in the brain or spinal cord. Due to the many differences in its shape and appearance, accurately segmenting glioma, identifying all parts of the tumor and its surrounding cancerous tissues, is a challenging task. In recent research, combining multi-atlas segmentation with machine learning methods has provided robust and accurate results by learning from annotated atlas datasets. To overcome the limited information available to atlas-based segmentation and the long training phase of learning methods, we propose a semi-supervised unified framework for multi-label segmentation that formulates the problem as a Markov Random Field (MRF) energy optimization on a parametric graph. To evaluate the proposed framework, we apply it to the publicly available BRATS datasets, including low- and high-grade glioma tumors. Experimental results indicate competitive performance compared to state-of-the-art methods. Compared with the top-ranked methods, the proposed framework obtains the best Dice score for segmenting the "whole tumor" (WT), "tumor core" (TC) and "enhancing active tumor" (ET) regions. The achieved accuracy, measured by the mean Dice score, is 94%. The motivation for using an MRF graph is to map the segmentation problem to an optimization model in a graphical environment: by defining a suitable graph structure and optimal constraints and flows in the continuous max-flow model, the segmentation is performed precisely.
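As a toy illustration of the MRF energy being minimized here (unary data costs plus a Potts smoothness prior), the sketch below uses greedy iterated conditional modes on a 2D grid; this is a deliberately simple stand-in for the paper's continuous max-flow solver:

```python
import numpy as np

def icm_denoise(labels, unary, beta=1.0, n_sweeps=5):
    """Minimize a Potts-model MRF energy
        E = sum_i unary[i, l_i] + beta * sum_{i~j} [l_i != l_j]
    on a 2D grid with iterated conditional modes: repeatedly set each
    pixel to the label with lowest local cost given its 4-neighbours."""
    h, w, k = unary.shape
    lab = labels.copy()
    for _ in range(n_sweeps):
        for i in range(h):
            for j in range(w):
                costs = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        costs += beta * (np.arange(k) != lab[ni, nj])
                lab[i, j] = costs.argmin()
    return lab

# Noisy 2-label image: unary costs favor the true label; two pixels flipped
true = np.zeros((8, 8), dtype=int); true[:, 4:] = 1
unary = np.zeros((8, 8, 2))
unary[true == 0, 1] = 2.0   # cost of choosing label 1 where truth is 0
unary[true == 1, 0] = 2.0
noisy = true.copy(); noisy[2, 2] = 1; noisy[5, 6] = 0
print((icm_denoise(noisy, unary, beta=1.0) == true).all())  # True
```

ICM only finds a local minimum; graph-cut and continuous max-flow solvers, as used in the paper, reach globally better optima for this class of energies.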
Affiliation(s)
- Zeynab Barzegar
- Present Address: Sharif University of Technology, Tehran, Iran
- Mansour Jamzad
- Present Address: Sharif University of Technology, Tehran, Iran
57
Leary D, Basran PS. The role of artificial intelligence in veterinary radiation oncology. Vet Radiol Ultrasound 2022; 63 Suppl 1:903-912. [PMID: 36514233] [DOI: 10.1111/vru.13162]
Abstract
Veterinary radiation oncology regularly deploys sophisticated contouring, image registration, and treatment planning optimization software for patient care. Over the past decade, advances in computing power and the rapid development of neural networks, open-source software packages, and data science have resulted in new research and clinical applications of artificial intelligence (AI) systems in radiation oncology. These technologies differ from conventional software in their level of complexity and their ability to learn from representative and local data. We provide clinical and research examples of AI in human radiation oncology and their potential applications in veterinary medicine throughout the patient's care path: treatment simulation, deformable registration, auto-segmentation, automated treatment planning and plan selection, quality assurance, adaptive radiotherapy, and outcomes modeling. These technologies have the potential to offer significant time and cost savings in the veterinary setting; however, since their range of usefulness has not been well studied or understood, care must be taken when adopting AI technologies in clinical practice. Over the next several years, practical and realizable applications of AI in veterinary radiation oncology include automated segmentation of normal tissues and tumor volumes, deformable registration, multi-criteria plan optimization, and adaptive radiotherapy. Keys to achieving success in adopting AI in veterinary radiation oncology include: establishing "truth data"; data harmonization; multi-institutional data and collaborations; standardized dose reporting and taxonomy; adopting an open-access philosophy for data collection and curation; open-source algorithm development; and transparent, platform-independent code development.
Affiliation(s)
- Del Leary
- Department of Environment and Radiological Health Sciences, College of Veterinary Medicine and Biomedical Sciences, Colorado State University, Fort Collins, Colorado, USA
- Parminder S Basran
- Department of Clinical Sciences, College of Veterinary Medicine, Cornell University, Ithaca, New York, USA
58
Ren M, Dey N, Styner MA, Botteron KN, Gerig G. Local Spatiotemporal Representation Learning for Longitudinally-consistent Neuroimage Analysis. Adv Neural Inf Process Syst 2022; 35:13541-13556. [PMID: 37614415] [PMCID: PMC10445502]
Abstract
Recent self-supervised advances in medical computer vision exploit the global and local anatomical self-similarity for pretraining prior to downstream tasks such as segmentation. However, current methods assume i.i.d. image acquisition, which is invalid in clinical study designs where follow-up longitudinal scans track subject-specific temporal changes. Further, existing self-supervised methods for medically-relevant image-to-image architectures exploit only spatial or temporal self-similarity and do so via a loss applied only at a single image-scale, with naive multi-scale spatiotemporal extensions collapsing to degenerate solutions. To these ends, this paper makes two contributions: (1) It presents a local and multi-scale spatiotemporal representation learning method for image-to-image architectures trained on longitudinal images. It exploits the spatiotemporal self-similarity of learned multi-scale intra-subject image features for pretraining and develops several feature-wise regularizations that avoid degenerate representations; (2) During finetuning, it proposes a surprisingly simple self-supervised segmentation consistency regularization to exploit intra-subject correlation. Benchmarked across various segmentation tasks, the proposed framework outperforms both well-tuned randomly-initialized baselines and current self-supervised techniques designed for both i.i.d. and longitudinal datasets. These improvements are demonstrated across both longitudinal neurodegenerative adult MRI and developing infant brain MRI and yield both higher performance and longitudinal consistency.
59
Huang K, Huang S, Chen G, Li X, Li S, Liang Y, Gao Y. An end-to-end multi-task system of automatic lesion detection and anatomical localization in whole-body bone scintigraphy by deep learning. Bioinformatics 2022; 39:6842323. [PMID: 36416135] [PMCID: PMC9805554] [DOI: 10.1093/bioinformatics/btac753]
Abstract
SUMMARY Limited by spatial resolution and visual contrast, bone scintigraphy interpretation is susceptible to subjective factors, which considerably affects the accuracy and repeatability of lesion detection and anatomical localization. In this work, we design and implement an end-to-end multi-task deep learning model to perform automatic lesion detection and anatomical localization in whole-body bone scintigraphy. A total of 617 whole-body bone scintigraphy cases, including anterior and posterior views, were retrospectively analyzed. The proposed semi-supervised model consists of two task flows. The first, the lesion segmentation flow, receives image patches and is trained in a supervised way. The other, the skeleton segmentation flow, is trained on as few as five labeled images in conjunction with the multi-atlas approach, in a semi-supervised way. The two flows are joined at their encoder layers so that each flow can capture a more generalized distribution of the sample space and extract more abstract deep features. The experimental results show that the architecture achieves the highest precision in the finest bone segmentation task in both anterior and posterior images of whole-body scintigraphy. Such an end-to-end approach, requiring very little manual annotation, is well suited for algorithm deployment. Moreover, the proposed approach reliably balances unsupervised label construction and supervised learning, providing useful insight for weakly labeled image analysis. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
Affiliation(s)
- Guojing Chen
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518037, China
- Xue Li
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518037, China
- Shawn Li
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518037, China
- Ying Liang
- To whom correspondence should be addressed.
- Yi Gao
- To whom correspondence should be addressed.
60
Jönsson H, Ekström S, Strand R, Pedersen MA, Molin D, Ahlström H, Kullberg J. An image registration method for voxel-wise analysis of whole-body oncological PET-CT. Sci Rep 2022; 12:18768. [PMID: 36335130] [PMCID: PMC9637131] [DOI: 10.1038/s41598-022-23361-z]
Abstract
Whole-body positron emission tomography-computed tomography (PET-CT) imaging in oncology provides comprehensive information on each patient's disease status. However, image interpretation of volumetric data is a complex and time-consuming task. In this work, an image registration method targeted towards computer-aided voxel-wise analysis of whole-body PET-CT data was developed. The method used both CT images and tissue segmentation masks in parallel to spatially align images step by step. To evaluate its performance, a set of baseline PET-CT images of 131 classical Hodgkin lymphoma (cHL) patients and longitudinal image series of 135 head and neck cancer (HNC) patients were registered between and within subjects according to the proposed method. Results showed that major organs and anatomical structures were generally registered correctly. Whole-body inverse consistency vector and intensity magnitude errors were on average less than 5 mm and 45 Hounsfield units, respectively, in both registration tasks. Registration times were feasible, and the nearly automatic pipeline enabled efficient image processing. Metabolic tumor volumes of the cHL patients and registration-derived therapy-related tissue volume changes of the HNC patients mapped to template spaces confirmed proof of concept. In conclusion, the method established a robust point correspondence and enabled quantitative visualization of group-wise image features at the voxel level.
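The inverse consistency error reported above measures how far composing the forward and backward transforms strays from the identity. A minimal sketch using analytic transforms (the actual method operates on dense deformation fields; names here are illustrative):

```python
import numpy as np

def inverse_consistency_error(forward, backward, points: np.ndarray) -> float:
    """Mean ||backward(forward(x)) - x|| over sample points; zero when
    the two transforms are exact inverses of each other."""
    mapped = backward(forward(points))
    return float(np.linalg.norm(mapped - points, axis=1).mean())

# A translation and a slightly mismatched inverse: 0.2 units of inconsistency
fwd = lambda p: p + np.array([2.0, 0.0, 0.0])
bwd = lambda p: p - np.array([1.8, 0.0, 0.0])
pts = np.random.default_rng(0).uniform(-50.0, 50.0, (200, 3))
print(inverse_consistency_error(fwd, bwd, pts))  # 0.2
```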
Affiliation(s)
- Hanna Jönsson
- Section of Radiology, Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden
- Simon Ekström
- Section of Radiology, Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden
- Robin Strand
- Section of Radiology, Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden
- Department of Information Technology, Uppsala University, 751 05, Uppsala, Sweden
- Mette A Pedersen
- Department of Nuclear Medicine & PET-Centre, Aarhus University Hospital, 8200, Aarhus N, Denmark
- Daniel Molin
- Department of Immunology, Genetics and Pathology, Uppsala University, 751 85, Uppsala, Sweden
- Håkan Ahlström
- Section of Radiology, Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden
- Antaros Medical AB, BioVenture Hub, 431 53, Mölndal, Sweden
- Joel Kullberg
- Section of Radiology, Department of Surgical Sciences, Uppsala University, 751 85, Uppsala, Sweden
- Antaros Medical AB, BioVenture Hub, 431 53, Mölndal, Sweden
61
Zhu F, Wang S, Li D, Li Q. Similarity attention-based CNN for robust 3D medical image registration. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.104403]
62
Casamitjana A, Iglesias JE. High-resolution atlasing and segmentation of the subcortex: Review and perspective on challenges and opportunities created by machine learning. Neuroimage 2022; 263:119616. [PMID: 36084858] [PMCID: PMC11534291] [DOI: 10.1016/j.neuroimage.2022.119616]
Abstract
This paper reviews almost three decades of work on atlasing and segmentation methods for subcortical structures in human brain MRI. In writing this survey, we have three distinct aims. First, to document the evolution of digital subcortical atlases of the human brain, from the early MRI templates published in the nineties, to the complex multi-modal atlases at the subregion level that are available today. Second, to provide a detailed record of related efforts in the automated segmentation front, from earlier atlas-based methods to modern machine learning approaches. And third, to present a perspective on the future of high-resolution atlasing and segmentation of subcortical structures in in vivo human brain MRI, including open challenges and opportunities created by recent developments in machine learning.
Affiliation(s)
- Adrià Casamitjana
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK.
- Juan Eugenio Iglesias
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK; Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Boston, USA
63
Kihara S, Koike Y, Takegawa H, Anetai Y, Nakamura S, Tanigawa N, Koizumi M. Clinical target volume segmentation based on gross tumor volume using deep learning for head and neck cancer treatment. Med Dosim 2022; 48:20-24. [PMID: 36273950 DOI: 10.1016/j.meddos.2022.09.004]
Abstract
Accurate clinical target volume (CTV) delineation is important for head and neck intensity-modulated radiation therapy. However, delineation is time-consuming and susceptible to interobserver variability (IOV). Based on a manual contouring process commonly used in clinical practice, we developed a deep learning (DL)-based method to delineate a low-risk CTV with computed tomography (CT) and gross tumor volume (GTV) input and compared it with a CT-only input. A total of 310 patients with oropharynx cancer were randomly divided into the training set (250) and test set (60). The low-risk CTV and primary GTV contours were used to generate label data for the input and ground truth. A 3D U-Net with a two-channel input of CT and GTV (U-NetGTV) was proposed and its performance was compared with a U-Net with only CT input (U-NetCT). The Dice similarity coefficient (DSC) and average Hausdorff distance (AHD) were evaluated. The time required to predict the CTV was 0.86 s per patient. U-NetGTV showed a significantly higher mean DSC value than U-NetCT (0.80 ± 0.03 and 0.76 ± 0.05) and a significantly lower mean AHD value (3.0 ± 0.5 mm vs 3.5 ± 0.7 mm). Compared to the existing DL method with only CT input, the proposed GTV-based segmentation using DL showed a more precise low-risk CTV segmentation for head and neck cancer. Our findings suggest that the proposed method could reduce the contouring time of a low-risk CTV, allowing the standardization of target delineations for head and neck cancer.
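The DSC reported above measures voxel overlap between an automated contour and the ground truth. A minimal sketch of the metric itself (a generic illustration with invented toy masks, not the authors' implementation):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 2D "CTV" masks: the prediction overlaps the ground truth partially.
gt = np.zeros((4, 4), dtype=int)
gt[1:3, 1:3] = 1          # 4 voxels
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:4] = 1        # 6 voxels, 4 of them shared with gt
print(dice(gt, pred))     # 2*4 / (4+6) = 0.8
```

A DSC of 1.0 means identical contours; the paper's 0.80 vs 0.76 difference is a gain in this overlap fraction.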
Affiliation(s)
- Sayaka Kihara
- Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, 1-7 Yamadaoka, Suita, Osaka, 565-0871, Japan
- Yuhei Koike
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan.
- Hideki Takegawa
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
- Yusuke Anetai
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
- Satoaki Nakamura
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
- Noboru Tanigawa
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
- Masahiko Koizumi
- Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, 1-7 Yamadaoka, Suita, Osaka, 565-0871, Japan
64
Schevenels K, Michiels L, Lemmens R, De Smedt B, Zink I, Vandermosten M. The role of the hippocampus in statistical learning and language recovery in persons with post stroke aphasia. Neuroimage Clin 2022; 36:103243. [PMID: 36306718 PMCID: PMC9668653 DOI: 10.1016/j.nicl.2022.103243]
Abstract
Although several studies have aimed for accurate predictions of language recovery in post stroke aphasia, individual language outcomes remain hard to predict. Large-scale prediction models are built using data from patients mainly in the chronic phase after stroke, although it is clinically more relevant to consider data from the acute phase. Previous research has mainly focused on deficits, i.e., behavioral deficits or specific brain damage, rather than compensatory mechanisms, i.e., intact cognitive skills or undamaged brain regions. One such unexplored brain region that might support language (re)learning in aphasia is the hippocampus, a region that has commonly been associated with an individual's learning potential, including statistical learning. This refers to a set of mechanisms upon which we rely heavily in daily life to learn a range of regularities across cognitive domains. Against this background, thirty-three patients with aphasia (22 males and 11 females, M = 69.76 years, SD = 10.57 years) were followed for 1 year in the acute (1-2 weeks), subacute (3-6 months) and chronic phase (9-12 months) post stroke. We evaluated the unique predictive value of early structural hippocampal measures for short-term and long-term language outcomes (measured by the ANELT). In addition, we investigated whether statistical learning abilities were intact in patients with aphasia using three different tasks: an auditory-linguistic and visual task based on the computation of transitional probabilities and a visuomotor serial reaction time task. Finally, we examined the association of individuals' statistical learning potential with acute measures of hippocampal gray and white matter. Using Bayesian statistics, we found moderate evidence for the contribution of left hippocampal gray matter in the acute phase to the prediction of long-term language outcomes, over and above information on the lesion and the initial language deficit (measured by the ScreeLing). 
Non-linguistic statistical learning in patients with aphasia, measured in the subacute phase, was intact at the group level compared to 23 healthy older controls (8 males and 15 females, M = 74.09 years, SD = 6.76 years). Visuomotor statistical learning correlated with acute hippocampal gray and white matter. These findings reveal that particularly left hippocampal gray matter in the acute phase is a potential marker of language recovery after stroke, possibly through its statistical learning ability.
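The statistical learning tasks described above rest on transitional probabilities, i.e., P(next element | current element) estimated from a stream of stimuli. A minimal sketch of that computation (the syllable stream is an invented toy example, not the study's stimuli):

```python
from collections import Counter

def transitional_probabilities(seq):
    """Estimate P(next == b | current == a) from bigram counts in a stream."""
    pair_counts = Counter(zip(seq, seq[1:]))
    first_counts = Counter(seq[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Toy syllable stream: within-"word" transitions (tu->pi, pi->ro) are fully
# predictive; the transition at a word boundary (ro->go vs ro->da) is not.
stream = ["tu", "pi", "ro", "go", "la", "bu",
          "tu", "pi", "ro", "da", "ku",
          "tu", "pi", "ro"]
tp = transitional_probabilities(stream)
print(tp[("tu", "pi")])  # 1.0
print(tp[("ro", "go")])  # 0.5
```

Learners are said to have extracted the regularity when they become sensitive to the drop in transitional probability at boundaries.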
Affiliation(s)
- Klara Schevenels
- Research Group Experimental Oto-Rhino-Laryngology, Department of Neurosciences, KU Leuven, Onderwijs en Navorsing 2 (O&N2), Herestraat 49 box 721, Leuven 3000, Belgium; Leuven Brain Institute, KU Leuven, Onderwijs en Navorsing 5 (O&N 5), Herestraat 49 box 1020, Leuven 3000, Belgium.
- Laura Michiels
- Department of Neurology, University Hospitals Leuven, Herestraat 49, Leuven 3000, Belgium; Research Group Experimental Neurology, Department of Neurosciences, KU Leuven, Herestraat 49 box 7003, Leuven 3000, Belgium; Laboratory of Neurobiology, VIB Center for Brain & Disease Research, Onderwijs en Navorsing 5 (O&N 5), Herestraat 49 box 602, Leuven 3000, Belgium; Leuven Brain Institute, KU Leuven, Onderwijs en Navorsing 5 (O&N 5), Herestraat 49 box 1020, Leuven 3000, Belgium.
- Robin Lemmens
- Department of Neurology, University Hospitals Leuven, Herestraat 49, Leuven 3000, Belgium; Research Group Experimental Neurology, Department of Neurosciences, KU Leuven, Herestraat 49 box 7003, Leuven 3000, Belgium; Laboratory of Neurobiology, VIB Center for Brain & Disease Research, Onderwijs en Navorsing 5 (O&N 5), Herestraat 49 box 602, Leuven 3000, Belgium; Leuven Brain Institute, KU Leuven, Onderwijs en Navorsing 5 (O&N 5), Herestraat 49 box 1020, Leuven 3000, Belgium.
- Bert De Smedt
- Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Leopold Vanderkelenstraat 32 box 3765, Leuven 3000, Belgium; Leuven Brain Institute, KU Leuven, Onderwijs en Navorsing 5 (O&N 5), Herestraat 49 box 1020, Leuven 3000, Belgium.
- Inge Zink
- Research Group Experimental Oto-Rhino-Laryngology, Department of Neurosciences, KU Leuven, Onderwijs en Navorsing 2 (O&N2), Herestraat 49 box 721, Leuven 3000, Belgium; Leuven Brain Institute, KU Leuven, Onderwijs en Navorsing 5 (O&N 5), Herestraat 49 box 1020, Leuven 3000, Belgium.
- Maaike Vandermosten
- Research Group Experimental Oto-Rhino-Laryngology, Department of Neurosciences, KU Leuven, Onderwijs en Navorsing 2 (O&N2), Herestraat 49 box 721, Leuven 3000, Belgium; Leuven Brain Institute, KU Leuven, Onderwijs en Navorsing 5 (O&N 5), Herestraat 49 box 1020, Leuven 3000, Belgium.
65
Makrogiannis S, Okorie A, Di Iorio A, Bandinelli S, Ferrucci L. Multi-atlas segmentation and quantification of muscle, bone and subcutaneous adipose tissue in the lower leg using peripheral quantitative computed tomography. Front Physiol 2022; 13:951368. [PMID: 36311235 PMCID: PMC9614313 DOI: 10.3389/fphys.2022.951368]
Abstract
Accurate and reproducible tissue identification is essential for understanding structural and functional changes that may occur naturally with aging, because of a chronic disease, or in response to intervention therapies. Peripheral quantitative computed tomography (pQCT) is regularly employed for body composition studies, especially for the structural and material properties of bone. Furthermore, pQCT acquisition requires a low radiation dose and the scanner is compact and portable. However, pQCT scans have limited spatial resolution and moderate SNR, and image quality is frequently degraded by involuntary subject movement during acquisition. These limitations may compromise the accuracy of tissue quantification and emphasize the need for automated and robust quantification methods. We propose a tissue identification and quantification methodology that addresses image quality limitations and artifacts, with particular attention to subject movement. We introduce a multi-atlas image segmentation (MAIS) framework for semantic segmentation of hard and soft tissues in pQCT scans at multiple levels of the lower leg. We describe the stages of statistical atlas generation, deformable registration and multi-tissue classifier fusion. We evaluated the performance of our methodology using multiple deformable registration approaches against reference tissue masks, and evaluated conventional model-based segmentation against the same reference data to facilitate comparisons. We studied the effect of subject movement on tissue segmentation quality. We also applied the top-performing method to a larger out-of-sample dataset and report the quantification results. The results show that multi-atlas image segmentation with diffeomorphic deformation and probabilistic label fusion produces very good quality across all tissues, even for scans with significant quality degradation. The application of our technique to the larger dataset reveals trends of age-related body composition changes that are consistent with the literature. Because of its robustness to subject motion artifacts, our MAIS methodology enables analysis of a larger number of scans than conventional state-of-the-art methods. Automated analysis of both soft and hard tissues in pQCT is another contribution of this work.
Affiliation(s)
- Sokratis Makrogiannis
- Math Imaging and Visual Computing Lab, Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, DE, United States
- *Correspondence: Sokratis Makrogiannis,
- Azubuike Okorie
- Math Imaging and Visual Computing Lab, Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, DE, United States
- Angelo Di Iorio
- Antalgic Mini-invasive and Rehab-Outpatients Unit, Department of Innovative Technologies in Medicine & Dentistry, University “G.d’Annunzio”, Chieti-Pescara, Italy
- Luigi Ferrucci
- National Institute on Aging, National Institutes of Health, Baltimore, MD, United States
66
Ma J, Zhang Y, Gu S, Zhu C, Ge C, Zhang Y, An X, Wang C, Wang Q, Liu X, Cao S, Zhang Q, Liu S, Wang Y, Li Y, He J, Yang X. AbdomenCT-1K: Is Abdominal Organ Segmentation a Solved Problem? IEEE Trans Pattern Anal Mach Intell 2022; 44:6695-6714. [PMID: 34314356 DOI: 10.1109/tpami.2021.3100536]
Abstract
With the unprecedented developments in deep learning, automatic segmentation of main abdominal organs seems to be a solved problem as state-of-the-art (SOTA) methods have achieved comparable results with inter-rater variability on many benchmark datasets. However, most of the existing abdominal datasets only contain single-center, single-phase, single-vendor, or single-disease cases, and it is unclear whether the excellent performance can generalize on diverse datasets. This paper presents a large and diverse abdominal CT organ segmentation dataset, termed AbdomenCT-1K, with more than 1000 (1K) CT scans from 12 medical centers, including multi-phase, multi-vendor, and multi-disease cases. Furthermore, we conduct a large-scale study for liver, kidney, spleen, and pancreas segmentation and reveal the unsolved segmentation problems of the SOTA methods, such as the limited generalization ability on distinct medical centers, phases, and unseen diseases. To advance the unsolved problems, we further build four organ segmentation benchmarks for fully supervised, semi-supervised, weakly supervised, and continual learning, which are currently challenging and active research topics. Accordingly, we develop a simple and effective method for each benchmark, which can be used as out-of-the-box methods and strong baselines. We believe the AbdomenCT-1K dataset will promote future in-depth research towards clinical applicable abdominal organ segmentation methods.
67
Atzeni A, Peter L, Robinson E, Blackburn E, Althonayan J, Alexander DC, Iglesias JE. Deep active learning for suggestive segmentation of biomedical image stacks via optimisation of Dice scores and traced boundary length. Med Image Anal 2022; 81:102549. [PMID: 36113320 PMCID: PMC11605667 DOI: 10.1016/j.media.2022.102549]
Abstract
Manual segmentation of stacks of 2D biomedical images (e.g., histology) is a time-consuming task which can be sped up with semi-automated techniques. In this article, we present a suggestive deep active learning framework that seeks to minimise the annotation effort required to achieve a certain level of accuracy when labelling such a stack. The framework suggests, at every iteration, a specific region of interest (ROI) in one of the images for manual delineation. Using a deep segmentation neural network and a mixed cross-entropy loss function, we propose a principled strategy to estimate class probabilities for the whole stack, conditioned on heterogeneous partial segmentations of the 2D images, as well as on weak supervision in the form of image indices that bound each ROI. Using the estimated probabilities, we propose a novel active learning criterion based on predictions for the estimated segmentation performance and delineation effort, measured with average Dice scores and total delineated boundary length, respectively, rather than common surrogates such as entropy. The query strategy suggests the ROI that is expected to maximise the ratio between performance and effort, while considering the adjacency of structures that may have already been labelled - which decrease the length of the boundary to trace. We provide quantitative results on synthetically deformed MRI scans and real histological data, showing that our framework can reduce labelling effort by up to 60-70% without compromising accuracy.
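The query strategy described above ranks candidate ROIs by expected benefit per unit of annotation effort. A schematic sketch of that selection rule (the candidate ROIs and their predicted Dice gains and boundary lengths are invented placeholders, not values from the paper):

```python
# Hypothetical candidates: each ROI carries a model-predicted gain in
# average Dice score if it were annotated, and a predicted boundary
# length to trace (shorter when adjacent structures are already labelled).
candidates = [
    {"roi": "slice12-roiA", "dice_gain": 0.030, "boundary_mm": 150.0},
    {"roi": "slice40-roiB", "dice_gain": 0.050, "boundary_mm": 900.0},
    {"roi": "slice13-roiA", "dice_gain": 0.020, "boundary_mm": 60.0},
]

def suggest(cands):
    """Pick the ROI maximising expected performance gain per traced mm."""
    return max(cands, key=lambda c: c["dice_gain"] / c["boundary_mm"])

print(suggest(candidates)["roi"])  # slice13-roiA
```

Note that the largest absolute Dice gain (slice40-roiB) loses to a cheaper ROI once effort is taken into account, which is the point of the ratio criterion.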
Affiliation(s)
- Alessia Atzeni
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK.
- Loic Peter
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK
- Eleanor Robinson
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK
- Emily Blackburn
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK
- Juri Althonayan
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK
- Daniel C Alexander
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK
- Juan Eugenio Iglesias
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK; Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Boston, USA
68
Deep learning models and traditional automated techniques for brain tumor segmentation in MRI: a review. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10245-x]
69
Advances and Innovations in Ablative Head and Neck Oncologic Surgery Using Mixed Reality Technologies in Personalized Medicine. J Clin Med 2022; 11:4767. [PMID: 36013006 PMCID: PMC9410374 DOI: 10.3390/jcm11164767]
Abstract
The benefit of computer-assisted planning in head and neck ablative and reconstructive surgery has been extensively documented over the last decade. This approach has been proven to offer a more secure surgical procedure. In the treatment of cancer of the head and neck, computer-assisted surgery can be used to visualize and estimate the location and extent of the tumor mass. Nowadays, some software tools even allow the visualization of the structures of interest in a mixed reality environment. However, the precise integration of mixed reality systems into a daily clinical routine is still a challenge. To date, this technology is not yet fully integrated into clinical settings such as the tumor board, surgical planning for head and neck tumors, or medical and surgical education. As a consequence, the handling of these systems is still of an experimental nature, and decision-making based on the presented data is not yet widely used. The aim of this paper is to present a novel, user-friendly 3D planning and mixed reality software and its potential application for ablative and reconstructive head and neck surgery.
70
Li Y, Qiu Z, Fan X, Liu X, Chang EIC, Xu Y. Integrated 3D flow-based multi-atlas brain structure segmentation. PLoS One 2022; 17:e0270339. [PMID: 35969596 PMCID: PMC9377636 DOI: 10.1371/journal.pone.0270339]
Abstract
MRI brain structure segmentation plays an important role in neuroimaging studies. Existing methods either spend much CPU time, require considerable annotated data, or fail in segmenting volumes with large deformation. In this paper, we develop a novel multi-atlas-based algorithm for 3D MRI brain structure segmentation. It consists of three modules: registration, atlas selection and label fusion. Both registration and label fusion leverage an integrated flow based on grayscale and SIFT features. We introduce an effective and efficient strategy for atlas selection by employing the accompanying energy generated in the registration step. A 3D sequential belief propagation method and a 3D coarse-to-fine flow matching approach are developed in both registration and label fusion modules. The proposed method is evaluated on five public datasets. The results show that it has the best performance in almost all the settings compared to competitive methods such as ANTs, Elastix, Learning to Rank and Joint Label Fusion. Moreover, our registration method is more than 7 times as efficient as that of ANTs SyN, while our label transfer method is 18 times faster than Joint Label Fusion in CPU time. The results on the ADNI dataset demonstrate that our method is applicable to image pairs that require a significant transformation in registration. The performance on a composite dataset suggests that our method succeeds in a cross-modality manner. The results of this study show that the integrated 3D flow-based method is effective and efficient for brain structure segmentation. It also demonstrates the power of SIFT features, multi-atlas segmentation and classical machine learning algorithms for a medical image analysis task. The experimental results on public datasets show the proposed method's potential for general applicability in various brain structures and settings.
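The label fusion module combines the label maps propagated from each registered atlas into a single segmentation. The paper's fusion is flow-based; as a baseline illustration of the idea, a per-voxel majority vote over toy "registered atlas" label maps can be sketched as (invented 2x2 maps, not the authors' method):

```python
import numpy as np

def majority_vote(warped_labels):
    """Fuse propagated atlas label maps by per-voxel majority vote.

    Ties resolve to the smallest label index (argmax convention).
    """
    stacked = np.stack(warped_labels)          # (n_atlases, *volume_shape)
    n_labels = int(stacked.max()) + 1
    # Count, for each label, how many atlases voted for it at each voxel.
    votes = np.stack([(stacked == lab).sum(axis=0) for lab in range(n_labels)])
    return votes.argmax(axis=0)

# Three toy label maps that a registration step might have produced.
a1 = np.array([[0, 1], [2, 2]])
a2 = np.array([[0, 1], [1, 2]])
a3 = np.array([[0, 0], [2, 2]])
fused = majority_vote([a1, a2, a3])
print(fused)  # [[0 1]
              #  [2 2]]
```

Weighted schemes such as Joint Label Fusion replace the uniform vote with per-atlas, per-voxel weights derived from image similarity.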
Affiliation(s)
- Yeshu Li
- School of Computer Science and Engineering, Beihang University, Beijing, China
- Ziming Qiu
- Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, United States of America
- Xingyu Fan
- Bioengineering College, Chongqing University, Chongqing, China
- Xianglong Liu
- School of Computer Science and Engineering, Beihang University, Beijing, China
- Yan Xu
- School of Biological Science and Medical Engineering, State Key Laboratory of Software Development Environment, Key Laboratory of Biomechanics, Mechanobiology of Ministry of Education and Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China
- Microsoft Research, Beijing, China
71
Wang J, Chen Y, Xie H, Luo L, Tang Q. Evaluation of auto-segmentation for EBRT planning structures using deep learning-based workflow on cervical cancer. Sci Rep 2022; 12:13650. [PMID: 35953516 PMCID: PMC9372087 DOI: 10.1038/s41598-022-18084-0]
Abstract
A deep learning (DL)-based approach aims to construct a full workflow solution for cervical cancer with external beam radiation therapy (EBRT) and brachytherapy (BT). The purpose of this study was to evaluate the accuracy of EBRT planning structures derived from DL-based auto-segmentation compared with standard manual delineation. An auto-segmentation model based on convolutional neural networks (CNNs) was developed to delineate clinical target volumes (CTVs) and organs at risk (OARs) in cervical cancer radiotherapy. A total of 300 retrospective patients from multiple cancer centers were used to train and validate the model, and 75 independent cases were selected as testing data. The accuracy of auto-segmented contours was evaluated using geometric and dosimetric metrics, including the Dice similarity coefficient (DSC), 95% Hausdorff distance (95%HD), Jaccard coefficient (JC) and dose-volume index (DVI). The correlation between geometric metrics and dosimetric differences was assessed by Spearman’s correlation analysis. The right and left kidneys, bladder, and right and left femoral heads showed superior geometric accuracy (DSC: 0.88–0.93; 95%HD: 1.03 mm–2.96 mm; JC: 0.78–0.88), and the Bland–Altman test showed dose agreement for these contours (P > 0.05) between the manual and DL-based methods. Wilcoxon’s signed-rank test indicated significant dosimetric differences in the CTV, spinal cord and pelvic bone (P < 0.001). A strong correlation between the mean dose of the pelvic bone and its 95%HD (R = 0.843, P < 0.001) was found in Spearman’s correlation analysis, while the remaining structures showed only weak links between dosimetric differences and the geometric metrics. The auto-segmentation achieved satisfactory agreement for most EBRT planning structures, although the clinical acceptability of the CTV remains a concern. DL-based auto-segmentation is an essential component of the cervical cancer workflow and can generate accurate contours.
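The 95%HD used in this evaluation is the 95th percentile of surface-to-surface distances, which is less sensitive to single outlier points than the maximum Hausdorff distance. A minimal sketch on point sets (a generic illustration with invented contour points, not the study's implementation):

```python
import numpy as np

def hd95(pts_a, pts_b):
    """Symmetric 95th-percentile Hausdorff distance between two point sets
    (e.g., contour/surface voxel coordinates in mm)."""
    pts_a = np.asarray(pts_a, dtype=float)
    pts_b = np.asarray(pts_b, dtype=float)
    # Pairwise Euclidean distances, then nearest-neighbour distance each way.
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)   # each point of A to its nearest point of B
    d_ba = d.min(axis=0)   # each point of B to its nearest point of A
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

# Two toy contours offset by 1 mm everywhere.
a = [(x, 0.0) for x in range(5)]
b = [(x, 1.0) for x in range(5)]
print(hd95(a, b))  # 1.0
```

In practice the point sets are extracted from the surfaces of the binary masks, so HD95 complements the purely overlap-based DSC and JC.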
Affiliation(s)
- Jiahao Wang
- Department of Radiation Oncology, Women's Hospital, School of Medicine, Zhejiang University, Hangzhou, 310006, Zhejiang, China
- Yuanyuan Chen
- Department of Radiation Oncology, Women's Hospital, School of Medicine, Zhejiang University, Hangzhou, 310006, Zhejiang, China
- Hongling Xie
- Department of Radiation Oncology, Women's Hospital, School of Medicine, Zhejiang University, Hangzhou, 310006, Zhejiang, China
- Lumeng Luo
- Department of Radiation Oncology, Women's Hospital, School of Medicine, Zhejiang University, Hangzhou, 310006, Zhejiang, China
- Qiu Tang
- Department of Radiation Oncology, Women's Hospital, School of Medicine, Zhejiang University, Hangzhou, 310006, Zhejiang, China.
72
Montenegro JTP, Seguin D, Duerden EG. Joint attention in infants at high familial risk for autism spectrum disorder and the association with thalamic and hippocampal macrostructure. Cereb Cortex Commun 2022; 3:tgac029. [PMID: 36072708 PMCID: PMC9441013 DOI: 10.1093/texcom/tgac029]
Abstract
Autism spectrum disorder (ASD) is a heritable neurodevelopmental disorder. Infants diagnosed with ASD can show impairments in spontaneous gaze-following and will seldom engage in joint attention (JA). The ability to initiate JA (IJA) can be more significantly impaired than the ability to respond to JA (RJA). In a longitudinal study, 101 infants who had a familial risk for ASD were enrolled (62% males). Participants completed magnetic resonance imaging scans at 4 or 6 months of age. Subcortical volumes (thalamus, hippocampus, amygdala, basal ganglia, ventral diencephalon, and cerebellum) were automatically extracted. Early gaze and JA behaviors were assessed with standardized measures. The majority of infants were IJA nonresponders (n = 93, 92%), and over half were RJA nonresponders (n = 50, 52%). In the nonresponder groups, models testing the association of subcortical volumes with later ASD diagnosis accounted for age, sex, and cerebral volumes. In the IJA nonresponder group, regression analysis showed that the left hippocampus (B = −0.009, aOR = 0.991, P = 0.025), the right thalamus (B = −0.016, aOR = 0.984, P = 0.026), and the left thalamus (B = 0.015, aOR = 1.015, P = 0.019) predicted later ASD diagnosis. Alterations in thalamic and hippocampal macrostructure in at-risk infants who do not engage in IJA may reflect an enhanced vulnerability and may be key predictors of later ASD development.
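A small consistency check on the regression figures quoted above: an adjusted odds ratio is the exponential of its logistic-regression coefficient, aOR = exp(B), so each reported (B, aOR) pair can be verified directly:

```python
import math

# (coefficient B, reported adjusted odds ratio) pairs from the abstract.
reported = [(-0.009, 0.991), (-0.016, 0.984), (0.015, 1.015)]

for b, aor in reported:
    # exp(B) is the multiplicative change in the odds of a later ASD
    # diagnosis per one-unit increase in the volume predictor, holding
    # the other covariates fixed.
    assert round(math.exp(b), 3) == aor
print("all reported aORs match exp(B)")
```

All three pairs round-trip, so the coefficients and odds ratios are internally consistent.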
Affiliation(s)
- Julia T P Montenegro
- Applied Psychology, Faculty of Education, Western University, Faculty of Education Building, 1137 Western Road, London, Ontario N6G1G7, Canada
- Diane Seguin
- Applied Psychology, Faculty of Education, Western University, Faculty of Education Building, 1137 Western Road, London, Ontario N6G1G7, Canada; Physiology & Pharmacology, Schulich School of Medicine and Dentistry, Western University, Medical Science Building, Room 216, 1151 Richmond St, London, Ontario N6A5C1, Canada
- Emma G Duerden
- Applied Psychology, Faculty of Education, Western University, Faculty of Education Building, 1137 Western Road, London, Ontario N6G1G7, Canada; Western Institute for Neuroscience, Western University, The Brain and Mind Institute, Western Interdisciplinary Research Building, Room 3190, 1151 Richmond St, London, Ontario N6A3K7, Canada; Biomedical Engineering, Faculty of Engineering, Western University, Amit Chakma Engineering Building, Room 2405, 1151 Richmond St, London, Ontario N6A3K7, Canada; Psychiatry, Schulich School of Medicine and Dentistry, University of Western Ontario, Parkwood Institute Mental Health Care Building, F4-430, London, Ontario N6C0A7, Canada
73
Motegi K, Miyaji N, Yamashita K, Koizumi M, Terauchi T. Comparison of skeletal segmentation by deep learning-based and atlas-based segmentation in prostate cancer patients. Ann Nucl Med 2022; 36:834-841. [PMID: 35773557 DOI: 10.1007/s12149-022-01763-3]
Abstract
OBJECTIVE We aimed to compare the segmentation accuracy of deep learning-based (VSBONE BSI) and atlas-based (BONENAVI) software developed to measure the bone scan index from skeletal segmentation. METHODS We retrospectively analyzed bone scans of 383 patients with prostate cancer. These patients were divided into two groups: 208 patients injected with 99mTc-hydroxymethylene diphosphonate were processed by VSBONE BSI, and 175 patients injected with 99mTc-methylene diphosphonate were processed by BONENAVI. Three observers classified the skeletal segmentations as either a "Match" or "Mismatch" in the following regions: the skull, cervical vertebrae, thoracic vertebrae, lumbar vertebrae, pelvis, sacrum, humerus, rib, sternum, clavicle, scapula, and femur. A segmentation error was recorded if two or more observers selected "Mismatch" in the same region. We calculated the segmentation error rate for each administration group and evaluated the presence of hot spots suspicious for bone metastases in "Mismatch" regions. Multivariate logistic regression analysis was used to determine the association between segmentation error and variables such as age, uptake time, total counts, extent of disease, and gamma camera. RESULTS "Mismatch" regions were more common in the long tubular bones for VSBONE BSI and in the pelvis and axial skeleton for BONENAVI. Segmentation error was observed in 49 cases (23.6%) with VSBONE BSI and 58 cases (33.1%) with BONENAVI. For VSBONE BSI, "Mismatch" regions tended to contain hot spots suspicious for bone metastases in patients with multiple bone metastases, and a higher extent of disease (odds ratio = 8.34) was associated with segmentation error in the multivariate logistic regression analysis. CONCLUSIONS VSBONE BSI has the potential for higher segmentation accuracy than BONENAVI. However, segmentation errors in VSBONE BSI occurred depending on the bone metastasis burden. Care is therefore needed when evaluating multiple bone metastases with VSBONE BSI.
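The multivariate logistic regression above yields odds ratios by exponentiating the fitted coefficients. A minimal, hedged sketch of that kind of analysis on purely synthetic data (the cohort size, error probabilities, and variable names are illustrative, not from the study):

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, iters=5000):
    """Logistic regression via gradient ascent on the mean log-likelihood.
    Returns coefficients with the intercept first."""
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

# Synthetic cohort: a binary "high extent of disease" flag raises the
# probability of a segmentation error (all numbers are made up).
rng = np.random.default_rng(0)
extent = rng.integers(0, 2, 1000).astype(float)
p_err = np.where(extent == 1, 0.55, 0.12)
error = (rng.random(1000) < p_err).astype(float)

w = fit_logistic(extent.reshape(-1, 1), error)
odds_ratio = float(np.exp(w[1]))  # OR for the extent-of-disease covariate
```

With these synthetic probabilities the true odds ratio is about 9, so the estimate lands in the same range as the OR = 8.34 reported above; a production analysis would use a vetted statistics package rather than this sketch.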
Affiliation(s)
- Kazuki Motegi
- Department of Nuclear Medicine, Cancer Institute Hospital, Japanese Foundation for Cancer Research, 3-8-31, Ariake, Koto-ku, Tokyo, 135-8550, Japan
- Noriaki Miyaji
- Department of Nuclear Medicine, Cancer Institute Hospital, Japanese Foundation for Cancer Research, 3-8-31, Ariake, Koto-ku, Tokyo, 135-8550, Japan
- Kosuke Yamashita
- Department of Nuclear Medicine, Cancer Institute Hospital, Japanese Foundation for Cancer Research, 3-8-31, Ariake, Koto-ku, Tokyo, 135-8550, Japan
- Graduate School of Health Sciences, Kumamoto University, 2-39-1, Kuroge, Chuo-ku, Kumamoto City, Kumamoto, 860-0862, Japan
- Mitsuru Koizumi
- Department of Nuclear Medicine, Cancer Institute Hospital, Japanese Foundation for Cancer Research, 3-8-31, Ariake, Koto-ku, Tokyo, 135-8550, Japan
- Takashi Terauchi
- Department of Nuclear Medicine, Cancer Institute Hospital, Japanese Foundation for Cancer Research, 3-8-31, Ariake, Koto-ku, Tokyo, 135-8550, Japan
74
Kaplan S, Perrone A, Alexopoulos D, Kenley JK, Barch DM, Buss C, Elison JT, Graham AM, Neil JJ, O'Connor TG, Rasmussen JM, Rosenberg MD, Rogers CE, Sotiras A, Fair DA, Smyser CD. Synthesizing pseudo-T2w images to recapture missing data in neonatal neuroimaging with applications in rs-fMRI. Neuroimage 2022; 253:119091. [PMID: 35288282 PMCID: PMC9127394 DOI: 10.1016/j.neuroimage.2022.119091]
Abstract
T1- and T2-weighted (T1w and T2w) images are essential for tissue classification and anatomical localization in magnetic resonance imaging (MRI) analyses. However, these anatomical data can be challenging to acquire in non-sedated neonatal cohorts, which are prone to high-amplitude movement and display lower tissue contrast than adults. As a result, one of these modalities may be missing, or of such poor quality that it cannot be used for accurate image processing, resulting in subject loss. While recent literature attempts to overcome these issues in adult populations using synthetic imaging approaches, the efficacy of these methods in pediatric populations and their impact on conventional MR analyses have not been evaluated. In this work, we present two novel methods to generate pseudo-T2w images: the first is based on deep learning and extends previous models to 3D imaging without requiring paired data; the second is based on nonlinear multi-atlas registration, providing a computationally lightweight alternative. We demonstrate the anatomical accuracy of pseudo-T2w images and their efficacy in existing MR processing pipelines in two independent neonatal cohorts. Critically, we show that implementing these pseudo-T2w methods in resting-state functional MRI analyses produces virtually identical functional connectivity results compared to those obtained from T2w images, confirming their utility in infant MRI studies for salvaging otherwise lost subject data.
Affiliation(s)
- Sydney Kaplan
- Department of Neurology, Washington University School of Medicine, St. Louis, MO, United States
- Anders Perrone
- Department of Pediatrics and the Masonic Institute for the Developing Brain, Institute of Child Development, University of Minnesota, Minneapolis, MN, United States; Department of Psychiatry, Oregon Health and Science University, Portland, OR, United States
- Dimitrios Alexopoulos
- Department of Neurology, Washington University School of Medicine, St. Louis, MO, United States
- Jeanette K Kenley
- Department of Neurology, Washington University School of Medicine, St. Louis, MO, United States
- Deanna M Barch
- Department of Radiology and Institute for Informatics, Washington University School of Medicine, St. Louis, MO, United States; Department of Psychological and Brain Sciences, Washington University School of Medicine, St. Louis, MO, United States; Department of Psychiatry, Washington University School of Medicine, St. Louis, MO, United States
- Claudia Buss
- Department of Pediatrics, University of California Irvine, Irvine, CA, United States; Department of Medical Psychology, Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Augustenburger Platz 1, 13353, Berlin, Germany
- Jed T Elison
- Department of Pediatrics and the Masonic Institute for the Developing Brain, Institute of Child Development, University of Minnesota, Minneapolis, MN, United States
- Alice M Graham
- Department of Psychiatry, Oregon Health and Science University, Portland, OR, United States
- Jeffrey J Neil
- Department of Neurology, Washington University School of Medicine, St. Louis, MO, United States; Department of Pediatrics, Washington University School of Medicine, St. Louis, MO, United States
- Thomas G O'Connor
- Department of Psychiatry, University of Rochester, Rochester, NY, United States
- Jerod M Rasmussen
- Department of Pediatrics, University of California Irvine, Irvine, CA, United States
- Monica D Rosenberg
- Department of Psychology, University of Chicago, Chicago, IL, United States
- Cynthia E Rogers
- Department of Pediatrics, Washington University School of Medicine, St. Louis, MO, United States; Department of Psychiatry, Washington University School of Medicine, St. Louis, MO, United States
- Aristeidis Sotiras
- Department of Radiology and Institute for Informatics, Washington University School of Medicine, St. Louis, MO, United States
- Damien A Fair
- Department of Pediatrics and the Masonic Institute for the Developing Brain, Institute of Child Development, University of Minnesota, Minneapolis, MN, United States
- Christopher D Smyser
- Department of Neurology, Washington University School of Medicine, St. Louis, MO, United States; Department of Radiology and Institute for Informatics, Washington University School of Medicine, St. Louis, MO, United States; Department of Pediatrics, Washington University School of Medicine, St. Louis, MO, United States
75
Rao D, Prakashini K, Singh R, Vijayananda J. Automated segmentation of the larynx on computed tomography images: a review. Biomed Eng Lett 2022; 12:175-183. [PMID: 35529346 PMCID: PMC9046475 DOI: 10.1007/s13534-022-00221-3]
Abstract
The larynx, or voice box, is a common site of head and neck cancers, yet automated segmentation of the larynx has received very little attention. Segmentation of organs is an essential step in cancer treatment planning. Computed tomography (CT) scans are routinely used to assess the extent of tumor spread in the head and neck, as they are fast to acquire and tolerant of some movement. This paper reviews automated detection and segmentation methods used for the larynx on CT images. Image registration and deep learning approaches to segmenting the laryngeal anatomy are compared, highlighting their strengths and shortcomings. A list of available annotated laryngeal CT datasets is compiled to encourage further research, and commercial software currently available for larynx contouring is briefly surveyed. We conclude that the lack of standardization of larynx boundaries and the complexity of this relatively small structure make automated segmentation of the larynx on CT images a challenge. Reliable computer-aided intervention in the contouring and segmentation process will help clinicians easily verify their findings and look for oversights in diagnosis. This review is useful for research that applies artificial intelligence to head and neck cancer, specifically work dealing with the segmentation of laryngeal anatomy. Supplementary Information The online version contains supplementary material available at 10.1007/s13534-022-00221-3.
Affiliation(s)
- Divya Rao
- Department of Information and Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Department of Otorhinolaryngology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal 576104, India
- Prakashini K
- Department of Radiodiagnosis and Imaging, Kasturba Medical College, Manipal Academy of Higher Education, Manipal 576104, India
- Rohit Singh
- Department of Otorhinolaryngology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal 576104, India
- Vijayananda J
- Data Science and Artificial Intelligence, Philips, Bangalore 560045, India
76
Panfilov E, Tiulpin A, Nieminen MT, Saarakkala S, Casula V. Deep learning-based segmentation of knee MRI for fully automatic subregional morphological assessment of cartilage tissues: Data from the Osteoarthritis Initiative. J Orthop Res 2022; 40:1113-1124. [PMID: 34324223 DOI: 10.1002/jor.25150]
Abstract
Morphological changes in knee cartilage subregions are valuable imaging-based biomarkers for understanding the progression of osteoarthritis, and they are typically detected from magnetic resonance imaging (MRI). So far, accurate segmentation of cartilage has been done manually. Deep learning approaches show high promise in automating the task; however, they lack clinically relevant evaluation. We introduce a fully automatic method for segmentation and subregional assessment of articular cartilage, and evaluate its predictive power in the context of radiographic osteoarthritis progression. Two datasets of 3D double-echo steady-state (DESS) MRI derived from the Osteoarthritis Initiative were used: the first with n = 88; the second with n = 600 and 0-/12-/24-month visits. Our method performed deep learning-based segmentation of knee cartilage tissues, their subregional division via multi-atlas registration, and extraction of subregional volume and thickness. The segmentation model was developed and assessed on the first dataset. Subsequently, on the second dataset, the morphological measurements from our method and the prior methods were analyzed for correlation and agreement and, eventually, for their power to discriminate radiographic osteoarthritis progression over 12 and 24 months, retrospectively. The segmentation model showed very high correlation (r > 0.934) and agreement (mean difference < 116 mm3) in volumetric measurements with the reference segmentations. Comparison of our method with manual segmentation yielded r = 0.845-0.973 and mean differences of 262-501 mm3 for weight-bearing cartilage volume, and r = 0.770-0.962 and mean differences of 0.513-1.138 mm for subregional cartilage thickness. With regard to osteoarthritis progression, our method found most of the significant associations identified using the manual segmentation method, for both 12- and 24-month subregional cartilage changes. The method may be effectively applied in osteoarthritis progression studies to extract cartilage-related imaging biomarkers.
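The correlation and agreement analysis above boils down to two statistics on paired measurements: a Pearson correlation and a mean difference (bias). A minimal sketch with hypothetical volumes (illustrative values, not the study's data):

```python
import numpy as np

def agreement(auto_vol, manual_vol):
    """Pearson correlation and mean difference (bias) between paired
    measurement series, as used to compare automated vs. manual
    cartilage volumes. Variable names are illustrative."""
    r = float(np.corrcoef(auto_vol, manual_vol)[0, 1])
    mean_diff = float(np.mean(auto_vol - manual_vol))
    return r, mean_diff

# Hypothetical paired cartilage volumes in mm^3
manual = np.array([1500.0, 1720.0, 1610.0, 1830.0, 1400.0])
auto = manual + np.array([40.0, -25.0, 10.0, 55.0, -30.0])

r, bias = agreement(auto, manual)  # r close to 1, bias = 10.0 mm^3
```

A full agreement analysis would typically add limits of agreement (Bland-Altman), but the two quantities above are the ones quoted in the abstract.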
Affiliation(s)
- Egor Panfilov
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- Aleksei Tiulpin
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- Department of Diagnostic Radiology, Oulu University Hospital, Oulu, Finland
- Ailean Technologies Oy, Oulu, Finland
- Miika T Nieminen
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- Department of Diagnostic Radiology, Oulu University Hospital, Oulu, Finland
- Simo Saarakkala
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- Victor Casula
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
77
Efficient Johnson-SB Mixture Model for Segmentation of CT Liver Image. J Healthc Eng 2022; 2022:5654424. [PMID: 35463693 PMCID: PMC9023182 DOI: 10.1155/2022/5654424]
Abstract
To overcome the problem that the traditional Gaussian mixture model (GMM) cannot adequately describe the skewed distribution of the gray-level histogram of a liver CT slice, we propose a novel segmentation method for liver CT images based on the Johnson-SB mixture model (JSBMM). The Johnson-SB model not only has a flexible asymmetrical distribution but also covers a variety of other distributions. In this article, the parameter optimization formulas for the JSBMM are derived by employing the expectation-maximization (EM) algorithm and maximum likelihood, and the implementation of the JSBMM-based segmentation algorithm is described in detail. To make better use of the skewness of the Johnson-SB distribution and improve segmentation accuracy, we divide the histogram into two parts and calculate a segmentation threshold for each part separately, a scheme we call JSBMM-TDH. By analyzing and comparing the segmentation thresholds obtained with different cluster numbers, we show that the threshold of JSBMM-TDH stabilizes as the cluster number increases, whereas that of the GMM is sensitive to the cluster number. The proposed JSBMM-TDH is applied to segment four randomly obtained abdominal CT image sequences, and its segmentation results and robustness are compared with those of the GMM. JSBMM-TDH is shown to achieve preferable segmentation results and better robustness than the GMM for the segmentation of liver CT images.
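The EM parameter updates referenced above follow the generic mixture-model recipe. As a sketch, here is a minimal EM fit of the two-component Gaussian mixture baseline (the GMM the paper compares against); a Johnson-SB variant would keep the same E-step/M-step skeleton but substitute the Johnson-SB density and its weighted maximum-likelihood updates, which are not reproduced here. All data below are synthetic.

```python
import numpy as np

def em_gmm_1d(x, iters=200):
    """EM for a two-component 1D Gaussian mixture fitted to intensity
    samples x. Returns mixture weights, means, and variances."""
    mu = np.percentile(x, [25.0, 75.0])  # deterministic initialization
    var = np.full(2, x.var())
    pi = np.full(2, 0.5)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2.0 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted ML updates of weights, means, variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Bimodal synthetic "gray-level histogram": two tissue classes
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(50.0, 5.0, 500), rng.normal(120.0, 10.0, 500)])
pi, mu, var = em_gmm_1d(x)  # mu recovers roughly (50, 120)
```

A segmentation threshold can then be placed where the two weighted component densities intersect, which is the quantity the JSBMM-TDH scheme computes per histogram part.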
78
Fan W, Sang Y, Zhou H, Xiao J, Fan Z, Ruan D. MRA-free intracranial vessel localization on MR vessel wall images. Sci Rep 2022; 12:6240. [PMID: 35422490 PMCID: PMC9010428 DOI: 10.1038/s41598-022-10256-2]
Abstract
Analysis of vessel morphology is important in assessing intracranial atherosclerotic disease (ICAD). Recently, magnetic resonance (MR) vessel wall imaging (VWI) has been introduced to image ICAD and characterize the morphology of atherosclerotic lesions. To perform quantitative analysis on VWI data automatically, an MR angiography (MRA) scan acquired in the same imaging session is typically used to localize the vessel segments of interest. However, MRA may be unavailable because the sequence is missing from, or fails in, a VWI protocol. This study investigates the feasibility of inferring the vessel location directly from VWI. We propose to synergize an atlas-based method, which preserves the general topology of the vessel structure, with a deep learning network operating in the motion field domain to correct the residual geometric error. Performance is quantified by examining the agreement between the vessel structures extracted from the pair-acquired, alignment-corrected angiogram and the estimated output, using a cross-validation scheme. Our proposed pipeline yields clinically feasible performance in localizing intracranial vessels, demonstrating the promise of performing vessel morphology analysis using VWI alone.
Affiliation(s)
- Weijia Fan
- Department of Physics, University of California, Los Angeles, Los Angeles, CA, USA
- Yudi Sang
- Department of Bioengineering, University of California, Los Angeles, Los Angeles, CA, USA
- Hanyue Zhou
- Department of Bioengineering, University of California, Los Angeles, Los Angeles, CA, USA
- Jiayu Xiao
- Department of Radiology, University of Southern California, Los Angeles, CA, USA
- Zhaoyang Fan
- Department of Radiology, University of Southern California, Los Angeles, CA, USA
- Department of Radiation Oncology, University of Southern California, Los Angeles, CA, USA
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA
- Dan Ruan
- Department of Bioengineering, University of California, Los Angeles, Los Angeles, CA, USA
- Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA, USA
79
Yang G, Dai Z, Zhang Y, Zhu L, Tan J, Chen Z, Zhang B, Cai C, He Q, Li F, Wang X, Yang W. Multiscale Local Enhancement Deep Convolutional Networks for the Automated 3D Segmentation of Gross Tumor Volumes in Nasopharyngeal Carcinoma: A Multi-Institutional Dataset Study. Front Oncol 2022; 12:827991. [PMID: 35387126 PMCID: PMC8979212 DOI: 10.3389/fonc.2022.827991]
Abstract
Purpose Accurate segmentation of the gross tumor volume (GTV) from computed tomography (CT) images is a prerequisite in radiotherapy for nasopharyngeal carcinoma (NPC). However, this task is very challenging due to the low contrast at the tumor boundary and the great variety of tumor sizes and morphologies across stages. Meanwhile, the data source also seriously affects the segmentation results. In this paper, we propose a novel three-dimensional (3D) automatic segmentation algorithm that adopts cascaded multiscale local enhancement of convolutional neural networks (CNNs), and we conduct experiments on multi-institutional datasets to address the above problems. Materials and Methods We retrospectively collected CT images of 257 NPC patients to test the performance of the proposed automatic segmentation model, and conducted experiments on two additional multi-institutional datasets. Our segmentation framework consists of three parts. First, the framework is based on a 3D Res-UNet backbone with strong segmentation performance. Second, a multiscale dilated convolution block enlarges the receptive field and focuses on the target area and boundary to improve segmentation. Finally, a central localization cascade model for local enhancement concentrates on the GTV region for fine segmentation to improve robustness. The Dice similarity coefficient (DSC), positive predictive value (PPV), sensitivity (SEN), average symmetric surface distance (ASSD), and 95% Hausdorff distance (HD95) are used as quantitative evaluation criteria. Results The experimental results show that, compared with other state-of-the-art methods, our modified 3D Res-UNet backbone achieves the best results in terms of DSC, PPV, ASSD, and HD95, reaching 74.49 ± 7.81%, 79.97 ± 13.90%, 1.49 ± 0.65 mm, and 5.06 ± 3.30 mm, respectively. Notably, the receptive field enhancement mechanism and cascade architecture contribute substantially to the stable output of accurate automatic segmentation results, which is critical for such an algorithm. The final DSC, SEN, ASSD, and HD95 values increase to 76.23 ± 6.45%, 79.14 ± 12.48%, 1.39 ± 5.44 mm, and 4.72 ± 3.04 mm. In addition, the outcomes of the multi-institution experiments demonstrate that our model is robust and generalizable and can achieve good performance through transfer learning. Conclusions The proposed algorithm can accurately segment NPC in CT images from multi-institutional datasets and thereby may improve and facilitate clinical applications.
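The overlap metrics listed above have simple closed forms on binary masks: DSC = 2·TP/(|P| + |G|), PPV = TP/|P|, and SEN = TP/|G|, where P and G are the predicted and ground-truth voxel sets. A small self-contained sketch on toy masks (not study data):

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Dice similarity coefficient (DSC), positive predictive value (PPV),
    and sensitivity (SEN) for binary segmentation masks of any shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = float(np.logical_and(pred, gt).sum())
    dsc = 2.0 * tp / (pred.sum() + gt.sum())
    ppv = tp / pred.sum()
    sen = tp / gt.sum()
    return dsc, ppv, sen

# Toy 2D masks: two 16-voxel squares offset by one row (12 voxels overlap)
gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 2:6] = True

dsc, ppv, sen = overlap_metrics(pred, gt)  # each equals 0.75 here
```

The surface-distance metrics (ASSD, HD95) additionally require extracting boundary voxels and computing point-to-set distances, so they are omitted from this sketch.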
Affiliation(s)
- Geng Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Zhenhui Dai
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Yiwen Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
- Lin Zhu
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Junwen Tan
- Department of Oncology, The Fourth Affiliated Hospital of Guangxi Medical University, Liuzhou, China
- Zefeiyun Chen
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
- Bailin Zhang
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Chunya Cai
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Qiang He
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Fei Li
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Xuetao Wang
- Department of Radiation Therapy, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
80
Rivière D, Leprince Y, Labra N, Vindas N, Foubet O, Cagna B, Loh KK, Hopkins W, Balzeau A, Mancip M, Lebenberg J, Cointepas Y, Coulon O, Mangin JF. Browsing Multiple Subjects When the Atlas Adaptation Cannot Be Achieved via a Warping Strategy. Front Neuroinform 2022; 16:803934. [PMID: 35311005 PMCID: PMC8928460 DOI: 10.3389/fninf.2022.803934]
Abstract
Brain mapping studies often need to identify brain structures or functional circuits in a set of individual brains. To this end, multiple atlases have been published to represent such structures based on different modalities, subject sets, and techniques. The mainstream approach to exploiting these atlases consists in spatially deforming each individual dataset onto a given atlas using dense deformation fields, which supposes the existence of a continuous mapping between atlases and individuals. However, this continuity is not always verified, and this "iconic" approach has limits. We present in this study an alternative, complementary, "structural" approach, which consists in extracting structures from the individual data and comparing them without deformation. A "structural atlas" is thus a collection of annotated individual datasets with a common structure nomenclature. It may be used to characterize structure shape variability across individuals or species, or to train machine learning systems. This study presents Anatomist, a powerful structural 3D visualization software application dedicated to building, exploring, and editing structural atlases involving a large number of subjects. It was developed primarily to decipher cortical folding variability; cortical sulci vary enormously in both size and shape, and some may be missing or have various topologies, which makes iconic approaches inefficient for studying them. We therefore had to build structural atlases for cortical sulci and use them to train sulcus identification algorithms. Anatomist can display data from multiple subjects in multiple views, supports all kinds of neuroimaging data, including compound structural object graphs, handles arbitrary chains of coordinate transformations between datasets, and offers many display features. It is designed as a programming library in both C++ and Python, and may be extended or used to build dedicated custom applications. Its generic design allows the display and structural features developed to explore the variability of the cortical folding pattern to be reused in other applications, for instance to browse axonal fiber bundles, deep nuclei, functional activations, or other kinds of cortical parcellations. Multimodal, multi-individual, and inter-species display is supported, and adaptations to large-scale screen walls have been developed. These very original features make it a unique viewer for structural atlas browsing.
Affiliation(s)
- Denis Rivière
- Université Paris-Saclay, CEA, CNRS UMR 9027, Baobab, NeuroSpin, Gif-sur-Yvette, France
- Yann Leprince
- Université Paris-Saclay, CEA, CNRS UMR 9027, Baobab, NeuroSpin, Gif-sur-Yvette, France
- Nicole Labra
- Université Paris-Saclay, CEA, CNRS UMR 9027, Baobab, NeuroSpin, Gif-sur-Yvette, France
- PaleoFED Team, UMR 7194, CNRS, Département Homme et Environnement, Muséum National d’Histoire Naturelle, Musée de l’Homme, Paris, France
- Nabil Vindas
- Université Paris-Saclay, CEA, CNRS UMR 9027, Baobab, NeuroSpin, Gif-sur-Yvette, France
- Ophélie Foubet
- Université Paris-Saclay, CEA, CNRS UMR 9027, Baobab, NeuroSpin, Gif-sur-Yvette, France
- Bastien Cagna
- Université Paris-Saclay, CEA, CNRS UMR 9027, Baobab, NeuroSpin, Gif-sur-Yvette, France
- Kep Kee Loh
- INT - Institut de Neurosciences de la Timone, Aix-Marseille Univ, CNRS UMR 7289, Marseille, France
- William Hopkins
- Department of Comparative Medicine, University of Texas MD Anderson Cancer Center, Bastrop, TX, United States
- Antoine Balzeau
- PaleoFED Team, UMR 7194, CNRS, Département Homme et Environnement, Muséum National d’Histoire Naturelle, Musée de l’Homme, Paris, France
- Department of African Zoology, Royal Museum for Central Africa, Tervuren, Belgium
- Martial Mancip
- Maison de la Simulation, CNRS, CEA Saclay, Gif-sur-Yvette, France
- Jessica Lebenberg
- Université Paris-Saclay, CEA, CNRS UMR 9027, Baobab, NeuroSpin, Gif-sur-Yvette, France
- Université de Paris, INSERM UMR 1141, NeuroDiderot, Paris, France
- Yann Cointepas
- Université Paris-Saclay, CEA, CNRS UMR 9027, Baobab, NeuroSpin, Gif-sur-Yvette, France
- Olivier Coulon
- INT - Institut de Neurosciences de la Timone, Aix-Marseille Univ, CNRS UMR 7289, Marseille, France
- Jean-François Mangin
- Université Paris-Saclay, CEA, CNRS UMR 9027, Baobab, NeuroSpin, Gif-sur-Yvette, France
81
Perosa V, Oltmer J, Munting LP, Freeze WM, Auger CA, Scherlek AA, van der Kouwe AJ, Iglesias JE, Atzeni A, Bacskai BJ, Viswanathan A, Frosch MP, Greenberg SM, van Veluw SJ. Perivascular space dilation is associated with vascular amyloid-β accumulation in the overlying cortex. Acta Neuropathol 2022; 143:331-348. [PMID: 34928427 PMCID: PMC9047512 DOI: 10.1007/s00401-021-02393-1]
Abstract
Perivascular spaces (PVS) are compartments surrounding cerebral blood vessels that become visible on MRI when enlarged. Enlarged PVS (EPVS) are commonly seen in patients with cerebral small vessel disease (CSVD) and have been suggested to reflect dysfunctional perivascular clearance of soluble waste products from the brain. In this study, we investigated histopathological correlates of EPVS and how they relate to vascular amyloid-β (Aβ) in cerebral amyloid angiopathy (CAA), a form of CSVD that commonly co-exists with Alzheimer's disease (AD) pathology. We used ex vivo MRI, semi-automatic segmentation, and validated deep-learning-based models to quantify EPVS and associated histopathological abnormalities. Severity of MRI-visible PVS during life was significantly associated with severity of MRI-visible PVS on ex vivo MRI of formalin-fixed intact hemispheres and corresponded with PVS enlargement on histopathology in the same areas. EPVS were located mainly around the white matter portion of perforating cortical arterioles, and their burden was associated with CAA severity in the overlying cortex. Furthermore, in individually affected vessels with an EPVS, we observed markedly reduced smooth muscle cells and increased vascular Aβ accumulation extending into the white matter. Overall, these findings are consistent with the notion that EPVS reflect impaired outward flow along arterioles and have implications for our understanding of perivascular clearance mechanisms, which play an important role in the pathophysiology of CAA and AD.
Affiliation(s)
- Valentina Perosa
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, J. Philip Kistler Stroke Research Center, Cambridge Str. 175, Suite 300, Boston, MA, 02114, USA
- Department of Neurology, Otto-von-Guericke University, Magdeburg, Germany
- German Center for Neurodegenerative Diseases (DZNE), Magdeburg, Germany
- Jan Oltmer
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Leon P Munting
- Massachusetts General Hospital, MassGeneral Institute for Neurodegenerative Disease, Charlestown, MA, USA
- Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Whitney M Freeze
- Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Department of Neuropsychology and Psychiatry, Maastricht University, Maastricht, The Netherlands
- Corinne A Auger
- Massachusetts General Hospital, MassGeneral Institute for Neurodegenerative Disease, Charlestown, MA, USA
- Ashley A Scherlek
- Massachusetts General Hospital, MassGeneral Institute for Neurodegenerative Disease, Charlestown, MA, USA
- Rush Alzheimer Disease Center, Rush University Medical Center, Chicago, IL, USA
- Andre J van der Kouwe
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Juan Eugenio Iglesias
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Centre for Medical Image Computing, University College London, London, UK
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Alessia Atzeni
- Centre for Medical Image Computing, University College London, London, UK
- Brian J Bacskai
- Massachusetts General Hospital, MassGeneral Institute for Neurodegenerative Disease, Charlestown, MA, USA
- Anand Viswanathan
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, J. Philip Kistler Stroke Research Center, Cambridge Str. 175, Suite 300, Boston, MA, 02114, USA
- Matthew P Frosch
- Massachusetts General Hospital, MassGeneral Institute for Neurodegenerative Disease, Charlestown, MA, USA
- Neuropathology Service, C.S. Kubik Laboratory for Neuropathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Steven M Greenberg
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, J. Philip Kistler Stroke Research Center, Cambridge Str. 175, Suite 300, Boston, MA, 02114, USA
- Susanne J van Veluw
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, J. Philip Kistler Stroke Research Center, Cambridge Str. 175, Suite 300, Boston, MA, 02114, USA
- Massachusetts General Hospital, MassGeneral Institute for Neurodegenerative Disease, Charlestown, MA, USA
- Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
82
Khandelwal P, Zimmerman CE, Xie L, Lee H, Song HK, Yushkevich PA, Vossough A, Bartlett SP, Wehrli FW. Automatic Segmentation of Bone Selective MR Images for Visualization and Craniometry of the Cranial Vault. Acad Radiol 2022; 29 Suppl 3:S98-S106. [PMID: 33903011 PMCID: PMC8536795 DOI: 10.1016/j.acra.2021.03.010] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 03/11/2021] [Accepted: 03/11/2021] [Indexed: 11/24/2022]
Abstract
RATIONALE AND OBJECTIVES Solid-state MRI has been shown to provide a radiation-free alternative imaging strategy to CT. However, manual image segmentation to produce bone-selective MR-based 3D renderings is time- and labor-intensive, thereby acting as a bottleneck in clinical practice. The objective of this study was to evaluate an automatic multi-atlas segmentation pipeline for use on cranial vault images, entirely circumventing prior manual intervention, and to assess concordance of craniometric measurements between pipeline-produced MRI- and CT-based 3D skull renderings. MATERIALS AND METHODS Dual-RF, dual-echo, 3D UTE pulse sequence MR data were obtained at 3T on 30 healthy subjects along with low-dose CT images between December 2018 and January 2020 for this prospective study. The four-point MRI datasets (two RF pulse widths and two echo times) were combined to produce bone-specific images. CT images were thresholded and manually corrected to segment the cranial vault. CT images were then rigidly registered to MRI using mutual information, and the corresponding cranial vault segmentations were transformed to MRI. These "ground truth" segmentations served as the reference for the MR images. Subsequently, an automated multi-atlas pipeline was used to segment the bone-selective images. To compare manually and automatically segmented MR images, the Dice similarity coefficient (DSC) and Hausdorff distance (HD) were computed, and craniometric measurements between CT- and automated-pipeline MRI-based segmentations were examined via Lin's concordance coefficient (LCC). RESULTS Automated segmentation removed the need for expert manual delineation. Average DSC was 90.86 ± 1.94%, and average 95th-percentile HD was 1.65 ± 0.44 mm between ground truth and automated segmentations. MR-based measurements differed from CT-based measurements by 0.73-1.2 mm on key craniometric measurements. LCC values for the three distances between CT- and MR-based landmarks were 0.906 (vertex-basion), 0.780 (left-right frontozygomatic suture), and 0.956 (glabella-opisthocranium). CONCLUSION Good agreement between CT- and automated MR-based 3D cranial vault renderings has been achieved, thereby eliminating the laborious manual segmentation process. Target applications comprise craniofacial surgery as well as imaging of traumatic injuries and masses involving both bone and soft tissue.
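The two agreement measures reported above, the Dice similarity coefficient and the 95th-percentile Hausdorff distance, can be computed directly from binary segmentation masks. A minimal pure-Python sketch on toy 2D point sets (helper names and brute-force distance search are illustrative, not the paper's implementation):

```python
from itertools import product

def dice(a, b):
    """Dice similarity coefficient between two sets of voxel coordinates."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance (Euclidean, brute force)."""
    def directed(src, dst):
        # distance from each point in src to its nearest neighbour in dst
        return [min(sum((p - q) ** 2 for p, q in zip(s, d)) ** 0.5 for d in dst)
                for s in src]
    dists = sorted(directed(a, b) + directed(b, a))
    return dists[int(0.95 * (len(dists) - 1))]

# two overlapping 10x10 "segmentations", shifted by one pixel in x
m1 = [(x, y) for x, y in product(range(10), range(10))]
m2 = [(x, y) for x, y in product(range(1, 11), range(10))]
print(round(dice(m1, m2), 3))  # → 0.9
print(hd95(m1, m2))            # → 1.0
```

For real image volumes a distance-transform-based implementation (e.g., SciPy's `directed_hausdorff` or SimpleITK's overlap filters) is preferable; the brute-force version here is only for clarity.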
Affiliation(s)
- Pulkit Khandelwal
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA
- Penn Image Computing and Science Laboratory, Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Carrie E. Zimmerman
- Division of Plastic and Reconstructive Surgery, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Long Xie
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Penn Image Computing and Science Laboratory, Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Hyunyeol Lee
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Laboratory for Structural, Physiologic and Functional Imaging, Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Hee Kwon Song
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Laboratory for Structural, Physiologic and Functional Imaging, Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Paul A. Yushkevich
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Penn Image Computing and Science Laboratory, Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Arastoo Vossough
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Scott P. Bartlett
- Division of Plastic and Reconstructive Surgery, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Department of Surgery, University of Pennsylvania, Philadelphia, PA, USA
- Felix W. Wehrli
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Laboratory for Structural, Physiologic and Functional Imaging, Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Corresponding Author: University of Pennsylvania, Department of Radiology, MRI Education Center, 1 Founders Building, 3400 Spruce Street, Philadelphia, PA 19104-4283
83
De Feo R, Hämäläinen E, Manninen E, Immonen R, Valverde JM, Ndode-Ekane XE, Gröhn O, Pitkänen A, Tohka J. Convolutional Neural Networks Enable Robust Automatic Segmentation of the Rat Hippocampus in MRI After Traumatic Brain Injury. Front Neurol 2022; 13:820267. [PMID: 35250823 PMCID: PMC8891699 DOI: 10.3389/fneur.2022.820267] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Accepted: 01/24/2022] [Indexed: 11/13/2022] Open
Abstract
Registration-based methods are commonly used in the automatic segmentation of magnetic resonance (MR) brain images. However, these methods are not robust to the presence of gross pathologies that can alter the brain anatomy and affect the alignment of the atlas image with the target image. In this work, we develop a robust algorithm, MU-Net-R, for automatic segmentation of the normal and injured rat hippocampus based on an ensemble of U-net-like Convolutional Neural Networks (CNNs). MU-Net-R was trained on manually segmented MR images of sham-operated rats and rats with traumatic brain injury (TBI) by lateral fluid percussion. The performance of MU-Net-R was quantitatively compared with methods based on single and multi-atlas registration using MR images from two large preclinical cohorts. Automatic segmentations using MU-Net-R and multi-atlas registration were of excellent quality, achieving cross-validated Dice scores above 0.90 despite the presence of brain lesions, atrophy, and ventricular enlargement. In contrast, the performance of single-atlas segmentation was unsatisfactory (cross-validated Dice scores below 0.85). Interestingly, the registration-based methods were better at segmenting the contralateral than the ipsilateral hippocampus, whereas MU-Net-R segmented the contralateral and ipsilateral hippocampus equally well. We assessed the progression of hippocampal damage after TBI by using our automatic segmentation tool. Our data show that the presence of TBI, time after TBI, and whether the hippocampus was ipsilateral or contralateral to the injury were the parameters that explained hippocampal volume.
Affiliation(s)
- Riccardo De Feo
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
- SAIMLAL Department (Human Anatomy, Histology, Forensic Medicine and Orthopedics), Sapienza Università di Roma, Rome, Italy
- Correspondence: Riccardo De Feo
- Elina Hämäläinen
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
- Eppu Manninen
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
- Riikka Immonen
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
- Juan Miguel Valverde
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
- Olli Gröhn
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
- Asla Pitkänen
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
- Jussi Tohka
- A. I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, Kuopio, Finland
84
Ding W, Li L, Zhuang X, Huang L. Cross-Modality Multi-Atlas Segmentation Using Deep Neural Networks. IEEE J Biomed Health Inform 2022; 26:3104-3115. [PMID: 35130178 DOI: 10.1109/jbhi.2022.3149114] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Multi-atlas segmentation (MAS) is a promising framework for medical image segmentation. Generally, MAS methods register multiple atlases, i.e., medical images with corresponding labels, to a target image, and the transformed atlas labels are combined to generate the target segmentation via label fusion schemes. Many conventional MAS methods employ atlases of the same modality as the target image. However, atlases of the same modality may be limited or even missing in many clinical applications. Moreover, conventional MAS methods suffer from the computational burden of the registration and label fusion procedures. In this work, we design a novel cross-modality MAS framework, which uses available atlases from one modality to segment a target image from another modality. To boost the computational efficiency of the framework, both image registration and label fusion are achieved by well-designed deep neural networks. For atlas-to-target image registration, we propose a bi-directional registration network (BiRegNet), which can efficiently align images from different modalities. For label fusion, we design a similarity estimation network (SimNet), which estimates the fusion weight of each atlas by measuring its similarity to the target image. SimNet learns multi-scale information for similarity estimation to improve the performance of label fusion. The proposed framework was evaluated on left ventricle and liver segmentation tasks using the MM-WHS and CHAOS datasets, respectively. Results show that the framework is effective for cross-modality MAS in both registration and label fusion. The code will be released publicly on https://github.com/NanYoMy/cmmas once the manuscript is accepted.
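The label fusion step described above, weighting each warped atlas label map by an estimated similarity to the target, can be illustrated with a toy weighted-voting sketch. The softmax weighting and all names here are illustrative assumptions, not SimNet itself:

```python
import math

def fuse_labels(atlas_labels, similarities):
    """Similarity-weighted label fusion: each warped atlas label map votes
    per pixel with a softmax weight derived from its similarity score."""
    m = max(similarities)
    exp = [math.exp(s - m) for s in similarities]
    total = sum(exp)
    weights = [e / total for e in exp]            # softmax over atlases
    fused = []
    for votes in zip(*atlas_labels):              # iterate over pixels
        score = {}
        for label, w in zip(votes, weights):
            score[label] = score.get(label, 0.0) + w
        fused.append(max(score, key=score.get))   # weighted majority vote
    return fused

# three warped atlas label maps for a 6-pixel target image
atlases = [[0, 1, 1, 0, 1, 0],
           [0, 1, 0, 0, 1, 1],
           [1, 1, 1, 0, 0, 0]]
sims = [0.9, 0.8, 0.2]  # hypothetical similarity estimates (higher = closer)
print(fuse_labels(atlases, sims))  # → [0, 1, 1, 0, 1, 0]
```

With equal similarities this reduces to plain majority voting; the point of a learned similarity estimate is to down-weight atlases that align poorly with the target.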
85
Triay Bagur A, Aljabar P, Ridgway GR, Brady M, Bulte DP. Pancreas MRI Segmentation Into Head, Body, and Tail Enables Regional Quantitative Analysis of Heterogeneous Disease. J Magn Reson Imaging 2022; 56:997-1008. [PMID: 35128748 DOI: 10.1002/jmri.28098] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Revised: 01/19/2022] [Accepted: 01/22/2022] [Indexed: 12/20/2022] Open
Abstract
BACKGROUND Quantitative imaging studies of the pancreas have often targeted the three main anatomical segments, head, body, and tail, using manual region-of-interest strategies to assess geographic heterogeneity. Existing automated analyses have implemented whole-organ segmentation, providing overall quantification but failing to address spatial heterogeneity. PURPOSE To develop and validate an automated method for pancreas segmentation into head, body, and tail subregions in abdominal MRI. STUDY TYPE Retrospective. SUBJECTS One hundred and fifty nominally healthy subjects from UK Biobank (100 subjects for method development and 50 subjects for validation), plus a separate set of 390 UK Biobank matched triples including type 2 diabetes mellitus (T2DM) subjects and matched nondiabetics. FIELD STRENGTH/SEQUENCE A 1.5 T, three-dimensional two-point Dixon sequence (for segmentation and volume assessment) and a two-dimensional axial multiecho gradient-recalled echo sequence. ASSESSMENT Pancreas segments were annotated by four raters on the validation cohort. Intrarater and interrater agreement were reported using the Dice similarity coefficient (DSC). A segmentation method based on template registration was developed and evaluated against the annotations. Results on regional pancreatic fat assessment are also presented, obtained by intersecting the three-dimensional parts segmentation with one available proton density fat fraction (PDFF) image. STATISTICAL TESTS Wilcoxon signed-rank test and Mann-Whitney U-test for comparisons; DSC and volume differences for evaluation. A P value < 0.05 was considered statistically significant. RESULTS Good intrarater (DSC mean, head: 0.982, body: 0.940, tail: 0.961) and interrater (DSC mean, head: 0.968, body: 0.905, tail: 0.943) agreement was observed. No significant differences (DSC, head: P = 0.4358, body: P = 0.0992, tail: P = 0.1080) were observed between the manual annotations and our method's segmentations (DSC mean, head: 0.965, body: 0.893, tail: 0.934). Pancreatic body PDFF differed significantly between T2DM subjects and nondiabetics matched by body mass index. DATA CONCLUSION The developed method's segmentations did not differ significantly from manual annotations. Application to type 2 diabetes subjects showed potential for assessing pancreatic disease heterogeneity. LEVEL OF EVIDENCE 4. TECHNICAL EFFICACY Stage 3.
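Lin's concordance correlation coefficient, used in a similar role in the craniometry study above (entry 82), penalizes both imprecision and systematic bias, unlike Pearson correlation. A small pure-Python sketch with made-up paired measurements (the data are illustrative, not from any study):

```python
def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements:
    2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2), biased (1/n) moments."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx2 = sum((a - mx) ** 2 for a in x) / n
    sy2 = sum((b - my) ** 2 for b in y) / n
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

ct = [130.2, 128.5, 131.0, 127.8, 129.4]  # e.g. CT-based distances (mm)
mr = [129.9, 128.9, 130.6, 128.1, 129.2]  # e.g. MR-based distances (mm)
print(round(lins_ccc(ct, mr), 3))  # → 0.947
```

Perfectly concordant data give exactly 1.0; a constant offset between the two raters lowers the coefficient even when the Pearson correlation stays at 1.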
Affiliation(s)
- Alexandre Triay Bagur
- Department of Engineering Science, University of Oxford, Oxford, UK
- Perspectum Ltd, Oxford, UK
- Daniel P Bulte
- Department of Engineering Science, University of Oxford, Oxford, UK
86
Malimban J, Lathouwers D, Qian H, Verhaegen F, Wiedemann J, Brandenburg S, Staring M. Deep learning-based segmentation of the thorax in mouse micro-CT scans. Sci Rep 2022; 12:1822. [PMID: 35110676 PMCID: PMC8810936 DOI: 10.1038/s41598-022-05868-7] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2021] [Accepted: 01/18/2022] [Indexed: 12/18/2022] Open
Abstract
For image-guided small animal irradiations, the whole workflow of imaging, organ contouring, irradiation planning, and delivery is typically performed in a single session requiring continuous administration of anaesthetic agents. Automating contouring leads to a faster workflow, which limits exposure to anaesthesia, thereby reducing its impact on experimental results and on animal wellbeing. Here, we trained the 2D and 3D U-Net architectures of no-new-Net (nnU-Net) for autocontouring of the thorax in mouse micro-CT images. We trained the models only on native CTs and evaluated their performance using an independent testing dataset (i.e., native CTs not included in training and validation). Unlike previous studies, we also tested model performance on an external dataset (i.e., contrast-enhanced CTs) to assess how well the models generalize to CTs entirely different from those they were trained on. We also assessed the interobserver variability using the generalized conformity index among three observers, providing a stronger human baseline for evaluating automated contours than previous studies. Lastly, we quantified the time savings compared to manual contouring. The results show that 3D models of nnU-Net achieve superior segmentation accuracy and are more robust to unseen data than 2D models. For all target organs, the mean surface distance (MSD) and the 95th-percentile Hausdorff distance (95p HD) of the best performing model for this task (nnU-Net 3d_fullres) are within 0.16 mm and 0.60 mm, respectively. These values are below the minimum required contouring accuracy of 1 mm for small animal irradiations and improve significantly upon the state-of-the-art 2D U-Net-based AIMOS method. Moreover, the conformity indices of the 3d_fullres model also compare favourably to the interobserver variability for all target organs, whereas the 2D models perform poorly in this regard. Importantly, the 3d_fullres model offers a 98% reduction in contouring time.
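The generalized conformity index used above to quantify interobserver variability is commonly defined (following Kouwenhoven et al.) as the sum of pairwise intersection volumes over the sum of pairwise union volumes across all observer pairs. A sketch on toy voxel sets (names and data are illustrative, not from the paper):

```python
from itertools import combinations

def generalized_ci(contours):
    """Generalized conformity index across observers: sum of pairwise
    intersections divided by sum of pairwise unions (voxel sets)."""
    inter = sum(len(a & b) for a, b in combinations(contours, 2))
    union = sum(len(a | b) for a, b in combinations(contours, 2))
    return inter / union

# three observers delineating the same organ on a flattened voxel grid
o1 = set(range(0, 100))
o2 = set(range(5, 105))
o3 = set(range(2, 102))
print(round(generalized_ci([o1, o2, o3]), 3))  # → 0.935
```

With exactly two observers this reduces to the ordinary Jaccard (conformity) index; the generalization lets a single number summarize any number of raters.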
Affiliation(s)
- Justin Malimban
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, 9700 RB, Groningen, The Netherlands.
- Danny Lathouwers
- Department of Radiation Science and Technology, Faculty of Applied Sciences, Delft University of Technology, 2629 JB, Delft, The Netherlands
- Haibin Qian
- Department of Medical Biology, Amsterdam University Medical Centers (Location AMC) and Cancer Center Amsterdam, 1105 AZ, Amsterdam, The Netherlands
- Frank Verhaegen
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, 6229 ER, Maastricht, The Netherlands
- Julia Wiedemann
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, 9700 RB, Groningen, The Netherlands
- Department of Biomedical Sciences of Cells and Systems-Section Molecular Cell Biology, University Medical Center Groningen, University of Groningen, 9700 RB, Groningen, The Netherlands
- Sytze Brandenburg
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, 9700 RB, Groningen, The Netherlands
- Marius Staring
- Department of Radiology, Leiden University Medical Center, 2333 ZA, Leiden, The Netherlands
87
Harrison K, Pullen H, Welsh C, Oktay O, Alvarez-Valle J, Jena R. Machine Learning for Auto-Segmentation in Radiotherapy Planning. Clin Oncol (R Coll Radiol) 2022; 34:74-88. [PMID: 34996682 DOI: 10.1016/j.clon.2021.12.003] [Citation(s) in RCA: 51] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Revised: 11/27/2021] [Accepted: 12/03/2021] [Indexed: 12/12/2022]
Abstract
Manual segmentation of target structures and organs at risk is a crucial step in the radiotherapy workflow. It has the disadvantages that it can require several hours of clinician time per patient and is prone to inter- and intra-observer variability. Automatic segmentation (auto-segmentation), using computer algorithms, seeks to address these issues. Advances in machine learning and computer vision have led to the development of methods for accurate and efficient auto-segmentation. This review surveys auto-segmentation techniques and applications in radiotherapy planning. It provides an overview of traditional approaches to auto-segmentation, including intensity analysis, shape modelling and atlas-based methods. The focus, though, is on uses of machine learning and deep learning, including convolutional neural networks. Finally, the future of machine-learning-driven auto-segmentation in clinical settings is considered, and the barriers that must be overcome for it to be widely accepted into routine practice are highlighted.
Affiliation(s)
- K Harrison
- Cavendish Laboratory, University of Cambridge, Cambridge, UK.
- H Pullen
- Cavendish Laboratory, University of Cambridge, Cambridge, UK
- C Welsh
- Department of Oncology, University of Cambridge, Cambridge, UK
- O Oktay
- Health Intelligence, Microsoft Research, Cambridge, UK
- R Jena
- Department of Oncology, University of Cambridge, Cambridge, UK
- Department of Oncology, Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
88
Atlas-ISTN: Joint Segmentation, Registration and Atlas Construction with Image-and-Spatial Transformer Networks. Med Image Anal 2022; 78:102383. [DOI: 10.1016/j.media.2022.102383] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Revised: 11/24/2021] [Accepted: 02/01/2022] [Indexed: 11/16/2022]
89
Casati M, Piffer S, Calusi S, Marrazzo L, Simontacchi G, Di Cataldo V, Greto D, Desideri I, Vernaleone M, Francolini G, Livi L, Pallotta S. Clinical validation of an automatic atlas‐based segmentation tool for male pelvis CT images. J Appl Clin Med Phys 2022; 23:e13507. [PMID: 35064746 PMCID: PMC8906199 DOI: 10.1002/acm2.13507] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2021] [Revised: 12/01/2021] [Accepted: 12/06/2021] [Indexed: 12/20/2022] Open
Abstract
Purpose This retrospective work aims to evaluate the possible impact on intra‐ and inter‐observer variability, contouring time, and contour accuracy of introducing a pelvis computed tomography (CT) auto‐segmentation tool in radiotherapy planning workflow. Methods Tests were carried out on five structures (bladder, rectum, pelvic lymph‐nodes, and femoral heads) of six previously treated subjects, enrolling five radiation oncologists (ROs) to manually re‐contour and edit auto‐contours generated with a male pelvis CT atlas created with the commercial software MIM MAESTRO. The ROs first delineated manual contours (M). Then they modified the auto‐contours, producing automatic‐modified (AM) contours. The procedure was repeated to evaluate intra‐observer variability, producing M1, M2, AM1, and AM2 contour sets (each comprising 5 structures × 6 test patients × 5 ROs = 150 contours), for a total of 600 contours. Potential time savings was evaluated by comparing contouring and editing times. Structure contours were compared to a reference standard by means of Dice similarity coefficient (DSC) and mean distance to agreement (MDA), to assess intra‐ and inter‐observer variability. To exclude any automation bias, ROs evaluated both M and AM sets as “clinically acceptable” or “to be corrected” in a blind test. Results Comparing AM to M sets, a significant reduction of both inter‐observer variability (p < 0.001) and contouring time (‐45% whole pelvis, p < 0.001) was obtained. Intra‐observer variability reduction was significant only for bladder and femoral heads (p < 0.001). The statistical test showed no significant bias. Conclusion Our atlas‐based workflow proved to be effective for clinical practice as it can improve contour reproducibility and generate time savings. Based on these findings, institutions are encouraged to implement their auto‐segmentation method.
Affiliation(s)
- Marta Casati
- Medical Physics Unit, Careggi University Hospital, Florence, Italy
- Stefano Piffer
- Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- National Institute of Nuclear Physics (INFN), Florence, Italy
- Silvia Calusi
- Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- National Institute of Nuclear Physics (INFN), Florence, Italy
- Livia Marrazzo
- Medical Physics Unit, Careggi University Hospital, Florence, Italy
- Daniela Greto
- Radiation Oncology Unit, Careggi University Hospital, Florence, Italy
- Isacco Desideri
- Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- Marco Vernaleone
- Radiation Oncology Unit, Careggi University Hospital, Florence, Italy
- Lorenzo Livi
- Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- Radiation Oncology Unit, Careggi University Hospital, Florence, Italy
- Stefania Pallotta
- Medical Physics Unit, Careggi University Hospital, Florence, Italy
- Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
90
Liu Y, Huo Y, Dewey B, Wei Y, Lyu I, Landman BA. Generalizing deep learning brain segmentation for skull removal and intracranial measurements. Magn Reson Imaging 2022; 88:44-52. [PMID: 34999162 DOI: 10.1016/j.mri.2022.01.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Revised: 12/28/2021] [Accepted: 01/04/2022] [Indexed: 10/19/2022]
Abstract
Total intracranial volume (TICV) and posterior fossa volume (PFV) are essential covariates for brain volumetric analyses with structural magnetic resonance imaging (MRI). Detailed whole brain segmentation provides a non-invasive way to measure brain regions. Furthermore, increasing amounts of neuroimaging data are distributed in a skull-stripped manner for privacy protection. Therefore, generalizing deep learning brain segmentation for skull removal and intracranial measurements is an appealing task. However, data availability is challenging due to a limited set of manually traced atlases with whole brain and TICV/PFV labels. In this paper, we employ U-Net tiles to achieve automatic TICV estimation and whole brain segmentation simultaneously on brains with and without the skull. To overcome the scarcity of manually traced whole brain volumes, a transfer learning method is introduced to estimate additional TICV and PFV labels during whole brain segmentation in T1-weighted MRI. Specifically, U-Net tiles are first pre-trained using large-scale BrainCOLOR atlases without TICV and PFV labels, which are created by multi-atlas segmentation. Then the pre-trained models are refined by training on the additional TICV and PFV labels using limited BrainCOLOR atlases. We also extend our method to handle skull-stripped brain MR images. Our method provides promising whole brain segmentation and volume estimation results for brains both with and without the skull, in terms of mean Dice similarity coefficient, mean surface distance, and absolute volume similarity. This method has been made available in open source (https://github.com/MASILab/SLANTbrainSeg_skullstripped).
Affiliation(s)
- Yue Liu
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China
- Electrical Engineering and Computer Science, Vanderbilt University, TN, USA
- Yuankai Huo
- Electrical Engineering and Computer Science, Vanderbilt University, TN, USA
- Blake Dewey
- Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA
- Ying Wei
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China
- Ilwoo Lyu
- Electrical Engineering and Computer Science, Vanderbilt University, TN, USA
- Department of Computer Science and Engineering, UNIST, Ulsan 44919, South Korea
- Bennett A Landman
- Electrical Engineering and Computer Science, Vanderbilt University, TN, USA
91
Ottom MA, Rahman HA, Dinov ID. Znet: Deep Learning Approach for 2D MRI Brain Tumor Segmentation. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE 2022; 10:1800508. [PMID: 35774412 PMCID: PMC9236306 DOI: 10.1109/jtehm.2022.3176737] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/06/2022] [Revised: 05/12/2022] [Accepted: 05/16/2022] [Indexed: 11/22/2022]
Abstract
Background: Detection and segmentation of brain tumors using MR images are challenging and valuable tasks in the medical field. Early diagnosis and localization of brain tumors can save lives and provide timely options for physicians to select efficient treatment plans. Deep learning approaches have attracted researchers in medical imaging due to their capacity, performance, and potential to assist in accurate diagnosis, prognosis, and medical treatment technologies. Methods and procedures: This paper presents a novel framework for segmenting 2D brain tumors in MR images using deep neural networks (DNN) and data augmentation strategies. The proposed approach (Znet) is based on the idea of skip connections, encoder-decoder architectures, and data amplification to propagate the intrinsic affinities of a relatively small number of expert-delineated tumors, e.g., hundreds of low-grade glioma (LGG) patients, to many thousands of synthetic cases. Results: Our experimental results showed high values of the mean Dice similarity coefficient (dice = 0.96 during model training and dice = 0.92 for the independent testing dataset). Other evaluation measures were also relatively high, e.g., pixel accuracy = 0.996, F1 score = 0.81, and Matthews correlation coefficient, MCC = 0.81. The results and visualization of the DNN-derived tumor masks in the testing dataset showcase the Znet model's capability to localize and auto-segment brain tumors in MR images. This approach can further be generalized to 3D brain volumes, other pathologies, and a wide range of image modalities. Conclusion: We can confirm the ability of deep learning methods and the proposed Znet framework to detect and segment tumors in MR images. Furthermore, pixel accuracy may not be a suitable evaluation measure for semantic segmentation in the case of class imbalance in MR image segmentation, because the dominant class in ground-truth images is the background. A high value of pixel accuracy can therefore be misleading in some computer vision applications, whereas alternative evaluation metrics, such as Dice and IoU (Intersection over Union), are more informative for semantic segmentation. Clinical impact: Artificial intelligence (AI) applications in medicine are advancing swiftly; however, few techniques are deployed in clinical practice. This research demonstrates a practical example of AI applications in medical imaging, which can be deployed as a tool for auto-segmentation of tumors in MR images.
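The class-imbalance point made above is easy to demonstrate: a model that predicts only background scores near-perfect pixel accuracy yet zero IoU on the tumour class. A toy sketch on flattened label lists (illustrative only):

```python
def pixel_accuracy(pred, truth):
    """Fraction of pixels whose predicted label matches the ground truth."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def iou(pred, truth, cls=1):
    """Intersection over Union for one class (here: tumour = 1)."""
    inter = sum(p == cls and t == cls for p, t in zip(pred, truth))
    union = sum(p == cls or t == cls for p, t in zip(pred, truth))
    return inter / union

# 10,000-pixel image with 1% tumour, and a model that predicts background everywhere
truth = [1] * 100 + [0] * 9900
pred_all_bg = [0] * 10000
print(pixel_accuracy(pred_all_bg, truth))  # → 0.99 -- looks excellent
print(iou(pred_all_bg, truth))             # → 0.0  -- reveals total failure
```

The 0.99 accuracy comes entirely from the background class, which is why overlap metrics such as Dice and IoU are preferred for imbalanced segmentation tasks.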
Affiliation(s)
- Hanif Abdul Rahman
- Departments of Health Behavior and Biological Sciences and Computational Medicine and Bioinformatics, Statistics Online Computational Resource, University of Michigan, Ann Arbor, MI, USA
- Ivo D. Dinov
- Departments of Health Behavior and Biological Sciences and Computational Medicine and Bioinformatics, Statistics Online Computational Resource, University of Michigan, Ann Arbor, MI, USA
92
Brown DA, McMahan CS, Shinohara RT, Linn KA. Bayesian Spatial Binary Regression for Label Fusion in Structural Neuroimaging. J Am Stat Assoc 2022; 117:547-560. [PMID: 36338275 PMCID: PMC9632253 DOI: 10.1080/01621459.2021.2014854] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Abstract
Alzheimer's disease is a neurodegenerative condition that accelerates cognitive decline relative to normal aging. It is of critical scientific importance to gain a better understanding of early disease mechanisms in the brain to facilitate effective, targeted therapies. The volume of the hippocampus is often used in diagnosis and monitoring of the disease. Measuring this volume via neuroimaging is difficult since each hippocampus must either be manually identified or automatically delineated, a task referred to as segmentation. Automatic hippocampal segmentation often involves mapping a previously manually segmented image to a new brain image and propagating the labels to obtain an estimate of where each hippocampus is located in the new image. A more recent approach to this problem is to propagate labels from multiple manually segmented atlases and combine the results using a process known as label fusion. To date, most label fusion algorithms employ voting procedures with voting weights assigned directly or estimated via optimization. We propose using a fully Bayesian spatial regression model for label fusion that facilitates direct incorporation of covariate information while making accessible the entire posterior distribution. Our results suggest that incorporating tissue classification (e.g., gray matter) into the label fusion procedure can greatly improve segmentation when relatively homogeneous, healthy brains are used as atlases for diseased brains. The fully Bayesian approach also produces meaningful uncertainty measures about hippocampal volumes, information which can be leveraged to detect significant, scientifically meaningful differences between healthy and diseased populations, improving the potential for early detection and tracking of the disease.
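For contrast with the proposed Bayesian model, the voting baseline the abstract describes — per-atlas weights applied to propagated labels — fits in a few lines (a minimal sketch; the function name and toy votes are ours, not from the paper):

```python
import numpy as np

def weighted_vote_fusion(atlas_labels, weights):
    """Fuse propagated binary atlas labels by weighted voting.

    atlas_labels: (n_atlases, ...) binary labels after registration to the target.
    weights: per-atlas voting weights (e.g. derived from image similarity).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    # Weighted fraction of foreground votes at each voxel; ties go to foreground.
    prob = np.tensordot(w, np.asarray(atlas_labels, dtype=float), axes=1)
    return prob >= 0.5

# Three registered atlases vote on a 4-voxel strip; the third atlas is
# trusted twice as much as the others.
votes = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [1, 1, 1, 0]])
fused = weighted_vote_fusion(votes, [1, 1, 2])
print(fused.astype(int))  # [1 1 1 0]
```

Unlike the Bayesian spatial model, this baseline yields only a point estimate, with no posterior uncertainty about the fused segmentation.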
Affiliation(s)
- D. Andrew Brown
- School of Mathematical and Statistical Sciences, Clemson University, Clemson, SC 29634, USA
- Christopher S. McMahan
- School of Mathematical and Statistical Sciences, Clemson University, Clemson, SC 29634, USA
- Russell T. Shinohara
- Penn Statistics in Imaging and Visualization Center, Department of Biostatistics, Epidemiology, and Informatics, and Center for Biomedical Image Computing and Analytics, Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Kristin A. Linn
- Penn Statistics in Imaging and Visualization Center, Department of Biostatistics, Epidemiology, and Informatics, and Center for Biomedical Image Computing and Analytics, Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
93
Wu G, Chen X, Shi Z, Zhang D, Hu Z, Mao Y, Wang Y, Yu J. Convolutional neural network with coarse-to-fine resolution fusion and residual learning structures for cross-modality image synthesis. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103199] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
94
Beekman C, van Beek S, Stam J, Sonke JJ, Remeijer P. Improving predictive CTV segmentation on CT and CBCT for cervical cancer by diffeomorphic registration of a prior. Med Phys 2021; 49:1701-1711. [PMID: 34964986 DOI: 10.1002/mp.15421] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Revised: 11/14/2021] [Accepted: 11/26/2021] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Automatic cervix-uterus segmentation of the clinical target volume (CTV) on CT and cone beam CT (CBCT) scans is challenged by the limited visibility and the non-anatomical definition of certain border regions. We study the potential performance gain of convolutional neural networks by regulating the segmentation predictions as diffeomorphic deformations of a segmentation prior. METHODS We introduce a 3D convolutional neural network (CNN) which segments the target scan by joint voxel-wise classification and the registration of a given prior. We compare this network to two other 3D baseline models: one treating segmentation as a classification problem (segmentation-only), the other as a registration problem (deformation-only). For reference, and to highlight the benefits of a 3D model, these models are also benchmarked against a 2D segmentation model. Network performances are reported for CT and CBCT segmentation of the cervix-uterus CTV. We train the networks on data of 84 patients. The prior is provided by the CTV segmentation of a planning CT. Repeat CT or CBCT scans constitute the target scans to be segmented. RESULTS All 3D models outperformed the 2D segmentation model. For CT segmentation, combining classification and registration in the proposed joint model proved beneficial, achieving a Dice score of 0.87 and a mean squared error (MSE) of the surface distance below 1.7 mm. No such synergy was observed for CBCT segmentation, for which the joint and the deformation-only model performed similarly, achieving a Dice score of about 0.80 and an MSE surface distance of 2.5 mm. However, the segmentation-only model performed notably worse in this low contrast regime. Visual inspection revealed that this performance drop translated into geometric inconsistencies between the prior and target segmentation. Such inconsistencies were not observed for the deformation-based models.
CONCLUSION Constraining the solution space of admissible segmentation predictions to those reachable by a diffeomorphic deformation of the prior proved beneficial as it improved geometric consistency. Especially for CBCT, with its poor soft tissue contrast, this type of regularization becomes important as shown by quantitative and qualitative evaluation.
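The core idea — every admissible prediction is a warp of the planning-CT prior — can be sketched with a plain dense displacement-field warp (an illustrative 2D, nearest-neighbour sketch only; a true diffeomorphic implementation would integrate a smooth velocity field, which we omit here):

```python
import numpy as np

def warp_prior(prior, disp):
    """Warp a binary prior mask by a dense displacement field.

    Pull-back warp: the warped mask at (y, x) samples the prior at
    (y + dy, x + dx). disp has shape (2, H, W) holding (dy, dx) per pixel.
    """
    h, w = prior.shape
    yy, xx = np.mgrid[0:h, 0:w]
    ys = np.clip(np.round(yy + disp[0]).astype(int), 0, h - 1)
    xs = np.clip(np.round(xx + disp[1]).astype(int), 0, w - 1)
    return prior[ys, xs]

# Prior: a 3x3 square; a constant displacement of +2 rows shifts it up by 2.
prior = np.zeros((10, 10), dtype=bool)
prior[4:7, 4:7] = True
disp = np.zeros((2, 10, 10))
disp[0] = 2.0
warped = warp_prior(prior, disp)
```

Because the output can only be a rearrangement of the prior's foreground, the warped mask keeps the prior's topology, which is the geometric consistency the deformation-based models preserved.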
Affiliation(s)
- Chris Beekman
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Suzanne van Beek
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Jikke Stam
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Jan-Jakob Sonke
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Peter Remeijer
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
95
Jiang D, Wang Y, Zhou F, Ma H, Zhang W, Fang W, Zhao P, Tong Z. Residual refinement for interactive skin lesion segmentation. J Biomed Semantics 2021; 12:22. [PMID: 34922629 PMCID: PMC8684232 DOI: 10.1186/s13326-021-00255-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2020] [Accepted: 11/11/2021] [Indexed: 11/29/2022] Open
Abstract
BACKGROUND Image segmentation is a difficult and classic problem. It has a wide range of applications, one of which is skin lesion segmentation. Numerous researchers have made great efforts to tackle the problem, yet there is still no universal method across application domains. RESULTS We propose a novel approach that combines a deep convolutional neural network with a GrabCut-like user interaction to tackle the interactive skin lesion segmentation problem. Slightly deviating from the GrabCut interaction, our method uses boxes and clicks. In addition, contrary to existing interactive segmentation algorithms that combine the initial segmentation task with the following refinement task, we explicitly separate these tasks by designing individual sub-networks. One network is SBox-Net, and the other is Click-Net. SBox-Net is a full-fledged segmentation network that is built upon a pre-trained, state-of-the-art segmentation model, while Click-Net is a simple yet powerful network that combines feature maps extracted from SBox-Net and user clicks to residually refine the mistakes made by SBox-Net. Extensive experiments on two public datasets, PH2 and ISIC, confirm the effectiveness of our approach. CONCLUSIONS We present an interactive two-stage pipeline method for skin lesion segmentation, which was demonstrated to be effective in comprehensive experiments.
Affiliation(s)
- Dalei Jiang
- Echocardiography and Vascular Ultrasound Center, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Yin Wang
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Feng Zhou
- Department of Urology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Hongtao Ma
- College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Wenting Zhang
- Echocardiography and Vascular Ultrasound Center, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Weijia Fang
- Department of Medical Oncology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Peng Zhao
- Department of Medical Oncology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Zhou Tong
- Department of Medical Oncology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
96
Multi-COVID-Net: Multi-objective optimized network for COVID-19 diagnosis from chest X-ray images. Appl Soft Comput 2021; 115:108250. [PMID: 34903956 PMCID: PMC8656152 DOI: 10.1016/j.asoc.2021.108250] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2021] [Revised: 10/12/2021] [Accepted: 11/23/2021] [Indexed: 12/24/2022]
Abstract
Coronavirus Disease 2019 (COVID-19) has spread worldwide, and healthcare services have become limited in many countries. Efficient screening of hospitalized individuals is vital in the struggle against COVID-19, and chest radiography is one of the important assessment strategies. It allows researchers to interpret medical information in chest X-ray (CXR) images and evaluate relevant irregularities, which may enable fully automated identification of the disease. Due to the rapid growth of cases every day, relatively few COVID-19 testing kits are readily accessible in healthcare facilities. It is therefore imperative to define a fully automated detection method as an immediate alternative screening option to limit the spread of COVID-19 among individuals. In this paper, a two-step deep learning (DL) architecture is proposed for COVID-19 diagnosis using CXR images. The proposed DL architecture consists of two stages, feature extraction and classification. The Multi-Objective Grasshopper Optimization Algorithm (MOGOA) is presented to optimize the DL network layers; hence, the networks are named "Multi-COVID-Net". This model automatically classifies non-COVID-19, COVID-19, and pneumonia patient images. Multi-COVID-Net was tested on publicly available datasets and provides better performance than other state-of-the-art methods.
97
Aletti G, Benfenati A, Naldi G. A Semi-Supervised Reduced-Space Method for Hyperspectral Imaging Segmentation. J Imaging 2021; 7:jimaging7120267. [PMID: 34940734 PMCID: PMC8706750 DOI: 10.3390/jimaging7120267] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2021] [Revised: 11/26/2021] [Accepted: 12/01/2021] [Indexed: 11/19/2022] Open
Abstract
The development of hyperspectral remote sensing technology allows the acquisition of images with very detailed spectral information for each pixel. Because of this, hyperspectral images (HSI) potentially offer great capabilities for solving many scientific and practical problems in agricultural, biomedical, ecological, geological, and hydrological studies. However, their analysis requires developing specialized and fast algorithms for data processing due to the high dimensionality of the data. In this work, we propose a new semi-supervised method for multilabel segmentation of HSI that combines a suitable linear discriminant analysis, a similarity index to compare different spectra, and a random-walk-based model with a direct label assignment. The user-marked regions are used to project the original high-dimensional feature space to a lower-dimensional space such that class separation is maximized. This automatically retains the most informative features, lightening the subsequent computational burden. The random-walk part is related to a combinatorial Dirichlet problem involving a weighted graph, where the nodes are the projected pixels of the original HSI and the positive weights depend on the distances between these nodes. We then assign to each pixel of the original image a probability quantifying the likelihood that the pixel (node) belongs to some subregion. The computation of the spectral distance involves the feature-space coordinates of a pixel and of its neighbors. The final segmentation process therefore reduces to a suitable optimization problem coupling the probabilities from the random-walker computation and the similarity with respect to the initially labeled pixels. We discuss the properties of the new method with experimental results carried out on benchmark images.
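The combinatorial Dirichlet problem at the heart of the random-walker step can be sketched on a toy graph (a 1D chain for brevity, where the paper works on an image graph; the Gaussian weight function and `beta` are standard random-walker choices, not necessarily the paper's exact similarity index):

```python
import numpy as np

def random_walk_probabilities(features, seeds, beta=10.0):
    """Solve the combinatorial Dirichlet problem on a 1D chain of pixels.

    features: (n, d) projected spectra (e.g. after the LDA-style reduction).
    seeds: dict {pixel_index: label in {0, 1}} of user-marked pixels.
    Returns P(pixel belongs to label 1) for every pixel.
    """
    n = len(features)
    # Edge weights between chain neighbours: large when spectra are similar.
    diffs = np.sum((features[1:] - features[:-1]) ** 2, axis=1)
    w = np.exp(-beta * diffs)

    # Graph Laplacian of the weighted chain.
    L = np.zeros((n, n))
    for i, wi in enumerate(w):
        L[i, i] += wi
        L[i + 1, i + 1] += wi
        L[i, i + 1] -= wi
        L[i + 1, i] -= wi

    marked = sorted(seeds)
    free = [i for i in range(n) if i not in seeds]
    x_m = np.array([float(seeds[i]) for i in marked])
    # Dirichlet problem: the solution is harmonic on the unmarked nodes,
    # i.e. L_uu x_u = -L_um x_m.
    x_u = np.linalg.solve(L[np.ix_(free, free)], -L[np.ix_(free, marked)] @ x_m)

    prob = np.empty(n)
    prob[marked] = x_m
    prob[free] = x_u
    return prob

# Five projected spectra; the user marks pixel 0 as class 0 and pixel 4 as
# class 1. The large spectral jump between pixels 2 and 3 becomes the boundary.
feats = np.array([[0.0], [0.1], [0.2], [2.0], [2.1]])
prob = random_walk_probabilities(feats, {0: 0, 4: 1})
```

The weak edge across the spectral jump acts as a near-open circuit, so the harmonic solution assigns pixels 1-2 to the background seed and pixel 3 to the foreground seed.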
98
Li J, Udupa JK, Odhner D, Tong Y, Torigian DA. SOMA: Subject-, object-, and modality-adapted precision atlas approach for automatic anatomy recognition and delineation in medical images. Med Phys 2021; 48:7806-7825. [PMID: 34668207 PMCID: PMC8678400 DOI: 10.1002/mp.15308] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Revised: 09/12/2021] [Accepted: 09/29/2021] [Indexed: 11/06/2022] Open
Abstract
PURPOSE In the multi-atlas segmentation (MAS) method, an atlas set large enough to cover the complete spectrum of the whole population pattern of the target object will benefit segmentation quality. However, the difficulty of obtaining and generating such a large set of atlases and the computational burden required in the segmentation procedure make this approach impractical. In this paper, we propose a method called SOMA to select subject-, object-, and modality-adapted precision atlases for automatic anatomy recognition in medical images with pathology, following the idea that different regions of the target object in a novel image can be recognized by different atlases with regionally best similarity, so that effective atlases need not be globally similar to the target subject, nor overall similar to the target object. METHODS The SOMA method consists of three main components: atlas building, object recognition, and object delineation. Considering the computational complexity, we utilize an all-to-template strategy to align all images to the same image space, belonging to the root image determined by the minimum spanning tree (MST) strategy among a subset of radiologically near-normal images. The object recognition process is composed of two stages: rough recognition and refined recognition. In rough recognition, subimage matching is conducted between the test image and each image of the whole atlas set, and only the atlas corresponding to the best-matched subimage contributes to the recognition map regionally. The frequency of best match for each atlas is recorded by a counter, and the atlases with the highest frequencies are selected as the precision atlases. In refined recognition, only the precision atlases are examined, and the subimage matching is conducted in a nonlocal manner of searching to further increase the accuracy of boundary matching.
Delineation is based on a U-net-based deep learning network, where the original gray scale image together with the fuzzy map from refined recognition compose a two-channel input to the network, and the output is a segmentation map of the target object. RESULTS Experiments are conducted on computed tomography (CT) images with different qualities in two body regions - head and neck (H&N) and thorax, from 298 subjects with nine objects and 241 subjects with six objects, respectively. Most objects achieve a localization error within two voxels after refined recognition, with marked improvement in localization accuracy from rough to refined recognition of 0.6-3 mm in H&N and 0.8-4.9 mm in thorax, and also in delineation accuracy (Dice coefficient) from refined recognition to delineation of 0.01-0.11 in H&N and 0.01-0.18 in thorax. CONCLUSIONS The SOMA method shows high accuracy and robustness in anatomy recognition and delineation. The improvements from rough to refined recognition and further to delineation, as well as immunity of recognition accuracy to varying image and object qualities, demonstrate the core principles of SOMA where segmentation accuracy increases with precision atlases and gradually refined object matching.
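The rough-recognition bookkeeping — a counter recording how often each atlas wins the regional best match — reduces to a few lines (an illustrative sketch using sum-of-squared-differences as the match score; the actual SOMA matching criterion and patch layout differ):

```python
import numpy as np

def select_precision_atlases(test_patches, atlas_patches, k=2):
    """Count, per atlas, how often it gives the regional best match, then
    keep the k most frequent winners as the precision atlases.

    test_patches: (n_patches, p) flattened subimages of the test image.
    atlas_patches: (n_atlases, n_patches, p) corresponding atlas subimages.
    """
    counts = np.zeros(atlas_patches.shape[0], dtype=int)
    for j in range(test_patches.shape[0]):
        # Regional best match: smallest sum-of-squared-differences wins.
        ssd = np.sum((atlas_patches[:, j] - test_patches[j]) ** 2, axis=1)
        counts[np.argmin(ssd)] += 1
    return np.argsort(counts)[::-1][:k], counts

# Toy example: atlas 2 matches three of the four regions best, atlas 0 one.
test_p = np.array([[0., 0.], [1., 1.], [2., 2.], [3., 3.]])
atlas_p = np.array([[[0., 0.], [9., 9.], [9., 9.], [9., 9.]],
                    [[5., 5.], [5., 5.], [5., 5.], [5., 5.]],
                    [[9., 9.], [1., 1.], [2., 2.], [3., 3.]]])
top, counts = select_precision_atlases(test_p, atlas_p)  # counts = [1, 0, 3]
```

Note that an atlas wins per region, not globally: atlas 1 is a mediocre all-round match everywhere yet never wins, which is exactly why SOMA does not require overall similarity.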
Affiliation(s)
- Jieyu Li
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai, China
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Jayaram K. Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dewey Odhner
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Drew A. Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
99
Pla-Alemany S, Romero JA, Santabarbara JM, Aliaga R, Maceira AM, Moratal D. Automatic Multi-Atlas Liver Segmentation and Couinaud Classification from CT Volumes. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:2826-2829. [PMID: 34891836 DOI: 10.1109/embc46164.2021.9630668] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Primary Liver Cancer (PLC) is the sixth most common cancer worldwide, and its occurrence predominates in patients with chronic liver diseases and other risk factors such as hepatitis B and C. Treatment of PLC and malignant liver tumors depends both on tumor characteristics and on the functional status of the organ, and thus must be individualized for each patient. Liver segmentation and classification according to Couinaud's scheme is essential for computer-aided diagnosis and treatment planning; however, manual segmentation of the liver volume slice by slice can be a time-consuming and challenging task that is highly dependent on the experience of the user. We propose an alternative automatic segmentation method that improves accuracy and reduces time consumption. The procedure uses a multi-atlas-based classification for Couinaud segmentation. Our algorithm was applied to 20 subjects from the IRCAD 3D database in order to segment and classify the liver volume into its Couinaud segments, obtaining an average Dice coefficient of 0.94. Clinical Relevance: The purpose of this work is to provide automatic multi-atlas liver segmentation and Couinaud classification by means of CT image analysis.
100
Kalantar R, Lin G, Winfield JM, Messiou C, Lalondrelle S, Blackledge MD, Koh DM. Automatic Segmentation of Pelvic Cancers Using Deep Learning: State-of-the-Art Approaches and Challenges. Diagnostics (Basel) 2021; 11:1964. [PMID: 34829310 PMCID: PMC8625809 DOI: 10.3390/diagnostics11111964] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Revised: 10/14/2021] [Accepted: 10/19/2021] [Indexed: 12/18/2022] Open
Abstract
The recent rise of deep learning (DL) and its promising capabilities in capturing non-explicit detail from large datasets have attracted substantial research attention in the field of medical image processing. DL provides grounds for technological development of computer-aided diagnosis and segmentation in radiology and radiation oncology. Amongst the anatomical locations where recent auto-segmentation algorithms have been employed, the pelvis remains one of the most challenging due to large intra- and inter-patient soft-tissue variabilities. This review provides a comprehensive, non-systematic and clinically-oriented overview of 74 DL-based segmentation studies, published between January 2016 and December 2020, for bladder, prostate, cervical and rectal cancers on computed tomography (CT) and magnetic resonance imaging (MRI), highlighting the key findings, challenges and limitations.
Affiliation(s)
- Reza Kalantar
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Gigin Lin
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Chang Gung University, 5 Fuhsing St., Guishan, Taoyuan 333, Taiwan
- Jessica M. Winfield
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Christina Messiou
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Susan Lalondrelle
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Matthew D. Blackledge
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Dow-Mu Koh
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK