1
Zhong T, Wang Y, Xu X, Wu X, Liang S, Ning Z, Wang L, Niu Y, Li G, Zhang Y. A brain subcortical segmentation tool based on anatomy attentional fusion network for developing macaques. Comput Med Imaging Graph 2024; 116:102404. [PMID: 38870599] [DOI: 10.1016/j.compmedimag.2024.102404]
Abstract
Magnetic Resonance Imaging (MRI) plays a pivotal role in the accurate measurement of brain subcortical structures in macaques, which is crucial for unraveling the complexities of brain structure and function, thereby enhancing our understanding of neurodegenerative diseases and brain development. However, due to significant differences in brain size, structure, and imaging characteristics between humans and macaques, computational tools developed for human neuroimaging studies often encounter obstacles when applied to macaques. In this context, we propose an Anatomy Attentional Fusion Network (AAF-Net), which integrates multimodal MRI data with anatomical constraints in a multi-scale framework to address the challenges posed by the dynamic development, regional heterogeneity, and age-related size variations of the juvenile macaque brain, thus achieving precise subcortical segmentation. Specifically, we generate a Signed Distance Map (SDM) based on the initial rough segmentation of the subcortical region by a network as an anatomical constraint, providing comprehensive information on positions, structures, and morphology. Then we construct AAF-Net to fully fuse the SDM anatomical constraints and multimodal images for refined segmentation. To thoroughly evaluate the performance of our proposed tool, over 700 macaque MRIs from 19 datasets were used in this study. Specifically, we employed two manually labeled longitudinal macaque datasets to develop the tool and complete four-fold cross-validations. Furthermore, we incorporated various external datasets to demonstrate the proposed tool's generalization capabilities and promise in brain development research. We have made this tool available as an open-source resource at https://github.com/TaoZhong11/Macaque_subcortical_segmentation for direct application.
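The Signed Distance Map used as an anatomical constraint above can be illustrated with a short sketch. This is a generic SDM computation from a binary segmentation (the sign convention — negative inside the structure, positive outside — is an assumption for illustration, not taken from the paper):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask):
    """Signed Euclidean distance map from a binary segmentation.

    Assumed convention: negative inside the structure, zero at the
    boundary, positive outside. Works for 2D or 3D arrays.
    """
    mask = mask.astype(bool)
    outside = distance_transform_edt(~mask)  # distance to the structure, measured outside it
    inside = distance_transform_edt(mask)    # distance to the background, measured inside it
    return outside - inside
```

In the paper's pipeline, such a map would be derived from the initial rough segmentation and fed to the refinement network alongside the multimodal images.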
Affiliation(s)
- Tao Zhong
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing and Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, China
- Ya Wang
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, USA
- Xiaotong Xu
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing and Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, China
- Xueyang Wu
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing and Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, China
- Shujun Liang
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing and Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, China
- Zhenyuan Ning
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing and Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, China
- Li Wang
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, USA
- Yuyu Niu
- Yunnan Key Laboratory of Primate Biomedical Research, Institute of Primate Translational Medicine, Kunming University of Science and Technology, China
- Gang Li
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, USA
- Yu Zhang
- School of Biomedical Engineering, Guangdong Provincial Key Laboratory of Medical Image Processing and Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, China
2
Li Y, Xie L, Khandelwal P, Wisse LEM, Brown CA, Prabhakaran K, Dylan Tisdall M, Mechanic-Hamilton D, Detre JA, Das SR, Wolk DA, Yushkevich PA. Automatic segmentation of medial temporal lobe subregions in multi-scanner, multi-modality MRI of variable quality. bioRxiv [Preprint] 2024:2024.05.21.595190. [PMID: 38826413] [PMCID: PMC11142184] [DOI: 10.1101/2024.05.21.595190]
Abstract
Background: Volumetry of subregions in the medial temporal lobe (MTL) computed from automatic segmentation in MRI can track neurodegeneration in Alzheimer's disease. However, image quality varies in MRI, and poor-quality MR images can lead to unreliable segmentation of MTL subregions. Considering that different MRI contrast mechanisms and field strengths (jointly referred to as "modalities" here) offer distinct advantages in imaging different parts of the MTL, we developed a multi-modality segmentation model using both 7 tesla (7T) and 3 tesla (3T) structural MRI to obtain robust segmentation in poor-quality images. Method: MRI modalities including 3T T1-weighted, 3T T2-weighted, 7T T1-weighted and 7T T2-weighted (7T-T2w) scans of 197 participants were collected from a longitudinal aging study at the Penn Alzheimer's Disease Research Center. Among them, 7T-T2w was used as the primary modality, and all other modalities were rigidly registered to the 7T-T2w. A model derived from nnU-Net took these registered modalities as input and output subregion segmentations in 7T-T2w space. 7T-T2w images from 25 selected training participants, most of which had high quality, were manually segmented to train the multi-modality model. Modality augmentation, which randomly replaced certain modalities with Gaussian noise, was applied during training to guide the model to extract information from all modalities. To compare our proposed model with a baseline single-modality model in the full dataset with mixed high/poor image quality, we evaluated the ability of derived volume/thickness measures to discriminate amyloid-positive mild cognitive impairment (A+MCI) and amyloid-negative cognitively unimpaired (A-CU) groups, as well as the stability of these measurements in longitudinal data. Results: The multi-modality model delivered good performance regardless of 7T-T2w quality, whereas the single-modality model under-segmented subregions in poor-quality images. The multi-modality model generally demonstrated stronger discrimination of A+MCI versus A-CU. Intra-class correlation and Bland-Altman plots showed that the multi-modality model had higher longitudinal segmentation consistency in all subregions, whereas the single-modality model had low consistency in poor-quality images. Conclusion: The multi-modality MRI segmentation model provides an improved biomarker for neurodegeneration in the MTL that is robust to image quality. It also provides a framework for other studies that may benefit from multimodal imaging.
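The modality-augmentation step described above — randomly replacing whole input modalities with Gaussian noise so the network cannot rely on any single one — can be sketched as follows. The drop probability and the safeguard of never replacing the primary modality are illustrative assumptions, not values reported by the authors:

```python
import numpy as np

def modality_augmentation(modalities, primary_idx=0, drop_prob=0.5, rng=None):
    """Randomly replace whole modality volumes with Gaussian noise.

    `modalities` is a list of co-registered arrays (one per modality).
    The primary modality (assumed here to be 7T-T2w at `primary_idx`)
    is never replaced; `drop_prob` is an illustrative value.
    """
    rng = np.random.default_rng() if rng is None else rng
    out = []
    for i, vol in enumerate(modalities):
        if i != primary_idx and rng.random() < drop_prob:
            # Replace the entire modality with standard Gaussian noise
            out.append(rng.standard_normal(vol.shape).astype(vol.dtype))
        else:
            out.append(vol.copy())
    return out
```

Applied on the fly during training, this forces the model to extract complementary information from whichever modalities survive each iteration.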
Affiliation(s)
- Yue Li
- Penn Image Computing and Science Laboratory, University of Pennsylvania, Philadelphia, USA
- Department of Radiology, University of Pennsylvania, Philadelphia, USA
- Long Xie
- Department of Digital Technology and Innovation, Siemens Healthineers, Princeton, USA
- Pulkit Khandelwal
- Penn Image Computing and Science Laboratory, University of Pennsylvania, Philadelphia, USA
- Laura E M Wisse
- Department of Diagnostic Radiology, Lund University, Lund, Sweden
- M Dylan Tisdall
- Department of Radiology, University of Pennsylvania, Philadelphia, USA
- Dawn Mechanic-Hamilton
- Department of Neurology, University of Pennsylvania, Philadelphia, USA
- Penn Memory Center, University of Pennsylvania, Philadelphia, USA
- John A Detre
- Department of Neurology, University of Pennsylvania, Philadelphia, USA
- Sandhitsu R Das
- Penn Image Computing and Science Laboratory, University of Pennsylvania, Philadelphia, USA
- Department of Neurology, University of Pennsylvania, Philadelphia, USA
- David A Wolk
- Department of Neurology, University of Pennsylvania, Philadelphia, USA
- Penn Memory Center, University of Pennsylvania, Philadelphia, USA
- Paul A Yushkevich
- Penn Image Computing and Science Laboratory, University of Pennsylvania, Philadelphia, USA
- Department of Radiology, University of Pennsylvania, Philadelphia, USA
3
Finnegan RN, Quinn A, Booth J, Belous G, Hardcastle N, Stewart M, Griffiths B, Carroll S, Thwaites DI. Cardiac substructure delineation in radiation therapy - A state-of-the-art review. J Med Imaging Radiat Oncol 2024. [PMID: 38757728] [DOI: 10.1111/1754-9485.13668]
Abstract
Delineation of cardiac substructures is crucial for a better understanding of radiation-related cardiotoxicities and to facilitate accurate and precise cardiac dose calculation for developing and applying risk models. This review examines recent advancements in cardiac substructure delineation in the radiation therapy (RT) context, aiming to provide a comprehensive overview of the current level of knowledge, challenges and future directions in this evolving field. Imaging used for RT planning presents challenges in reliably visualising cardiac anatomy. Although cardiac atlases and contouring guidelines aid in standardisation and reduction of variability, significant uncertainties remain in defining cardiac anatomy. Coupled with the inherent complexity of the heart, this necessitates auto-contouring for consistent large-scale data analysis and improved efficiency in prospective applications. Auto-contouring models, developed primarily for breast and lung cancer RT, have demonstrated performance comparable to manual contouring, marking a significant milestone in the evolution of cardiac delineation practices. Nevertheless, several key concerns require further investigation. There is an unmet need for expanding cardiac auto-contouring models to encompass a broader range of cancer sites. A shift in focus is needed from ensuring accuracy to enhancing the robustness and accessibility of auto-contouring models. Addressing these challenges is paramount for the integration of cardiac substructure delineation and associated risk models into routine clinical practice, thereby improving the safety of RT for future cancer patients.
Affiliation(s)
- Robert N Finnegan
- Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales, Australia
- Institute of Medical Physics, School of Physics, University of Sydney, Sydney, New South Wales, Australia
- Alexandra Quinn
- Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales, Australia
- Jeremy Booth
- Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales, Australia
- Institute of Medical Physics, School of Physics, University of Sydney, Sydney, New South Wales, Australia
- Gregg Belous
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Brisbane, Queensland, Australia
- Nicholas Hardcastle
- Department of Physical Sciences, Peter MacCallum Cancer Centre, Melbourne, Victoria, Australia
- Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Victoria, Australia
- Maegan Stewart
- Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales, Australia
- School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Sydney, New South Wales, Australia
- Brooke Griffiths
- Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales, Australia
- Susan Carroll
- Northern Sydney Cancer Centre, Royal North Shore Hospital, Sydney, New South Wales, Australia
- School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Sydney, New South Wales, Australia
- David I Thwaites
- Institute of Medical Physics, School of Physics, University of Sydney, Sydney, New South Wales, Australia
- Radiotherapy Research Group, Leeds Institute of Medical Research, St James's Hospital and University of Leeds, Leeds, UK
4
Bellos T, Manolitsis I, Katsimperis S, Juliebø-Jones P, Feretzakis G, Mitsogiannis I, Varkarakis I, Somani BK, Tzelves L. Artificial Intelligence in Urologic Robotic Oncologic Surgery: A Narrative Review. Cancers (Basel) 2024; 16:1775. [PMID: 38730727] [PMCID: PMC11083167] [DOI: 10.3390/cancers16091775]
Abstract
With the rapid increase in computer processing capacity over the past two decades, machine learning techniques have been applied in many sectors of daily life, and machine learning in therapeutic settings is also gaining popularity. We analysed current studies on machine learning in robotic urologic surgery. We searched PubMed/Medline and Google Scholar up to December 2023, using the search terms "urologic surgery", "artificial intelligence", "machine learning", "neural network", "automation", and "robotic surgery". Automated preoperative imaging, intraoperative anatomy matching, and bleeding prediction have been major foci. Early artificial intelligence (AI) therapeutic outcomes are promising. Robot-assisted surgery provides precise telemetry data and a cutting-edge viewing console with which to analyse and improve AI integration in surgery. Machine learning enhances surgical skill feedback, procedure effectiveness, surgical guidance, and postoperative prediction. Tension sensors on robotic arms and augmented reality can further improve surgery by providing real-time organ motion monitoring, improving precision and accuracy. As datasets grow and electronic health records become more widely used, these technologies will become more effective and useful. AI in robotic surgery is intended to improve surgical training and experience; both pursue the precision needed to improve surgical care. AI in "master-slave" robotic surgery enables detailed, step-by-step examination of autonomous robotic treatments.
Affiliation(s)
- Themistoklis Bellos
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece
- Ioannis Manolitsis
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece
- Stamatios Katsimperis
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece
- Georgios Feretzakis
- School of Science and Technology, Hellenic Open University, 26335 Patras, Greece
- Iraklis Mitsogiannis
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece
- Ioannis Varkarakis
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece
- Bhaskar K. Somani
- Department of Urology, University of Southampton, Southampton SO16 6YD, UK
- Lazaros Tzelves
- 2nd Department of Urology, Sismanoglio General Hospital of Athens, 15126 Athens, Greece
5
Nigro S, Filardi M, Tafuri B, Nicolardi M, De Blasi R, Giugno A, Gnoni V, Milella G, Urso D, Zoccolella S, Logroscino G. Deep Learning-based Approach for Brainstem and Ventricular MR Planimetry: Application in Patients with Progressive Supranuclear Palsy. Radiol Artif Intell 2024; 6:e230151. [PMID: 38506619] [PMCID: PMC11140505] [DOI: 10.1148/ryai.230151]
Abstract
Purpose To develop a fast and fully automated deep learning (DL)-based method for the MRI planimetric segmentation and measurement of the brainstem and ventricular structures most affected in patients with progressive supranuclear palsy (PSP). Materials and Methods In this retrospective study, T1-weighted MR images in healthy controls (n = 84) were used to train DL models for segmenting the midbrain, pons, middle cerebellar peduncle (MCP), superior cerebellar peduncle (SCP), third ventricle, and frontal horns (FHs). Internal, external, and clinical test datasets (n = 305) were used to assess segmentation model reliability. DL masks from test datasets were used to automatically extract midbrain and pons areas and the width of MCP, SCP, third ventricle, and FHs. Automated measurements were compared with those manually performed by an expert radiologist. Finally, these measures were combined to calculate the midbrain to pons area ratio, MR parkinsonism index (MRPI), and MRPI 2.0, which were used to differentiate patients with PSP (n = 71) from those with Parkinson disease (PD) (n = 129). Results Dice coefficients above 0.85 were found for all brain regions when comparing manual and DL-based segmentations. A strong correlation was observed between automated and manual measurements (Spearman ρ > 0.80, P < .001). DL-based measurements showed excellent performance in differentiating patients with PSP from those with PD, with an area under the receiver operating characteristic curve above 0.92. Conclusion The automated approach successfully segmented and measured the brainstem and ventricular structures. DL-based models may represent a useful approach to support the diagnosis of PSP and potentially other conditions associated with brainstem and ventricular alterations. Keywords: MR Imaging, Brain/Brain Stem, Segmentation, Quantification, Diagnosis, Convolutional Neural Network. Supplemental material is available for this article.
© RSNA, 2024 See also the commentary by Mohajer in this issue.
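The composite measures mentioned above follow the standard planimetric definitions: the midbrain-to-pons area ratio, the MR parkinsonism index MRPI = (pons area / midbrain area) × (MCP width / SCP width), and MRPI 2.0, which scales MRPI by the third ventricle width divided by the frontal horns width. A minimal sketch of these formulas (per the widely used Quattrone definitions, not code from this paper):

```python
def midbrain_pons_ratio(midbrain_area, pons_area):
    """Midbrain-to-pons area ratio (reduced in PSP)."""
    return midbrain_area / pons_area

def mrpi(pons_area, midbrain_area, mcp_width, scp_width):
    """MR parkinsonism index: (P/M) * (MCP/SCP)."""
    return (pons_area / midbrain_area) * (mcp_width / scp_width)

def mrpi_2(pons_area, midbrain_area, mcp_width, scp_width,
           third_ventricle_width, frontal_horns_width):
    """MRPI 2.0: MRPI scaled by the third-ventricle / frontal-horns width ratio."""
    return (mrpi(pons_area, midbrain_area, mcp_width, scp_width)
            * third_ventricle_width / frontal_horns_width)
```

In the study's pipeline, all six inputs are extracted automatically from the DL segmentation masks, so these indices come at no extra manual cost.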
Affiliation(s)
- Salvatore Nigro
- From the Center for Neurodegenerative Diseases and the Aging Brain, University of Bari Aldo Moro, Pia Fondazione Cardinale G. Panico, 73039 Tricase, Italy (S.N., M.F., B.T., A.G., V.G., D.U., G.L.); Department of Translational Biomedicine and Neuroscience (DiBraiN), University of Bari Aldo Moro, Bari, Italy (M.F., B.T., G.M., G.L.); Department of Radiology, Pia Fondazione Cardinale G. Panico, Tricase, Italy (M.N., R.D.B.); Department of Neurosciences, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, England (D.U.); and Operative Unit of Neurology, San Paolo Hospital, ASL Bari, Bari, Italy (S.Z.)
- Marco Filardi
- Benedetta Tafuri
- Martina Nicolardi
- Roberto De Blasi
- Alessia Giugno
- Valentina Gnoni
- Giammarco Milella
- Daniele Urso
- Stefano Zoccolella
- Giancarlo Logroscino
6
Gao C, Wu X, Cheng X, Madsen KH, Chu C, Yang Z, Fan L. Individualized brain mapping for navigated neuromodulation. Chin Med J (Engl) 2024; 137:508-523. [PMID: 38269482] [PMCID: PMC10932519] [DOI: 10.1097/cm9.0000000000002979]
Abstract
The brain is a complex organ that requires precise mapping to understand its structure and function. Brain atlases provide a powerful tool for studying brain circuits, discovering biological markers for early diagnosis, and developing personalized treatments for neuropsychiatric disorders. Neuromodulation techniques, such as transcranial magnetic stimulation and deep brain stimulation, have revolutionized clinical therapies for neuropsychiatric disorders. However, the lack of fine-scale brain atlases limits the precision and effectiveness of these techniques. Advances in neuroimaging and machine learning techniques have led to the emergence of stereotactic-assisted neurosurgery and navigation systems. Still, the individual variability among patients and the diversity of brain diseases make it necessary to develop personalized solutions. The article provides an overview of recent advances in individualized brain mapping and navigated neuromodulation and discusses the methodological profiles, advantages, disadvantages, and future trends of these techniques. It concludes by posing open questions about the future development of individualized brain mapping and navigated neuromodulation.
Affiliation(s)
- Chaohong Gao
- Sino–Danish College, University of Chinese Academy of Sciences, Beijing 100190, China
- Xia Wu
- Brainnetome Center, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Xinle Cheng
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100190, China
- Kristoffer Hougaard Madsen
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kongens Lyngby 2800, Denmark
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre 2650, Denmark
- Congying Chu
- Brainnetome Center, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Zhengyi Yang
- Brainnetome Center, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Lingzhong Fan
- Sino–Danish College, University of Chinese Academy of Sciences, Beijing 100190, China
- Brainnetome Center, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100190, China
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Health and Life Sciences, University of Health and Rehabilitation Sciences, Qingdao, Shandong 266000, China
7
Alipour E, Chalian M, Pooyan A, Azhideh A, Shomal Zadeh F, Jahanian H. Automatic MRI-based rotator cuff muscle segmentation using U-Nets. Skeletal Radiol 2024; 53:537-545. [PMID: 37698626] [DOI: 10.1007/s00256-023-04447-9]
Abstract
BACKGROUND The rotator cuff (RC) is a crucial anatomical element within the shoulder joint, facilitating an extensive array of motions while maintaining joint stability. Comprising the subscapularis, infraspinatus, supraspinatus, and teres minor muscles, the RC plays an integral role in shoulder functionality. RC injuries are prevalent, incapacitating conditions that impose a substantial impact on approximately 8% of the adult population in the USA. Segmentation of these muscles provides valuable anatomical information for evaluating muscle quality and allows for better treatment planning. MATERIALS AND METHODS We developed a model based on a residual deep convolutional encoder-decoder U-Net to segment RC muscles on oblique sagittal T1-weighted MR images. Our data consisted of shoulder MRIs from a cohort of 157 individuals: individuals without RC tendon tear (N=79) and patients with partial RC tendon tear (N=78). We evaluated different modeling approaches, assessing performance by calculating the Dice coefficient on the held-out test set. RESULTS The best-performing model's median Dice coefficient was 89% (Q1: 85%, Q3: 96%) for the supraspinatus, 86% (Q1: 82%, Q3: 88%) for the subscapularis, 86% (Q1: 82%, Q3: 90%) for the infraspinatus, and 78% (Q1: 70%, Q3: 81%) for the teres minor muscle, indicating a satisfactory level of accuracy in the model's predictions. CONCLUSION Our computational models demonstrated the capability to delineate RC muscles with a level of precision akin to that of experienced radiologists. As hypothesized, the proposed algorithm exhibited superior performance when segmenting muscles with well-defined boundaries, including the supraspinatus, subscapularis, and infraspinatus muscles.
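The Dice coefficient used for evaluation here measures volumetric overlap between a predicted and a reference mask: Dice = 2|A∩B| / (|A| + |B|). A minimal sketch of the generic metric (not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(pred, ref, eps=1e-8):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|).

    `eps` guards against division by zero when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum() + eps)
```

A value of 1.0 means perfect overlap; the per-muscle medians reported above (78-89%) are computed this way over the held-out test set.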
Affiliation(s)
- Ehsan Alipour
- Department of Radiology, Division of Musculoskeletal Imaging and Intervention, University of Washington, UW Radiology-Roosevelt Clinic, 4245 Roosevelt Way NE, Box, Seattle, WA, 354755, USA
- Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, WA, USA
- Majid Chalian
- Department of Radiology, Division of Musculoskeletal Imaging and Intervention, University of Washington, UW Radiology-Roosevelt Clinic, 4245 Roosevelt Way NE, Box, Seattle, WA, 354755, USA
- Atefe Pooyan
- Department of Radiology, Division of Musculoskeletal Imaging and Intervention, University of Washington, UW Radiology-Roosevelt Clinic, 4245 Roosevelt Way NE, Box, Seattle, WA, 354755, USA
- Arash Azhideh
- Department of Radiology, Division of Musculoskeletal Imaging and Intervention, University of Washington, UW Radiology-Roosevelt Clinic, 4245 Roosevelt Way NE, Box, Seattle, WA, 354755, USA
- Firoozeh Shomal Zadeh
- Department of Radiology, Division of Musculoskeletal Imaging and Intervention, University of Washington, UW Radiology-Roosevelt Clinic, 4245 Roosevelt Way NE, Box, Seattle, WA, 354755, USA
Collapse
|
8
|
Liu H, Nie D, Yang J, Wang J, Tang Z. A New Multi-Atlas Based Deep Learning Segmentation Framework With Differentiable Atlas Feature Warping. IEEE J Biomed Health Inform 2024; 28:1484-1493. PMID: 38113158. DOI: 10.1109/jbhi.2023.3344646.
Abstract
Deep learning based multi-atlas segmentation (DL-MA) has achieved state-of-the-art performance in many medical image segmentation tasks, e.g., brain parcellation. In DL-MA methods, atlas-target correspondence is the key to accurate segmentation. In most existing DL-MA methods, such correspondence is established using traditional or deep learning based registration methods at the image level, with no further feature-level adaptation. This can cause atlas-target feature inconsistency; as a result, the information from atlases often has limited positive, or even counteractive, impact on the final segmentation results. To tackle this issue, we propose a new DL-MA framework in which a novel differentiable atlas feature warping module with a new smooth regularization term establishes feature-level atlas-target correspondence. Compared with existing DL-MA methods, the atlas features containing anatomical prior knowledge are more relevant to the target image features in our framework, leading to more accurate final segmentation results. We evaluate our framework in the context of brain parcellation using two public MR brain image datasets: LPBA40 and NIREP-NA0. The experimental results demonstrate that our framework outperforms both traditional multi-atlas segmentation (MAS) and state-of-the-art DL-MA methods with statistical significance. Further ablation studies confirm the effectiveness of the proposed differentiable atlas feature warping module.
9
Tang L, Kebaya LMN, Altamimi T, Kowalczyk A, Musabi M, Roychaudhuri S, Vahidi H, Meyerink P, de Ribaupierre S, Bhattacharya S, de Moraes LTAR, St Lawrence K, Duerden EG. Altered resting-state functional connectivity in newborns with hypoxic ischemic encephalopathy assessed using high-density functional near-infrared spectroscopy. Sci Rep 2024; 14:3176. PMID: 38326455. PMCID: PMC10850364. DOI: 10.1038/s41598-024-53256-0.
Abstract
Hypoxic-ischemic encephalopathy (HIE) results from a lack of oxygen to the brain during the perinatal period. HIE can lead to mortality and various acute and long-term morbidities. Improved bedside monitoring methods are needed to identify biomarkers of brain health. Functional near-infrared spectroscopy (fNIRS) can assess resting-state functional connectivity (RSFC) at the bedside. We acquired resting-state fNIRS data from 21 neonates with HIE (postmenstrual age [PMA] = 39.96); in 19 of these neonates, the scans were acquired after therapeutic hypothermia (TH). We also acquired data from 20 term-born healthy newborns (PMA = 39.93). Twelve HIE neonates additionally underwent resting-state functional magnetic resonance imaging (fMRI) post-TH. RSFC was calculated as correlation coefficients among the time courses for the fNIRS and fMRI data, respectively. The fNIRS and fMRI RSFC maps were comparable. RSFC patterns were then measured with graph theory metrics and compared between HIE infants and healthy controls. HIE newborns showed significantly increased clustering coefficients, network efficiency, and modularity compared to controls. Using a support vector machine algorithm, RSFC features demonstrated good performance in classifying the HIE and healthy newborns into separate groups. Our results indicate the utility of fNIRS-connectivity patterns as potential biomarkers for HIE and of fNIRS as a new bedside tool for newborns with HIE.
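The RSFC and graph-theory analysis described above can be sketched generically: correlate channel time courses, binarize the connectivity matrix, and compute per-node clustering coefficients. This is a hedged NumPy sketch, not the study's pipeline; the 0.5 threshold and synthetic data are assumptions for illustration only.

```python
import numpy as np

def rsfc_matrix(timecourses: np.ndarray) -> np.ndarray:
    """Pearson correlations among channels; rows are channels, columns are time points."""
    return np.corrcoef(timecourses)

def clustering_coefficients(corr: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarize the RSFC matrix and compute each node's clustering coefficient."""
    adj = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    deg = adj.sum(axis=1)
    triangles = np.diag(adj @ adj @ adj) / 2.0  # closed triangles through each node
    with np.errstate(divide="ignore", invalid="ignore"):
        c = np.where(deg > 1, 2.0 * triangles / (deg * (deg - 1)), 0.0)
    return c

rng = np.random.default_rng(0)
ts = rng.standard_normal((6, 200))  # 6 synthetic channels, 200 time points
corr = rsfc_matrix(ts)
print(clustering_coefficients(corr).shape)  # one coefficient per channel
```

Group comparisons (HIE vs. controls) would then be run on such per-node or network-averaged metrics.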
Affiliation(s)
- Lingkai Tang
- Biomedical Engineering, Faculty of Engineering, Western University, London, ON, Canada
- Lilian M N Kebaya
- Neuroscience, Schulich Faculty of Medicine and Dentistry, Western University, London, ON, Canada
- Neonatal-Perinatal Medicine, Schulich Faculty of Medicine and Dentistry, Western University, London, ON, Canada
- Department of Paediatrics, Division of Neonatal-Perinatal Medicine, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Talal Altamimi
- Neonatal-Perinatal Medicine, Schulich Faculty of Medicine and Dentistry, Western University, London, ON, Canada
- Alexandra Kowalczyk
- Neonatal-Perinatal Medicine, Schulich Faculty of Medicine and Dentistry, Western University, London, ON, Canada
- Melab Musabi
- Neonatal-Perinatal Medicine, Schulich Faculty of Medicine and Dentistry, Western University, London, ON, Canada
- Sriya Roychaudhuri
- Neonatal-Perinatal Medicine, Schulich Faculty of Medicine and Dentistry, Western University, London, ON, Canada
- Homa Vahidi
- Neuroscience, Schulich Faculty of Medicine and Dentistry, Western University, London, ON, Canada
- Paige Meyerink
- Neonatal-Perinatal Medicine, Schulich Faculty of Medicine and Dentistry, Western University, London, ON, Canada
- Sandrine de Ribaupierre
- Neuroscience, Schulich Faculty of Medicine and Dentistry, Western University, London, ON, Canada
- Clinical Neurological Sciences, Schulich Faculty of Medicine and Dentistry, Western University, London, ON, Canada
- Soume Bhattacharya
- Neonatal-Perinatal Medicine, Schulich Faculty of Medicine and Dentistry, Western University, London, ON, Canada
- Keith St Lawrence
- Biomedical Engineering, Faculty of Engineering, Western University, London, ON, Canada
- Medical Biophysics, Schulich Faculty of Medicine and Dentistry, Western University, London, ON, Canada
- Emma G Duerden
- Biomedical Engineering, Faculty of Engineering, Western University, London, ON, Canada
- Neuroscience, Schulich Faculty of Medicine and Dentistry, Western University, London, ON, Canada
- Applied Psychology, Faculty of Education, Western University, 1137 Western Rd, London, ON, N6G 1G7, Canada

10
He Y, Ge R, Qi X, Chen Y, Wu J, Coatrieux JL, Yang G, Li S. Learning Better Registration to Learn Better Few-Shot Medical Image Segmentation: Authenticity, Diversity, and Robustness. IEEE Trans Neural Netw Learn Syst 2024; 35:2588-2601. PMID: 35895657. DOI: 10.1109/tnnls.2022.3190452.
Abstract
In this work, we address the task of few-shot medical image segmentation (MIS) with a novel framework based on the learning registration to learn segmentation (LRLS) paradigm. To cope with the lack of authenticity, diversity, and robustness in existing LRLS frameworks, we propose the better registration better segmentation (BRBS) framework with three main contributions that are experimentally shown to have substantial practical merit. First, we improve authenticity in the registration-based generation program and propose a knowledge consistency constraint strategy that constrains the registration network to learn according to domain knowledge. This yields semantic-aligned and topology-preserved registration, allowing the generation program to output new data with great spatial and style authenticity. Second, we study the diversity of the generation process in depth and propose a space-style sampling program, which introduces modeling of the transformation paths of style and spatial change between the few atlases and the numerous unlabeled images into the generation program. Sampling along these transformation paths provides much more diverse spatial and style features to the generated data, effectively improving diversity. Third, we are the first to highlight robustness in the learning of segmentation under the LRLS paradigm, and we propose a mix misalignment regularization, which simulates misalignment distortion and constrains the network to reduce the degree of fitting in misaligned regions, thereby regularizing these regions and improving the robustness of segmentation learning. Without any bells and whistles, our approach achieves a new state-of-the-art performance in few-shot MIS on two challenging tasks, outperforming the existing LRLS-based few-shot methods. We believe that this novel and effective framework will provide a powerful few-shot benchmark for the field of medical imaging and efficiently reduce the costs of medical image research. All of our code will be made publicly available online.
11
Krokos G, Kotwal T, Malaih A, Barrington S, Jackson P, Hicks RJ, Marsden PK, Fischer BM. Evaluation of manual and automated approaches for segmentation and extraction of quantitative indices from [18F]FDG PET-CT images. Biomed Phys Eng Express 2024; 10:025007. PMID: 38100790. PMCID: PMC10767880. DOI: 10.1088/2057-1976/ad160e.
Abstract
Utilisation of whole organ volumes to extract anatomical and functional information from computed tomography (CT) and positron emission tomography (PET) images may provide key information for the treatment and follow-up of cancer patients. However, manual organ segmentation is laborious and time-consuming. In this study, a CT-based deep learning method and a multi-atlas method were evaluated for segmenting the liver and spleen on CT images to extract quantitative tracer information from Fluorine-18 fluorodeoxyglucose ([18F]FDG) PET images of 50 patients with advanced Hodgkin lymphoma (HL). Manual segmentation was used as the reference method. The two automatic methods were also compared with a manually defined volume of interest (VOI) within the organ, a technique commonly performed in clinical settings. Both automatic methods provided accurate CT segmentations, with the deep learning method outperforming the multi-atlas method with a Dice coefficient of 0.93 ± 0.03 (mean ± standard deviation) in the liver and 0.87 ± 0.17 in the spleen, compared to 0.87 ± 0.05 (liver) and 0.78 ± 0.11 (spleen) for the multi-atlas method. Similarly, a mean relative error of -3.2% for the liver and -3.4% for the spleen across patients was found for the mean standardized uptake value (SUVmean) using the deep learning regions, while the corresponding errors for the multi-atlas method were -4.7% and -9.2%, respectively. For the maximum SUV (SUVmax), both methods resulted in overestimation of more than 20% due to the extension of organ boundaries into neighbouring, high-uptake regions. The conservative VOI method, which did not extend into neighbouring tissues, provided a more accurate SUVmax estimate. In conclusion, the automatic methods, and particularly the deep learning method, could be used to rapidly extract SUVmean information within the liver and spleen. However, activity from neighbouring organs and lesions can lead to high biases in SUVmax, and the current practice of manually defining a volume of interest in the organ should be considered instead.
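The SUVmean/SUVmax extraction step can be illustrated with the standard body-weight SUV formula (SUV = tissue activity concentration × body weight / injected dose). A minimal sketch, not the study's code; the function name, toy values, and the assumption of decay-corrected inputs are illustrative.

```python
import numpy as np

def suv_stats(activity_bq_ml: np.ndarray, organ_mask: np.ndarray,
              injected_dose_bq: float, body_weight_g: float):
    """Body-weight SUV inside a mask (decay-corrected inputs assumed).

    SUV = activity concentration (Bq/mL) * body weight (g) / injected dose (Bq).
    Returns (SUVmean, SUVmax) over the masked voxels.
    """
    suv = activity_bq_ml * body_weight_g / injected_dose_bq
    vals = suv[organ_mask.astype(bool)]
    return float(vals.mean()), float(vals.max())

dose = 3.7e8        # illustrative injected dose in Bq (~10 mCi)
weight = 70_000.0   # 70 kg body weight in grams
# Toy 2x2 "image" whose SUV values are exactly 1, 2, 3, 2
conc = np.array([[1.0, 2.0], [3.0, 2.0]]) * dose / weight
mask = np.array([[1, 1], [0, 1]])  # organ mask excludes the high-uptake voxel
mean_suv, max_suv = suv_stats(conc, mask, dose, weight)
print(mean_suv, max_suv)  # mean ≈ 1.67, max = 2.0
```

The mask boundary matters exactly as the abstract describes: extending the mask to include the excluded voxel (SUV 3) would inflate SUVmax while barely moving SUVmean.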
Affiliation(s)
- Georgios Krokos
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Tejas Kotwal
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Afnan Malaih
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Sally Barrington
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Rodney J Hicks
- Department of Medicine, St Vincent’s Hospital Medical School, the University of Melbourne, Australia
- Paul K Marsden
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Barbara Malene Fischer
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, Copenhagen, Denmark
- Department of Clinical Medicine, University of Copenhagen, Denmark

12
Zhu J, Wang C, Teng S, Lu J, Lyu P, Zhang P, Xu J, Lu L, Teng GJ. Embedding expertise knowledge into inverse treatment planning for low-dose-rate brachytherapy of hepatic malignancies. Med Phys 2024; 51:348-362. PMID: 37475484. DOI: 10.1002/mp.16627.
Abstract
BACKGROUND Leveraging the precision of its radiation dose distribution and the minimization of postoperative complications, low-dose-rate (LDR) permanent seed brachytherapy is progressively being adopted for hepatic malignancies. PURPOSE The present study aims to devise a sophisticated treatment planning system (TPS) to optimize LDR brachytherapy for hepatic lesions. METHODS Our TPS encompasses four integral modules: multi-organ segmentation, seed distribution initialization, puncture pathway selection, and inverse dose planning. By combining an array of deep learning models, the segmentation module proficiently labels 17 discrete abdominal targets within the images. We introduce a knowledge-based seed distribution initialization method that identifies the reference treatment plan with the most similar tumor shape in the knowledge base; the seed distribution from the reference plan is then transferred to the current case to establish the initialization. Furthermore, we parameterize the puncture needles and seeds while constraining the puncture needle angle through a virtual puncture panel to improve the efficiency of the planning algorithm. We also present a user interface that includes a range of interactive features, seamlessly integrated with the treatment plan generation function. RESULTS The multi-organ segmentation module, trained on 50 in-house CT scans and 694 publicly available CT scans, achieved an average Dice of 0.80 and a Hausdorff distance of 5.2 mm on the testing datasets. The results demonstrate that knowledge-based initialization markedly accelerates convergence. Our TPS also shows a clear advantage in dose-volume-histogram criteria and execution time compared to a commercial TPS. CONCLUSION This study proposes an innovative treatment planning system for low-dose-rate permanent seed brachytherapy of hepatic malignancies. We show that the generated treatment plans meet clinical requirements.
Affiliation(s)
- Jianjun Zhu
- Hanglok-Tech Co., Ltd., Hengqin, China
- Center of Interventional Radiology and Vascular Surgery, Department of Radiology, Zhongda Hospital, Medical School, Southeast University, Nanjing, China
- Jian Lu
- Center of Interventional Radiology and Vascular Surgery, Department of Radiology, Zhongda Hospital, Medical School, Southeast University, Nanjing, China
- Jun Xu
- Nanjing University of Information Science & Technology, Nanjing, China
- Ligong Lu
- Zhuhai People's Hospital, Zhuhai Hospital Affiliated with Jinan University, Zhuhai, Guangdong, China
- Gao-Jun Teng
- Center of Interventional Radiology and Vascular Surgery, Department of Radiology, Zhongda Hospital, Medical School, Southeast University, Nanjing, China

13
Luan S, Wu K, Wu Y, Zhu B, Wei W, Xue X. Accurate and robust auto-segmentation of head and neck organ-at-risks based on a novel CNN fine-tuning workflow. J Appl Clin Med Phys 2024; 25:e14248. PMID: 38128058. PMCID: PMC10795444. DOI: 10.1002/acm2.14248.
Abstract
PURPOSE Obvious inconsistencies exist among the auto-segmentations produced by various AI software packages. In this study, we developed a novel convolutional neural network (CNN) fine-tuning workflow to achieve precise and robust localized segmentation. METHODS The datasets include the Hubei Cancer Hospital dataset, the Cetuximab Head and Neck Public Dataset, and the Québec Public Dataset. Seven organs at risk (OARs) were selected: brain stem, left parotid gland, esophagus, left optic nerve, optic chiasm, mandible, and pharyngeal constrictor. The auto-segmentation results from four commercial AI software packages were first compared with the manual delineations. Then a new multi-scale lightweight residual CNN model with an attention module (named HN-Net) was trained and tested on 40 and 10 samples from Hubei Cancer Hospital, respectively. To enhance the network's accuracy and generalization ability, the fine-tuning workflow used an uncertainty estimation method to automatically select worthwhile candidate samples from the Cetuximab Head and Neck Public Dataset for further training. Segmentation performance was evaluated on the Hubei Cancer Hospital dataset and/or the entire Québec Public Dataset. RESULTS Maximum differences of 0.13 in average Dice value and 0.7 mm in Hausdorff distance across the seven OARs were observed among the four AI software packages. The proposed HN-Net achieved an average Dice value 0.14 higher than that of the AI software, and it also outperformed other popular CNN models (HN-Net: 0.79, U-Net: 0.78, U-Net++: 0.78, U-Net-Multi-scale: 0.77, AI software: 0.65). Additionally, fine-tuning HN-Net with the local datasets and external public datasets further improved the average Dice value by 0.02. CONCLUSION The delineations of commercial AI software need to be carefully reviewed, and localized further training is necessary for clinical practice. The proposed fine-tuning workflow can feasibly be adopted to implement an accurate and robust auto-segmentation model using local datasets and external public datasets.
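The uncertainty estimation used to pick candidate samples is not specified in detail here; a common choice is to rank unlabeled samples by the entropy of the model's predicted class probabilities and fine-tune on the most uncertain ones. A hedged sketch under that assumption (all names and numbers are illustrative, not the paper's method):

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Per-sample entropy of class probabilities (each row sums to 1)."""
    p = np.clip(probs, 1e-12, 1.0)  # avoid log(0)
    return -(p * np.log(p)).sum(axis=1)

def select_uncertain(probs: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k samples with the highest predictive entropy."""
    h = predictive_entropy(probs)
    return np.argsort(h)[::-1][:k]

# Toy softmax outputs for three candidate samples
probs = np.array([[0.98, 0.02],   # confident
                  [0.55, 0.45],   # most uncertain
                  [0.70, 0.30]])  # moderately uncertain
print(select_uncertain(probs, 2))  # most uncertain first: [1 2]
```

For segmentation the same idea is usually applied voxel-wise and aggregated per image before ranking candidates.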
Affiliation(s)
- Shunyao Luan
- Department of Radiation Oncology, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- School of Integrated Circuits, Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Kun Wu
- Department of Radiation Oncology, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yuan Wu
- Department of Radiation Oncology, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Benpeng Zhu
- School of Integrated Circuits, Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Wei Wei
- Department of Radiation Oncology, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Xudong Xue
- Department of Radiation Oncology, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China

14
Fernandes MG, Bussink J, Wijsman R, Stam B, Monshouwer R. Estimating how contouring differences affect normal tissue complication probability modelling. Phys Imaging Radiat Oncol 2024; 29:100533. PMID: 38292649. PMCID: PMC10825684. DOI: 10.1016/j.phro.2024.100533.
Abstract
Background and purpose Normal tissue complication probability (NTCP) models are developed from large retrospective datasets in which automatic contouring is often used to contour the organs at risk. This study proposes a methodology to estimate how discrepancies between two sets of contours are reflected in NTCP model performance. We apply this methodology to heart contours within a dataset of non-small cell lung cancer (NSCLC) patients. Materials and methods One of the contour sets is designated the ground truth, and a dosimetric parameter derived from it is used to simulate outcomes via a predefined NTCP relationship. For each simulated outcome, the selected dosimetric parameters associated with each contour set are individually used to fit a toxicity model, and their performance is compared. Our dataset comprised 605 stage IIA-IIIB NSCLC patients. Manual, deep learning, and atlas-based heart contours were available. Results How contour differences were reflected in NTCP model performance depended on the slope of the predefined model, the dosimetric parameter used, and the size of the cohort. The impact of contour differences on NTCP model performance increased with steeper NTCP curves. In our dataset, parameters on the low range of the dose-volume histogram were more robust to contour differences. Conclusions Our methodology can be used to estimate whether a given contouring model is fit for NTCP model development. For the heart in comparable datasets, the average Dice should be at least as high as that between our manual and deep learning contours for shallow NTCP relationships (88.5 ± 4.5%) and higher for steep relationships.
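The predefined NTCP relationship used to simulate outcomes is not given here; a common parameterization is a log-logistic dose-response curve with D50 (dose at 50% complication probability) and γ50 (normalized slope at D50), where steeper curves correspond to larger γ50. A sketch under that assumption, not the paper's model:

```python
import math

def ntcp_log_logistic(dose: float, d50: float, gamma50: float) -> float:
    """Log-logistic NTCP curve: probability is 0.5 at d50; gamma50 controls steepness."""
    if dose <= 0:
        return 0.0
    return 1.0 / (1.0 + (d50 / dose) ** (4.0 * gamma50))

# Probability is 0.5 exactly at D50, regardless of slope
print(ntcp_log_logistic(30.0, 30.0, 1.5))  # 0.5
# A steeper curve (larger gamma50) rises faster above D50
print(ntcp_log_logistic(40.0, 30.0, 3.0) > ntcp_log_logistic(40.0, 30.0, 1.0))  # True
```

Simulated outcomes would then be Bernoulli draws at each patient's NTCP value, after which toxicity models fitted from each contour set can be compared, as the abstract describes.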
Affiliation(s)
- Johan Bussink
- Department of Radiation Oncology, Radboud University Medical Center, Nijmegen, The Netherlands
- Robin Wijsman
- Department of Radiation Oncology, University Medical Center Groningen, Groningen, The Netherlands
- Barbara Stam
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- René Monshouwer
- Department of Radiation Oncology, Radboud University Medical Center, Nijmegen, The Netherlands

15
Zhong Y, Pei Y, Nie K, Zhang Y, Xu T, Zha H. Bi-Graph Reasoning for Masticatory Muscle Segmentation From Cone-Beam Computed Tomography. IEEE Trans Med Imaging 2023; 42:3690-3701. PMID: 37566502. DOI: 10.1109/tmi.2023.3304557.
Abstract
Automated segmentation of masticatory muscles is a challenging task considering ambiguous soft tissue attachments and image artifacts of low-radiation cone-beam computed tomography (CBCT) images. In this paper, we propose a bi-graph reasoning model (BGR) for the simultaneous detection and segmentation of multi-category masticatory muscles from CBCTs. The BGR exploits the local and long-range interdependencies of regions of interest and category-specific prior knowledge of masticatory muscles by reasoning on the category graph and the region graph. The category graph of the learnable muscle prior knowledge handles high-level dependencies of muscle categories, enhancing the feature representation with noise-agnostic category knowledge. The region graph models both local and global dependencies of the candidate muscle regions of interest. The proposed BGR accommodates the high-level dependencies and enhances the region features in the presence of entangled soft tissue and image artifacts. We evaluated the proposed approach by segmenting masticatory muscles on clinically acquired CBCTs. Extensive experimental results show that the BGR effectively segments masticatory muscles with state-of-the-art accuracy.
16
Chen X, Peng Y, Li D, Sun J. DMCA-GAN: Dual Multilevel Constrained Attention GAN for MRI-Based Hippocampus Segmentation. J Digit Imaging 2023; 36:2532-2553. PMID: 37735310. PMCID: PMC10584805. DOI: 10.1007/s10278-023-00854-5.
Abstract
Precise segmentation of the hippocampus is essential for studies of human brain activity and neurological disorders. To overcome the small size of the hippocampus and the low contrast of MR images, a dual multilevel constrained attention GAN for MRI-based hippocampus segmentation is proposed in this paper, which provides an effective balance between suppressing noise interference and enhancing feature learning. First, we design the dual-GAN backbone to effectively compensate for the spatial information damage caused by multiple pooling operations in the feature generation stage. Specifically, dual-GAN performs joint adversarial learning on the multiscale feature maps at the end of the generator, which yields an average Dice coefficient (DSC) gain of 5.95% over the baseline. Next, to suppress MRI high-frequency noise interference, a multilayer information constraint unit is introduced before feature decoding, which improves the sensitivity of the decoder to forecast features by 5.39% and effectively alleviates network overfitting. Then, to refine boundary segmentation, we construct a multiscale feature attention restraint mechanism, which forces the network to concentrate on effective multiscale details, thus improving robustness. Furthermore, the dual discriminators D1 and D2 also effectively prevent negative transfer. The proposed DMCA-GAN obtained a DSC of 90.53% on the Medical Segmentation Decathlon (MSD) dataset with tenfold cross-validation, which is superior to the backbone by 3.78%.
Affiliation(s)
- Xue Chen
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China
- Yanjun Peng
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China
- Shandong Province Key Laboratory of Wisdom Mining Information Technology, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China
- Dapeng Li
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China
- Jindong Sun
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China

17
Wang X, Liu S, Yang N, Chen F, Ma L, Ning G, Zhang H, Qiu X, Liao H. A Segmentation Framework With Unsupervised Learning-Based Label Mapper for the Ventricular Target of Intracranial Germ Cell Tumor. IEEE J Biomed Health Inform 2023; 27:5381-5392. PMID: 37651479. DOI: 10.1109/jbhi.2023.3310492.
Abstract
Intracranial germ cell tumors are rare tumors that mainly affect children and adolescents. Radiotherapy is the cornerstone of interdisciplinary treatment. Irradiating the whole ventricular system together with the local tumor can reduce late-stage complications of radiotherapy while ensuring the curative effect. However, manually delineating the ventricular system is labor-intensive and time-consuming for physicians, and the diverse ventricle shapes and hydrocephalus-induced ventricle dilation increase the difficulty for automatic segmentation algorithms. Therefore, this study proposes a fully automatic segmentation framework. First, we designed a novel unsupervised learning-based label mapper, which handles ventricle shape variations and produces a preliminary segmentation result. Then, to boost the segmentation performance of the framework, we improved the region growing algorithm and combined it with a fully connected conditional random field to optimize the preliminary results at both the regional and voxel scales. With only one set of annotated data required, the average time cost is 153.01 s, and the average target segmentation accuracy reaches 84.69%. Furthermore, we verified the algorithm in practical clinical applications. The results demonstrate that our proposed method helps physicians delineate radiotherapy targets, is feasible and clinically practical, and may fill the gap in automatic delineation methods for the ventricular target of intracranial germ cell tumors.
18
Jimenez-Mesa C, Arco JE, Martinez-Murcia FJ, Suckling J, Ramirez J, Gorriz JM. Applications of machine learning and deep learning in SPECT and PET imaging: General overview, challenges and future prospects. Pharmacol Res 2023; 197:106984. PMID: 37940064. DOI: 10.1016/j.phrs.2023.106984.
Abstract
The integration of positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging techniques with machine learning (ML) algorithms, including deep learning (DL) models, is a promising approach. This integration enhances the precision and efficiency of current diagnostic and treatment strategies while offering invaluable insights into disease mechanisms. In this comprehensive review, we examine the transformative impact of ML and DL in this domain. First, we briefly analyze how these algorithms have evolved and which are the most widely applied in this field. We then discuss their potential applications in nuclear imaging, such as optimization of image acquisition or reconstruction, biomarker identification, multimodal fusion, and the development of diagnostic, prognostic, and disease progression evaluation systems, which are enabled by their ability to analyse complex patterns and relationships within imaging data and to extract quantitative and objective measures. Furthermore, we discuss the challenges in implementation, such as data standardization and limited sample sizes, and explore the clinical opportunities and future horizons, including data augmentation and explainable AI. Together, these factors are propelling the continuous advancement of more robust, transparent, and reliable systems.
Affiliation(s)
- Carmen Jimenez-Mesa
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain
- Juan E Arco
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain; Department of Communications Engineering, University of Malaga, 29010, Spain
- John Suckling
- Department of Psychiatry, University of Cambridge, Cambridge CB21TN, UK
- Javier Ramirez
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain
- Juan Manuel Gorriz
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain; Department of Psychiatry, University of Cambridge, Cambridge CB21TN, UK.
19
Kebaya LMN, Kapoor B, Mayorga PC, Meyerink P, Foglton K, Altamimi T, Nichols ES, de Ribaupierre S, Bhattacharya S, Tristao L, Jurkiewicz MT, Duerden EG. Subcortical brain volumes in neonatal hypoxic-ischemic encephalopathy. Pediatr Res 2023; 94:1797-1803. [PMID: 37353661 DOI: 10.1038/s41390-023-02695-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/20/2023] [Revised: 05/07/2023] [Accepted: 05/21/2023] [Indexed: 06/25/2023]
Abstract
BACKGROUND Despite treatment with therapeutic hypothermia, hypoxic-ischemic encephalopathy (HIE) is associated with adverse developmental outcomes, suggesting the involvement of subcortical structures including the thalamus and basal ganglia, which may be vulnerable to perinatal asphyxia, particularly during the acute period. The aims were: (1) to examine subcortical macrostructure in neonates with HIE compared to age- and sex-matched healthy neonates within the first week of life; (2) to determine whether subcortical brain volumes are associated with HIE severity. METHODS Neonates (n = 56; HIE: n = 28; Healthy newborns from the Developing Human Connectome Project: n = 28) were scanned with MRI within the first week of life. Subcortical volumes were automatically extracted from T1-weighted images. General linear models assessed between-group differences in subcortical volumes, adjusting for sex, gestational age, postmenstrual age, and total cerebral volumes. Within-group analyses evaluated the association between subcortical volumes and HIE severity. RESULTS Neonates with HIE had smaller bilateral thalamic, basal ganglia and right hippocampal and cerebellar volumes compared to controls (all, p < 0.02). Within the HIE group, mild HIE severity was associated with smaller volumes of the left and right basal ganglia (both, p < 0.007) and the left hippocampus and thalamus (both, p < 0.04). CONCLUSIONS Findings suggest that, despite advances in neonatal care, HIE is associated with significant alterations in subcortical brain macrostructure. IMPACT Compared to their healthy counterparts, infants with HIE demonstrate significant alterations in subcortical brain macrostructure on MRI acquired as early as 4 days after birth. Smaller subcortical volumes impacting sensory and motor regions, including the thalamus, basal ganglia, and cerebellum, were seen in infants with HIE. Mild and moderate HIE were associated with smaller subcortical volumes.
Affiliation(s)
- Lilian M N Kebaya
- Neuroscience program, Western University, London, ON, Canada.
- Division of Neonatal-Perinatal Medicine, Department of Paediatrics, London Health Sciences Centre, London, ON, Canada.
- Bhavya Kapoor
- Applied Psychology, Faculty of Education, Western University, London, ON, Canada
- Western Institute for Neuroscience, Western University, London, ON, Canada
- Paula Camila Mayorga
- Division of Neonatal-Perinatal Medicine, Department of Paediatrics, London Health Sciences Centre, London, ON, Canada
- Paige Meyerink
- Division of Neonatal-Perinatal Medicine, Department of Paediatrics, London Health Sciences Centre, London, ON, Canada
- Kathryn Foglton
- Division of Neonatal-Perinatal Medicine, Department of Paediatrics, London Health Sciences Centre, London, ON, Canada
- Talal Altamimi
- Division of Neonatal-Perinatal Medicine, Department of Paediatrics, London Health Sciences Centre, London, ON, Canada
- Division of Neonatal Intensive Care, Department of Pediatrics, College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Emily S Nichols
- Applied Psychology, Faculty of Education, Western University, London, ON, Canada
- Western Institute for Neuroscience, Western University, London, ON, Canada
- Sandrine de Ribaupierre
- Neuroscience program, Western University, London, ON, Canada
- Western Institute for Neuroscience, Western University, London, ON, Canada
- Department of Clinical Neurological Sciences, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Children's Health Research Institute, London, ON, Canada
- Soume Bhattacharya
- Division of Neonatal-Perinatal Medicine, Department of Paediatrics, London Health Sciences Centre, London, ON, Canada
- Leandro Tristao
- Department of Medical Imaging, London Health Sciences Centre, London, ON, Canada
- Michael T Jurkiewicz
- Neuroscience program, Western University, London, ON, Canada
- Western Institute for Neuroscience, Western University, London, ON, Canada
- Department of Clinical Neurological Sciences, Schulich School of Medicine and Dentistry, Western University, London, ON, Canada
- Department of Medical Imaging, London Health Sciences Centre, London, ON, Canada
- Emma G Duerden
- Neuroscience program, Western University, London, ON, Canada
- Applied Psychology, Faculty of Education, Western University, London, ON, Canada
- Western Institute for Neuroscience, Western University, London, ON, Canada
- Children's Health Research Institute, London, ON, Canada
20
Marin-Castrillon DM, Geronzi L, Boucher A, Lin S, Morgant MC, Cochet A, Rochette M, Leclerc S, Ambarki K, Jin N, Aho LS, Lalande A, Bouchot O, Presles B. Segmentation of the aorta in systolic phase from 4D flow MRI: multi-atlas vs. deep learning. MAGMA (NEW YORK, N.Y.) 2023; 36:687-700. [PMID: 36800143 DOI: 10.1007/s10334-023-01066-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/15/2022] [Revised: 11/26/2022] [Accepted: 01/24/2023] [Indexed: 02/18/2023]
Abstract
OBJECTIVE In the management of aortic aneurysm, 4D flow magnetic resonance imaging (MRI) provides valuable information for the computation of new biomarkers using computational fluid dynamics (CFD). However, accurate segmentation of the aorta is required. Thus, our objective was to evaluate the performance of two automatic segmentation methods on the calculation of aortic wall pressure. METHODS Automatic segmentation of the aorta was performed with methods based on deep learning and multi-atlas approaches, using the systolic phase of the 4D flow MRI magnitude image of 36 patients. Using mesh morphing, isotopological meshes were generated, and CFD was performed to calculate the aortic wall pressure. Node-to-node comparisons of the pressure results were made to identify the automatic method most robust with respect to the pressures obtained with a manually segmented model. RESULTS The deep learning approach presented the best segmentation performance, with a mean Dice similarity coefficient and a mean Hausdorff distance (HD) of 0.92 ± 0.02 and 21.02 ± 24.20 mm, respectively. At the global level, HD is affected by the performance in the abdominal aorta. Locally, this distance decreases to 9.41 ± 3.45 mm and 5.82 ± 6.23 mm for the ascending and descending thoracic aorta, respectively. Moreover, with respect to the pressures from the manual segmentations, the differences in the pressures computed from deep learning were lower than those computed from the multi-atlas method. CONCLUSION To reduce biases in the calculation of aortic wall pressure, accurate segmentation is needed, particularly in regions with high blood flow velocities. Thus, the deep learning segmentation method should be preferred.
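The two agreement metrics reported above, the Dice similarity coefficient and the Hausdorff distance, can be sketched for binary segmentation masks as follows. This is a minimal illustration using NumPy and SciPy; the function names and the uniform `spacing` parameter are ours, not from the paper, and the Hausdorff distance is computed over full voxel sets rather than extracted surfaces.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(a, b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(a, b, spacing=1.0):
    """Symmetric Hausdorff distance between the voxel sets of two masks.

    With `spacing` in mm per voxel, the result is in mm.
    """
    pa = np.argwhere(a) * spacing
    pb = np.argwhere(b) * spacing
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
```

A Dice of 0.92 with an HD of 21 mm, as reported above, is exactly the pattern these two metrics are designed to disentangle: high overall overlap with a localized large boundary error.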
Affiliation(s)
- Arnaud Boucher
- Imaging and Artificial Vision Research Laboratory, University of Burgundy, Dijon, France
- Siyu Lin
- Imaging and Artificial Vision Research Laboratory, University of Burgundy, Dijon, France
- Marie-Catherine Morgant
- Imaging and Artificial Vision Research Laboratory, University of Burgundy, Dijon, France
- Department of Cardiovascular and Thoracic Surgery, University Hospital of Dijon, Dijon, France
- Alexandre Cochet
- Imaging and Artificial Vision Research Laboratory, University of Burgundy, Dijon, France
- Medical Imaging Department, University Hospital of Dijon, Dijon, France
- Sarah Leclerc
- Imaging and Artificial Vision Research Laboratory, University of Burgundy, Dijon, France
- Ning Jin
- Siemens Medical Solutions, Nancy, France
- Ludwig Serge Aho
- Department of Epidemiology and Hygiene, University Hospital of Dijon, Dijon, France
- Alain Lalande
- Imaging and Artificial Vision Research Laboratory, University of Burgundy, Dijon, France
- Medical Imaging Department, University Hospital of Dijon, Dijon, France
- Olivier Bouchot
- Imaging and Artificial Vision Research Laboratory, University of Burgundy, Dijon, France
- Department of Cardiovascular and Thoracic Surgery, University Hospital of Dijon, Dijon, France
- Benoit Presles
- Imaging and Artificial Vision Research Laboratory, University of Burgundy, Dijon, France.
21
Krokos G, MacKewn J, Dunn J, Marsden P. A review of PET attenuation correction methods for PET-MR. EJNMMI Phys 2023; 10:52. [PMID: 37695384 PMCID: PMC10495310 DOI: 10.1186/s40658-023-00569-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2023] [Accepted: 08/07/2023] [Indexed: 09/12/2023] Open
Abstract
Although thirteen years have passed since the installation of the first PET-MR system, these scanners still constitute a very small proportion of the total number of hybrid PET systems installed. This is in stark contrast to the rapid expansion of the PET-CT scanner, which quickly established its importance in patient diagnosis within a similar timeframe. One of the main hurdles is the development of an accurate, reproducible and easy-to-use method for attenuation correction. Quantitative discrepancies in PET images between the manufacturer-provided MR methods and the more established CT- or transmission-based attenuation correction methods have led the scientific community into a continuous effort to develop a robust and accurate alternative. These alternatives can be divided into four broad categories: (i) MR-based, (ii) emission-based, (iii) atlas-based and (iv) machine learning-based attenuation correction, the last of which is rapidly gaining momentum. The first is based on segmenting the MR images into various tissues and allocating a predefined attenuation coefficient to each tissue. Emission-based attenuation correction methods aim to utilise the PET emission data by simultaneously reconstructing the radioactivity distribution and the attenuation image. Atlas-based attenuation correction methods aim to predict a CT or transmission image given the MR image of a new patient, by using databases containing CT or transmission images from the general population. Finally, in machine learning methods, a model that can predict the required image given the acquired MR or non-attenuation-corrected PET image is developed by exploiting the underlying features of the images. Deep learning methods are the dominant approach in this category. Compared to more traditional machine learning, which uses structured data for building a model, deep learning makes direct use of the acquired images to identify underlying features.
This up-to-date review categorises the attenuation correction approaches in PET-MR and goes through the literature of each. The various approaches in each category are described and discussed. After exploring each category separately, a general overview is given of the current status and potential future approaches, along with a comparison of the four outlined categories.
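The segmentation-based idea behind the first (MR-based) category can be sketched in a few lines: assign each segmented tissue class a predefined linear attenuation coefficient at 511 keV to form a μ-map. This is a hedged illustration; the tissue classes and coefficient values below are approximate figures commonly quoted in the literature, not values taken from this review.

```python
import numpy as np

# Approximate linear attenuation coefficients at 511 keV (cm^-1);
# exact values vary across publications and vendor implementations.
MU_511_KEV = {
    "air": 0.0,
    "lung": 0.018,
    "fat": 0.086,
    "soft_tissue": 0.096,
    "bone": 0.13,
}

def mu_map_from_labels(label_map, label_names):
    """Build a mu-map by giving each tissue class its predefined coefficient."""
    mu = np.zeros(label_map.shape, dtype=float)
    for label, name in label_names.items():
        mu[label_map == label] = MU_511_KEV[name]
    return mu
```

The known weakness of this approach, which motivates the other three categories, is that a single coefficient per class cannot capture within-tissue variability, particularly in bone and lung.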
Affiliation(s)
- Georgios Krokos
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK.
- Jane MacKewn
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Joel Dunn
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Paul Marsden
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
22
Gao L, Yusufaly TI, Williamson CW, Mell LK. Optimized Atlas-Based Auto-Segmentation of Bony Structures from Whole-Body Computed Tomography. Pract Radiat Oncol 2023; 13:e442-e450. [PMID: 37030539 DOI: 10.1016/j.prro.2023.03.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2022] [Revised: 03/14/2023] [Accepted: 03/15/2023] [Indexed: 04/09/2023]
Abstract
PURPOSE To develop and test a method for fully automated segmentation of bony structures from whole-body computed tomography (CT) and evaluate its performance compared with manual segmentation. METHODS AND MATERIALS We developed a workflow for automatic whole-body bone segmentation using an atlas-based segmentation (ABS) method with a postprocessing module (ABSPP) in MIM MAESTRO software. Fifty-two CT scans comprised the training set used to build the atlas library, and 29 CT scans comprised the test set. To validate the workflow, we compared the Dice similarity coefficient (DSC), mean distance to agreement, and relative volume errors of ABSPP and of ABS with no postprocessing (ABSNPP), using manual segmentation as the reference (gold standard). RESULTS The ABSPP method resulted in significantly improved segmentation accuracy (DSC range, 0.85-0.98) compared with the ABSNPP method (DSC range, 0.55-0.87; P < .001). Mean distance to agreement results also indicated high agreement between ABSPP and manual reference delineations (range, 0.11-1.56 mm), which was significantly improved compared with ABSNPP (range, 1.00-2.34 mm) for the majority of tested bony structures. Relative volume errors were also significantly lower for ABSPP compared with ABSNPP for most bony structures. CONCLUSIONS We developed a fully automated MIM workflow for bony structure segmentation from whole-body CT, which exhibited high accuracy compared with manual delineation. The integrated postprocessing module significantly improved workflow performance.
Affiliation(s)
- Lei Gao
- Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California
- Tahir I Yusufaly
- Russell H. Morgan Department of Radiology and Radiologic Sciences, Johns Hopkins University, School of Medicine, Baltimore, Maryland
- Casey W Williamson
- Department of Radiation Medicine, Oregon Health Sciences University, Portland, Oregon
- Loren K Mell
- Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California.
23
Welgemoed C, Spezi E, Riddle P, Gooding MJ, Gujral D, McLauchlan R, Aboagye EO. Clinical evaluation of atlas-based auto-segmentation in breast and nodal radiotherapy. Br J Radiol 2023; 96:20230040. [PMID: 37493138 PMCID: PMC10461279 DOI: 10.1259/bjr.20230040] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2023] [Revised: 06/23/2023] [Accepted: 06/28/2023] [Indexed: 07/27/2023] Open
Abstract
OBJECTIVES Accurate contouring of anatomical structures allows for high-precision radiotherapy planning, targeting the dose at treatment volumes and avoiding organs at risk. Manual contouring is time-consuming with significant user variability, whereas auto-segmentation (AS) has proven efficiency benefits but requires editing before treatment planning. This study investigated whether atlas-based AS (ABAS) accuracy improves with template atlas group size and with character-specific atlas and test case selection. METHODS AND MATERIALS One clinician retrospectively contoured the breast, nodes, lung, heart, and brachial plexus on 100 CT scans, adhering to peer-reviewed guidelines. Atlases were clustered by group size, treatment position, and chest wall separation, and ASs were created with Mirada software. The similarity of ASs to the reference contours was described by the Jaccard similarity coefficient (JSC) and centroid distance variance (CDV). RESULTS Across group sizes, for all structures combined, the mean JSC was 0.6 (SD 0.3, p = .999); across atlas-specific groups, 0.6 (SD 0.3, p = 1.000). The correlation between JSC and structure volume was weak in both scenarios (adjusted R² -0.007 and 0.185). Mean CDV was similar across groups but varied by up to 1.2 cm for specific structures. CONCLUSIONS Character-specific atlas groups and test case selection did not improve accuracy outcomes. High-quality ASs were obtained from groups containing as few as ten atlases, subsequently simplifying the application of ABAS. CDV measures indicating auto-segmentation variations on the x, y, and z axes can be utilised to decide on the clinical relevance of variations and reduce AS editing.
ADVANCES IN KNOWLEDGE High-quality ABASs can be obtained from as few as ten template atlases. Atlas and test case selection do not improve AS accuracy. Unlike well-known quantitative similarity indices, volume displacement metrics provide information on the location of segmentation variations, helping assessment of the clinical relevance of variations and reducing clinician editing. Volume displacement metrics combined with the qualitative measure of clinician assessment could reduce user variability.
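The contrast drawn above between overlap indices and displacement metrics can be sketched as follows. This is a minimal NumPy illustration; the abstract does not give an exact formula for CDV, so the per-axis centroid displacement below is our assumption of the underlying idea, shown alongside the Jaccard coefficient.

```python
import numpy as np

def jaccard(a, b):
    """Jaccard similarity coefficient: |A∩B| / |A∪B| for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def centroid_displacement(a, b, spacing=(1.0, 1.0, 1.0)):
    """Per-axis displacement between mask centroids (axis order z, y, x).

    Unlike an overlap index, this localises *where* a segmentation deviates.
    """
    ca = np.argwhere(a).mean(axis=0) * np.asarray(spacing)
    cb = np.argwhere(b).mean(axis=0) * np.asarray(spacing)
    return cb - ca
```

A single JSC value says nothing about direction, whereas the signed per-axis displacement tells the clinician on which axis the auto-segmentation drifted, which is the clinical-relevance argument the abstract makes.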
Affiliation(s)
- Emiliano Spezi
- School of Engineering, Cardiff University, Cardiff, United Kingdom
- Pippa Riddle
- Radiotherapy Department, Imperial College Healthcare NHS Trust, Charing Cross Hospital, London, United Kingdom
- Eric O Aboagye
- Department of Surgery and Cancer, Imperial College London, Hammersmith Campus, London, United Kingdom
24
Amiri S, Abdolali F, Neshastehriz A, Nikoofar A, Farahani S, Firoozabadi LA, Askarabad ZA, Cheraghi S. A machine learning approach for prediction of auditory brain stem response in patients after head-and-neck radiation therapy. J Cancer Res Ther 2023; 19:1219-1225. [PMID: 37787286 DOI: 10.4103/jcrt.jcrt_2298_21] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/04/2023]
Abstract
Objective The present study aimed to assess machine learning (ML) models based on radiomic features for predicting ototoxicity, measured using auditory brain stem responses (ABRs), in patients receiving radiation therapy (RT) for head-and-neck cancers. Materials and Methods The ABR test was performed on 50 patients undergoing head-and-neck RT. Radiomic features were extracted from the brain stem in computed tomography images to generate a radiomic signature. Accuracy, sensitivity, specificity, area under the curve, and mean cross-validation score were used to evaluate six different ML models. Results Out of 50 patients, 21 participants experienced ototoxicity. Furthermore, 140 radiomic features were extracted from the segmented area. Among the six ML models, the random forest method, with 77% accuracy, provided the best result. Conclusion Using this ML approach, we showed the relatively high predictive power of radiomic features for radiation-induced ototoxicity. To better predict the outcomes, future studies on a larger number of participants are recommended.
Affiliation(s)
- Sepideh Amiri
- Department of Computer Sciences, University of Copenhagen, Copenhagen, Denmark
- Fatemeh Abdolali
- Department of Radiology and Diagnostic Imaging, Faculty of Medicine and Dentistry, Alberta University, Edmonton, AB, Canada
- Ali Neshastehriz
- Radiation Biology Research Center; Department of Radiation Sciences, Faculty of Allied Medicine, Iran University of Medical Sciences, Tehran, Iran
- Alireza Nikoofar
- Department of Radiation Oncology, Faculty of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Saeid Farahani
- Department of Audiology, School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran
- Leila Alipour Firoozabadi
- Department of Radiation Sciences, Faculty of Allied Medicine, Iran University of Medical Sciences, Tehran, Iran
- Zahra Alaei Askarabad
- Department of Radiation Sciences, Faculty of Allied Medicine, Iran University of Medical Sciences, Tehran, Iran
- Susan Cheraghi
- Radiation Biology Research Center; Department of Radiation Sciences, Faculty of Allied Medicine, Iran University of Medical Sciences, Tehran, Iran
25
Sha X, Wang H, Sha H, Xie L, Zhou Q, Zhang W, Yin Y. Clinical target volume and organs at risk segmentation for rectal cancer radiotherapy using the Flex U-Net network. Front Oncol 2023; 13:1172424. [PMID: 37324028 PMCID: PMC10266488 DOI: 10.3389/fonc.2023.1172424] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2023] [Accepted: 05/05/2023] [Indexed: 06/17/2023] Open
Abstract
Purpose/Objectives The aim of this study was to improve the accuracy of clinical target volume (CTV) and organs at risk (OARs) segmentation for rectal cancer preoperative radiotherapy. Materials/Methods Computed tomography (CT) scans from 265 rectal cancer patients treated at our institution were collected to train and validate automatic contouring models. The regions of the CTV and OARs were delineated by experienced radiologists as the ground truth. We improved the conventional U-Net and proposed Flex U-Net, which uses a register model to correct the noise caused by manual annotation, thus refining the performance of the automatic segmentation model. We then compared its performance with that of U-Net and V-Net. The Dice similarity coefficient (DSC), Hausdorff distance (HD), and average symmetric surface distance (ASSD) were calculated for quantitative evaluation. With a Wilcoxon signed-rank test, we found that the differences between our method and the baseline were statistically significant (P < 0.05). Results Our proposed framework achieved DSC values of 0.817 ± 0.071, 0.930 ± 0.076, 0.927 ± 0.03, and 0.925 ± 0.03 for the CTV, the bladder, Femur head-L and Femur head-R, respectively. The corresponding baseline results were 0.803 ± 0.082, 0.917 ± 0.105, 0.923 ± 0.03 and 0.917 ± 0.03. Conclusion Our proposed Flex U-Net enables satisfactory CTV and OAR segmentation for rectal cancer and yields superior performance compared to conventional methods. This method provides an automatic, fast and consistent solution for CTV and OAR segmentation and exhibits potential to be widely applied in radiation therapy planning for a variety of cancers.
Affiliation(s)
- Xue Sha
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Hui Wang
- Department of Radiation Oncology, Qingdao Central Hospital, Qingdao, Shandong, China
- Hui Sha
- Hunan Cancer Hospital, Xiangya School of Medicine, Central South University, Changsha, Hunan, China
- Lu Xie
- Manteia Technologies Co., Ltd, Xiamen, Fujian, China
- Qichao Zhou
- Manteia Technologies Co., Ltd, Xiamen, Fujian, China
- Wei Zhang
- Manteia Technologies Co., Ltd, Xiamen, Fujian, China
- Yong Yin
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
26
Haueise T, Schick F, Stefan N, Schlett CL, Weiss JB, Nattenmüller J, Göbel-Guéniot K, Norajitra T, Nonnenmacher T, Kauczor HU, Maier-Hein KH, Niendorf T, Pischon T, Jöckel KH, Umutlu L, Peters A, Rospleszcz S, Kröncke T, Hosten N, Völzke H, Krist L, Willich SN, Bamberg F, Machann J. Analysis of volume and topography of adipose tissue in the trunk: Results of MRI of 11,141 participants in the German National Cohort. SCIENCE ADVANCES 2023; 9:eadd0433. [PMID: 37172093 PMCID: PMC10181183 DOI: 10.1126/sciadv.add0433] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/14/2023]
Abstract
This research addresses the assessment of adipose tissue (AT) and the spatial distribution of visceral (VAT) and subcutaneous (SAT) adipose tissue in the trunk from standardized magnetic resonance imaging at 3 T, thereby demonstrating the feasibility of deep learning (DL)-based image segmentation in a large population-based cohort in Germany (five sites). The volume and distribution of AT play an essential role in the pathogenesis of insulin resistance, a risk factor for developing metabolic and cardiovascular diseases. Cross-validated training of the DL segmentation model led to a mean Dice similarity coefficient of >0.94, corresponding to a mean absolute volume deviation of about 22 ml. SAT is significantly increased in women compared to men, whereas VAT is increased in men. Spatial distribution shows age- and body mass index-related displacements. DL-based image segmentation provides robust and fast quantification of AT (≈15 s per dataset versus 3 to 4 hours for manual processing) and assessment of its spatial distribution from magnetic resonance images in large cohort studies.
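The ~22 ml agreement quoted above is a volume-level quantity; converting a binary segmentation into a volume is a straightforward voxel count scaled by voxel size. A minimal sketch (the function name and the voxel size shown are illustrative assumptions, not the study's actual acquisition parameters):

```python
import numpy as np

def tissue_volume_ml(mask, voxel_size_mm=(2.0, 2.0, 2.0)):
    """Volume of a binary segmentation mask in millilitres (1 ml = 1000 mm^3)."""
    voxel_mm3 = float(np.prod(voxel_size_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0
```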
Affiliation(s)
- Tobias Haueise
- Institute for Diabetes Research and Metabolic Diseases, Helmholtz Center Munich at the University of Tuebingen, Tuebingen, Germany
- German Center for Diabetes Research (DZD), Tuebingen, Germany
- Section on Experimental Radiology, Department of Diagnostic and Interventional Radiology, University Hospital Tuebingen, Tuebingen, Germany
| | - Fritz Schick
- Institute for Diabetes Research and Metabolic Diseases, Helmholtz Center Munich at the University of Tuebingen, Tuebingen, Germany
- German Center for Diabetes Research (DZD), Tuebingen, Germany
- Section on Experimental Radiology, Department of Diagnostic and Interventional Radiology, University Hospital Tuebingen, Tuebingen, Germany
| | - Norbert Stefan
- Institute for Diabetes Research and Metabolic Diseases, Helmholtz Center Munich at the University of Tuebingen, Tuebingen, Germany
- German Center for Diabetes Research (DZD), Tuebingen, Germany
- Department of Internal Medicine, Division of Diabetology, Endocrinology and Nephrology, Eberhard-Karls University Tuebingen, Tuebingen, Germany
| | - Christopher L Schlett
- Department of Diagnostic and Interventional Radiology, Medical Center-University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
| | - Jakob B Weiss
- Department of Diagnostic and Interventional Radiology, Medical Center-University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
| | - Johanna Nattenmüller
- Department of Diagnostic and Interventional Radiology, Medical Center-University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Department of Diagnostic and Interventional Radiology, University Hospital Heidelberg, Heidelberg, Germany
| | - Katharina Göbel-Guéniot
- Department of Diagnostic and Interventional Radiology, Medical Center-University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
| | - Tobias Norajitra
- Division of Medical and Biological Informatics, German Cancer Research Center, Heidelberg, Germany
| | - Tobias Nonnenmacher
- Department of Diagnostic and Interventional Radiology, University Hospital Heidelberg, Heidelberg, Germany
| | - Hans-Ulrich Kauczor
- Department of Diagnostic and Interventional Radiology, University Hospital Heidelberg, Heidelberg, Germany
| | - Klaus H Maier-Hein
- Division of Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
| | - Thoralf Niendorf
- Berlin Ultrahigh Field Facility (B.U.F.F.), Max-Delbrueck Center for Molecular Medicine in the Helmholtz Association, Berlin, Germany
- Experimental and Clinical Research Center, A Joint Cooperation Between the Charité Medical Faculty and the Max-Delbrueck Center for Molecular Medicine in the Helmholtz Association, Berlin, Germany
| | - Tobias Pischon
- Max-Delbrueck-Center for Molecular Medicine in the Helmholtz Association (MDC), Molecular Epidemiology Research Group, Berlin, Germany
- Max-Delbrueck-Center for Molecular Medicine in the Helmholtz Association (MDC), Biobank Technology Platform, Berlin, Germany
- Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Core Facility Biobank, Berlin, Germany
- Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
| | - Karl-Heinz Jöckel
- Institute for Medical Informatics, Biometry and Epidemiology (IMIBE), University Hospital Essen, Essen, Germany
| | - Lale Umutlu
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
| | - Annette Peters
- Department of Epidemiology, Institute for Medical Information Processing, Biometry and Epidemiology, Ludwig-Maximilians-Universität München, Munich, Germany
- Institute of Epidemiology, Helmholtz Center Munich, German Research Center for Environmental Health, Neuherberg, Germany
- German Center for Cardiovascular Research (DZHK), Partner Site Munich Heart Alliance, Munich, Germany
- German Center for Diabetes Research (DZD), Partner Site Neuherberg, Neuherberg, Germany
| | - Susanne Rospleszcz
- Department of Epidemiology, Institute for Medical Information Processing, Biometry and Epidemiology, Ludwig-Maximilians-Universität München, Munich, Germany
- Institute of Epidemiology, Helmholtz Center Munich, German Research Center for Environmental Health, Neuherberg, Germany
- German Center for Cardiovascular Research (DZHK), Partner Site Munich Heart Alliance, Munich, Germany
| | - Thomas Kröncke
- Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Faculty of Medicine, University of Augsburg, Augsburg, Germany
- Centre for Advanced Analytics and Predictive Sciences (CAAPS), University Augsburg, Augsburg, Germany
| | - Norbert Hosten
- Institute of Diagnostic Radiology and Neuroradiology, University Medicine Greifswald, Greifswald, Germany
| | - Henry Völzke
- Institute for Community Medicine, University Medicine Greifswald, Greifswald, Germany
- German Centre for Cardiovascular Research (DZHK), Partner Site Greifswald, Greifswald, Germany
| | - Lilian Krist
- Institute of Social Medicine, Epidemiology and Health Economics, Charité-Universitätsmedizin Berlin, Berlin, Germany
| | - Stefan N Willich
- Institute of Social Medicine, Epidemiology and Health Economics, Charité-Universitätsmedizin Berlin, Berlin, Germany
| | - Fabian Bamberg
- Department of Diagnostic and Interventional Radiology, Medical Center-University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
| | - Juergen Machann
- Institute for Diabetes Research and Metabolic Diseases, Helmholtz Center Munich at the University of Tuebingen, Tuebingen, Germany
- German Center for Diabetes Research (DZD), Tuebingen, Germany
- Section on Experimental Radiology, Department of Diagnostic and Interventional Radiology, University Hospital Tuebingen, Tuebingen, Germany
27
Gao H, Lyu M, Zhao X, Yang F, Bai X. Contour-aware network with class-wise convolutions for 3D abdominal multi-organ segmentation. Med Image Anal 2023; 87:102838. [PMID: 37196536 DOI: 10.1016/j.media.2023.102838] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2022] [Revised: 03/21/2023] [Accepted: 05/05/2023] [Indexed: 05/19/2023]
Abstract
Accurate delineation of multiple organs is a critical step in various medical procedures, yet it can be operator-dependent and time-consuming. Existing organ segmentation methods, mainly inspired by natural image analysis techniques, may not fully exploit the traits of the multi-organ segmentation task and cannot accurately segment organs with various shapes and sizes simultaneously. In this work, the characteristics of multi-organ segmentation are considered: the global count, position, and scale of organs are generally predictable, while their local shape and appearance are volatile. Thus, we supplement the region segmentation backbone with a contour localization task to increase certainty along delicate boundaries. Meanwhile, each organ has exclusive anatomical traits, which motivates us to deal with class variability using class-wise convolutions that highlight organ-specific features and suppress irrelevant responses at different fields of view. To validate our method with adequate numbers of patients and organs, we constructed a multi-center dataset containing 110 3D CT scans with 24,528 axial slices, and provided voxel-level manual segmentations of 14 abdominal organs, adding up to 1,532 3D structures in total. Extensive ablation and visualization studies on it validate the effectiveness of the proposed method. Quantitative analysis shows that we achieve state-of-the-art performance for most abdominal organs, obtaining a 95% Hausdorff Distance of 3.63 mm and a Dice Similarity Coefficient of 83.32% on average.
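The two summary metrics quoted above can be reproduced from a pair of binary masks. A minimal sketch follows; it uses a voxel-based approximation of the 95% Hausdorff distance via Euclidean distance transforms rather than the authors' exact surface-based implementation, and the function names are illustrative:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice_coefficient(pred, ref):
    """Dice Similarity Coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum())

def hd95(pred, ref, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance between two binary masks.

    Voxel-based approximation: distances are measured with Euclidean
    distance transforms of each mask, pooled in both directions, and
    summarized at the 95th percentile to suppress outlier voxels.
    """
    pred, ref = pred.astype(bool), ref.astype(bool)
    dist_to_ref = distance_transform_edt(~ref, sampling=spacing)    # mm to nearest ref voxel
    dist_to_pred = distance_transform_edt(~pred, sampling=spacing)  # mm to nearest pred voxel
    pooled = np.hstack([dist_to_ref[pred], dist_to_pred[ref]])
    return np.percentile(pooled, 95)
```

The 95th percentile (rather than the maximum) is what makes the reported HD robust to a handful of stray voxels.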
Affiliation(s)
- Hongjian Gao
- Image Processing Center, Beihang University, Beijing 102206, China
| | - Mengyao Lyu
- School of Software, Tsinghua University, Beijing 100084, China; Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
| | - Xinyue Zhao
- School of Medical Imaging, Xuzhou Medical University, Xuzhou 221004, China
| | - Fan Yang
- Image Processing Center, Beihang University, Beijing 102206, China
| | - Xiangzhi Bai
- Image Processing Center, Beihang University, Beijing 102206, China; State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China; Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China.
28
Ling S, Blackburn BJ, Jenkins MW, Watanabe M, Ford SM, Lapierre-Landry M, Rollins AM. Segmentation of beating embryonic heart structures from 4-D OCT images using deep learning. BIOMEDICAL OPTICS EXPRESS 2023; 14:1945-1958. [PMID: 37206115 PMCID: PMC10191668 DOI: 10.1364/boe.481657] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Revised: 01/29/2023] [Accepted: 02/20/2023] [Indexed: 05/21/2023]
Abstract
Optical coherence tomography (OCT) has been used to investigate heart development because of its capability to image both the structure and function of beating embryonic hearts. Cardiac structure segmentation is a prerequisite for the quantification of embryonic heart motion and function using OCT. Since manual segmentation is time-consuming and labor-intensive, an automatic method is needed to facilitate high-throughput studies. The purpose of this study is to develop an image-processing pipeline to facilitate the segmentation of beating embryonic heart structures from a 4-D OCT dataset. Sequential OCT images were obtained at multiple planes of a beating quail embryonic heart and reassembled into a 4-D dataset using image-based retrospective gating. Multiple image volumes at different time points were selected as key-volumes, and their cardiac structures, including myocardium, cardiac jelly, and lumen, were manually labeled. Registration-based data augmentation was used to synthesize additional labeled image volumes by learning transformations between key-volumes and other unlabeled volumes. The synthesized labeled images were then used to train a fully convolutional network (U-Net) for heart structure segmentation. The proposed deep learning-based pipeline achieved high segmentation accuracy with only two labeled image volumes and reduced the time cost of segmenting one 4-D OCT dataset from a week to two hours. Using this method, one could carry out cohort studies that quantify complex cardiac motion and function in developing hearts.
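At its core, the registration-based augmentation described above reduces to resampling a labeled key-volume and its label map through the same displacement field: linear interpolation for intensities, nearest-neighbor for labels so that label values stay discrete. A hedged sketch, with `warp_pair` as a hypothetical helper rather than the authors' code:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_pair(image, labels, disp):
    """Warp an image (linear interpolation) and its label map (nearest-neighbor)
    through the same per-axis displacement field.

    disp: list of arrays, one per axis, each the same shape as `image`,
    giving the sampling offset in voxels along that axis.
    """
    # identity sampling grid, one coordinate array per axis
    coords = np.meshgrid(*[np.arange(s) for s in image.shape], indexing="ij")
    sample = [c + d for c, d in zip(coords, disp)]
    warped_img = map_coordinates(image, sample, order=1, mode="nearest")
    warped_lab = map_coordinates(labels, sample, order=0, mode="nearest")
    return warped_img, warped_lab
```

Applying a transform learned between a labeled key-volume and an unlabeled volume in this way yields a new synthetic labeled volume for training.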
Affiliation(s)
- Shan Ling
- Department of Biomedical Engineering, School of Engineering and School of Medicine, Case Western Reserve University, Cleveland, Ohio, USA
| | - Brecken J. Blackburn
- Department of Biomedical Engineering, School of Engineering and School of Medicine, Case Western Reserve University, Cleveland, Ohio, USA
| | - Michael W. Jenkins
- Department of Biomedical Engineering, School of Engineering and School of Medicine, Case Western Reserve University, Cleveland, Ohio, USA
- Department of Pediatrics, School of Medicine, Case Western Reserve University, Cleveland, Ohio, USA
| | - Michiko Watanabe
- Department of Pediatrics, School of Medicine, Case Western Reserve University, Cleveland, Ohio, USA
- Division of Pediatric Cardiology, The Congenital Heart Collaborative, Rainbow Babies and Children’s Hospital, Cleveland, Ohio, USA
| | - Stephanie M. Ford
- Department of Pediatrics, School of Medicine, Case Western Reserve University, Cleveland, Ohio, USA
- Division of Pediatric Cardiology, The Congenital Heart Collaborative, Rainbow Babies and Children’s Hospital, Cleveland, Ohio, USA
- Division of Neonatology, Rainbow Babies and Children’s Hospital, Cleveland, Ohio, USA
| | - Maryse Lapierre-Landry
- Department of Biomedical Engineering, School of Engineering and School of Medicine, Case Western Reserve University, Cleveland, Ohio, USA
| | - Andrew M. Rollins
- Department of Biomedical Engineering, School of Engineering and School of Medicine, Case Western Reserve University, Cleveland, Ohio, USA
29
Iglesias JE. A ready-to-use machine learning tool for symmetric multi-modality registration of brain MRI. Sci Rep 2023; 13:6657. [PMID: 37095168 PMCID: PMC10126156 DOI: 10.1038/s41598-023-33781-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2023] [Accepted: 04/19/2023] [Indexed: 04/26/2023] Open
Abstract
Volumetric registration of brain MRI is routinely used in human neuroimaging, e.g., to align different MRI modalities, to measure change in longitudinal analysis, to map an individual to a template, or in registration-based segmentation. Classical registration techniques based on numerical optimization have been very successful in this domain, and are implemented in widespread software suites like ANTs, Elastix, NiftyReg, or DARTEL. Over the last 7-8 years, learning-based techniques have emerged, which have a number of advantages like high computational efficiency, potential for higher accuracy, easy integration of supervision, and the ability to be part of meta-architectures. However, their adoption in neuroimaging pipelines has so far been almost nonexistent. Reasons include: lack of robustness to changes in MRI modality and resolution; lack of robust affine registration modules; lack of (guaranteed) symmetry; and, at a more practical level, the requirement of deep learning expertise that may be lacking at neuroimaging research sites. Here, we present EasyReg, an open-source, learning-based registration tool that can be easily used from the command line without any deep learning expertise or specific hardware. EasyReg combines the features of classical registration tools, the capabilities of modern deep learning methods, and the robustness to changes in MRI modality and resolution provided by our recent work in domain randomization. As a result, EasyReg is: fast; symmetric; diffeomorphic (and thus invertible); agnostic to MRI modality and resolution; compatible with affine and nonlinear registration; and requires no preprocessing or parameter tuning. We present results on challenging registration tasks, showing that EasyReg is as accurate as classical methods when registering 1 mm isotropic scans within MRI modality, but much more accurate across modalities and resolutions.
EasyReg is publicly available as part of FreeSurfer; see https://surfer.nmr.mgh.harvard.edu/fswiki/EasyReg .
Affiliation(s)
- Juan Eugenio Iglesias
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, 02129, USA.
- Department of Medical Physics and Biomedical Engineering, University College London, London, WC1V 6LJ, UK.
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Boston, 02139, USA.
30
Finnegan RN, Chin V, Chlap P, Haidar A, Otton J, Dowling J, Thwaites DI, Vinod SK, Delaney GP, Holloway L. Open-source, fully-automated hybrid cardiac substructure segmentation: development and optimisation. Phys Eng Sci Med 2023; 46:377-393. [PMID: 36780065 PMCID: PMC10030448 DOI: 10.1007/s13246-023-01231-w] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2022] [Accepted: 01/30/2023] [Indexed: 02/14/2023]
Abstract
Radiotherapy for thoracic and breast tumours is associated with a range of cardiotoxicities. Emerging evidence suggests cardiac substructure doses may be more predictive of specific outcomes; however, the quantitative data necessary to develop clinical planning constraints are lacking. Retrospective analysis of patient data is required, which relies on accurate segmentation of cardiac substructures. In this study, a novel model was designed to deliver reliable, accurate, and anatomically consistent segmentation of 18 cardiac substructures on computed tomography (CT) scans. Thirty manually contoured CT scans were included. The proposed multi-stage method leverages deep learning (DL), multi-atlas mapping, and geometric modelling to automatically segment the whole heart, cardiac chambers, great vessels, heart valves, coronary arteries, and conduction nodes. Segmentation performance was evaluated using the Dice similarity coefficient (DSC), mean distance to agreement (MDA), Hausdorff distance (HD), and volume ratio. Performance was reliable, with no errors observed and acceptable variation in accuracy between cases, including in challenging cases with imaging artefacts and atypical patient anatomy. The median DSC range was 0.81-0.93 for the whole heart and cardiac chambers, 0.43-0.76 for great vessels and conduction nodes, and 0.22-0.53 for heart valves. For all structures the median MDA was below 6 mm, the median HD ranged from 7.7 to 19.7 mm, and the median volume ratio was close to one (0.95-1.49) for all structures except the left main coronary artery (2.07). The fully automatic algorithm takes between 9 and 23 min per case. The proposed fully-automatic method accurately delineates cardiac substructures on radiotherapy planning CT scans. Robust and anatomically consistent segmentations, particularly for smaller structures, represent a major advantage of the proposed segmentation approach.
The open-source software will facilitate more precise evaluation of cardiac doses and risks from available clinical datasets.
Affiliation(s)
- Robert N Finnegan
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, NSW, Australia.
- Institute of Medical Physics, School of Physics, University of Sydney, Sydney, NSW, Australia.
- Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia.
| | - Vicky Chin
- Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia
- Liverpool Cancer Therapy Centre, South Western Sydney Local Health District, Liverpool, NSW, Australia
- South Western Sydney Clinical School, University of New South Wales, Sydney, NSW, Australia
| | - Phillip Chlap
- Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia
- Liverpool Cancer Therapy Centre, South Western Sydney Local Health District, Liverpool, NSW, Australia
- South Western Sydney Clinical School, University of New South Wales, Sydney, NSW, Australia
| | - Ali Haidar
- Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia
- Liverpool Cancer Therapy Centre, South Western Sydney Local Health District, Liverpool, NSW, Australia
- South Western Sydney Clinical School, University of New South Wales, Sydney, NSW, Australia
| | - James Otton
- South Western Sydney Clinical School, University of New South Wales, Sydney, NSW, Australia
| | - Jason Dowling
- Institute of Medical Physics, School of Physics, University of Sydney, Sydney, NSW, Australia
- CSIRO Health and Biosecurity, The Australian e-Health and Research Centre, Herston, QLD, Australia
- School of Mathematical and Physical Sciences, University of Newcastle, Newcastle, NSW, Australia
| | - David I Thwaites
- Institute of Medical Physics, School of Physics, University of Sydney, Sydney, NSW, Australia
- Radiotherapy Research Group, Leeds Institute of Medical Research, St James's Hospital and University of Leeds, Leeds, UK
| | - Shalini K Vinod
- Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia
- Liverpool Cancer Therapy Centre, South Western Sydney Local Health District, Liverpool, NSW, Australia
- South Western Sydney Clinical School, University of New South Wales, Sydney, NSW, Australia
| | - Geoff P Delaney
- Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia
- Liverpool Cancer Therapy Centre, South Western Sydney Local Health District, Liverpool, NSW, Australia
- South Western Sydney Clinical School, University of New South Wales, Sydney, NSW, Australia
| | - Lois Holloway
- Institute of Medical Physics, School of Physics, University of Sydney, Sydney, NSW, Australia
- Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia
- Liverpool Cancer Therapy Centre, South Western Sydney Local Health District, Liverpool, NSW, Australia
- South Western Sydney Clinical School, University of New South Wales, Sydney, NSW, Australia
- Centre for Medical Radiation Physics, University of Wollongong, Wollongong, NSW, Australia
31
A Soft Label Method for Medical Image Segmentation with Multirater Annotations. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2023; 2023:1883597. [PMID: 36851939 PMCID: PMC9966563 DOI: 10.1155/2023/1883597] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Revised: 10/04/2022] [Accepted: 10/06/2022] [Indexed: 02/20/2023]
Abstract
In medical image analysis, collecting multiple annotations from different clinical raters is a typical practice to mitigate possible diagnostic errors. For such multirater label learning problems, in addition to majority voting, it is common practice to use soft labels, in the form of full-probability distributions obtained by averaging the raters, as ground truth to train the model, which benefits from the uncertainty contained in soft labels. However, the potential information contained in soft labels is rarely studied, and it may be the key to improving the performance of medical image segmentation with multirater annotations. In this work, we aim to improve soft label methods by leveraging interpretable information from multiple raters. Considering that mis-segmentation occurs in areas with weak annotation supervision and high image difficulty, we propose to reduce the reliance on local uncertain soft labels and increase the focus on image features. Therefore, we introduce local self-ensembling learning with consistency regularization, forcing the model to concentrate more on features rather than annotations, especially in regions with high uncertainty measured by the pixelwise interclass variance. Furthermore, we utilize a label smoothing technique to flatten each rater's annotation, alleviating overconfidence at structural edges in annotations. Without introducing additional parameters, our method improves the accuracy of the soft label baseline by 4.2% and 2.7% on a synthetic dataset and a fundus dataset, respectively. In addition, quantitative comparisons show that our method consistently outperforms existing multirater strategies as well as state-of-the-art methods. This work provides a simple yet effective solution for the widespread multirater label segmentation problems in clinical diagnosis.
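The two ingredients named in the abstract, averaging label-smoothed rater annotations into a soft ground truth and measuring pixelwise interclass variance to locate uncertain regions, can be sketched as follows. This is an illustration of the general recipe with assumed function names, not the authors' implementation:

```python
import numpy as np

def soft_label(rater_masks, eps=0.1):
    """Build a soft ground truth from several binary rater masks.

    Each rater's hard {0, 1} annotation is first flattened toward 0.5 with
    label smoothing (reducing overconfidence at structure edges), then the
    smoothed maps are averaged into one probability map.
    """
    smoothed = []
    for m in rater_masks:
        p = m.astype(float)
        smoothed.append(p * (1.0 - eps) + eps * 0.5)
    return np.mean(smoothed, axis=0)

def interclass_variance(rater_masks):
    """Pixelwise variance across raters; high values flag uncertain regions."""
    stack = np.stack([m.astype(float) for m in rater_masks])
    return stack.var(axis=0)
```

Pixels where all raters agree keep a confident (near 0 or 1) soft label and zero variance; pixels where raters disagree land near 0.5 with high variance, which is exactly where the proposed method shifts weight from annotations to image features.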
32
Bezanson S, Nichols ES, Duerden EG. Postnatal maternal distress, infant subcortical brain macrostructure and emotional regulation. Psychiatry Res Neuroimaging 2023; 328:111577. [PMID: 36512951 DOI: 10.1016/j.pscychresns.2022.111577] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/18/2022] [Revised: 09/16/2022] [Accepted: 11/22/2022] [Indexed: 12/12/2022]
Abstract
BACKGROUND Maternal distress is associated with an increased risk for adverse emotional development in infants, including difficulties with emotion regulation. Prenatal maternal distress has been associated with alterations in infant brain development. However, less is known about these associations with postnatal maternal distress, despite this being an important modifiable risk factor that can promote healthy brain development and emotional outcomes in infants. METHODS & RESULTS Infants underwent magnetic resonance imaging (MRI) and mothers completed standardized questionnaires concerning their levels of perceived distress 2-5 months postpartum. Infant emotion regulation was assessed at 8-11 months via maternal report. When examining the associations between maternal distress and infant brain macrostructure, we found that maternal anxiety was associated with infant right pallidum volumes. Increased display of negative emotions at 8-11 months of age was associated with smaller hippocampal volumes, and this association was stronger in girls than in boys. CONCLUSION Findings suggest that postnatal maternal distress may be associated with early infant brain development and emphasize the importance of maternal mental health, supporting previous work. Furthermore, macrostructural properties of infant subcortical structures may be further investigated as potential biomarkers to identify infants at risk of adverse emotional outcomes.
Affiliation(s)
- Samantha Bezanson
- Neuroscience Program, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada
| | - Emily S Nichols
- Applied Psychology, Faculty of Education, Western University, London, Ontario, Canada; Western Institute for Neuroscience, Western University, London, Ontario, Canada
| | - Emma G Duerden
- Neuroscience Program, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada; Applied Psychology, Faculty of Education, Western University, London, Ontario, Canada; Western Institute for Neuroscience, Western University, London, Ontario, Canada; Department of Psychiatry, Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada; Children's Health Research Institute, Western University, London, Ontario, Canada.
33
Walluscheck S, Canalini L, Strohm H, Diekmann S, Klein J, Heldmann S. MR-CT multi-atlas registration guided by fully automated brain structure segmentation with CNNs. Int J Comput Assist Radiol Surg 2023; 18:483-491. [PMID: 36334164 PMCID: PMC9939492 DOI: 10.1007/s11548-022-02786-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Accepted: 10/25/2022] [Indexed: 11/08/2022]
Abstract
PURPOSE Computed tomography (CT) is widely used to identify anomalies in brain tissues because their localization is important for diagnosis and therapy planning. Due to the insufficient soft tissue contrast of CT, dividing the brain into anatomically meaningful regions is challenging and is commonly done with magnetic resonance imaging (MRI). METHODS We propose a multi-atlas registration approach to propagate anatomical information from a standard MRI brain atlas to CT scans. This translation will enable detailed automated reporting of brain CT exams. We utilize masks of the lateral ventricles and the brain volume of CT images as adjuvant input to guide the registration process. Besides using manual annotations to test the registration in a first step, we then verify that convolutional neural networks (CNNs) are a reliable solution for automatically segmenting structures to enhance the registration process. RESULTS The registration method obtains mean Dice values of 0.92 and 0.99 in brain ventricles and parenchyma on 22 healthy test cases when using manually segmented structures as guidance. When guiding with automatically segmented structures, the mean Dice values are 0.87 and 0.98, respectively. CONCLUSION Our registration approach is a fully automated solution for registering MRI atlas images to CT scans and thus obtaining detailed anatomical information. The proposed CNN segmentation method can be used to obtain masks of the ventricles and brain volume to guide the registration.
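Once atlas labels have been propagated to the target CT, multi-atlas pipelines typically fuse them per voxel; majority voting is the simplest rule. The abstract does not state which fusion rule the authors use, so the following is a generic sketch of that common companion step:

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse label maps propagated from several atlases by per-voxel majority vote.

    label_maps: sequence of integer label arrays, all the same shape,
    one per registered atlas. Returns the most frequent label at each voxel
    (ties resolved toward the smaller label value).
    """
    stack = np.stack(label_maps)                      # (n_atlases, *volume_shape)
    candidate_labels = np.unique(stack)
    votes = np.stack([(stack == lab).sum(axis=0) for lab in candidate_labels])
    return candidate_labels[votes.argmax(axis=0)]
```

More elaborate fusion schemes weight each atlas's vote by local image similarity, but the voting skeleton stays the same.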
Affiliation(s)
- Sina Walluscheck
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany.
| | - Luca Canalini
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
| | - Hannah Strohm
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
| | - Susanne Diekmann
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
| | - Jan Klein
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
| | - Stefan Heldmann
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
34
Bustamante M, Viola F, Engvall J, Carlhäll C, Ebbers T. Automatic Time-Resolved Cardiovascular Segmentation of 4D Flow MRI Using Deep Learning. J Magn Reson Imaging 2023; 57:191-203. [PMID: 35506525 PMCID: PMC10946960 DOI: 10.1002/jmri.28221] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2022] [Revised: 04/14/2022] [Accepted: 04/15/2022] [Indexed: 02/03/2023] Open
Abstract
BACKGROUND Segmenting the whole heart over the cardiac cycle in 4D flow MRI is a challenging and time-consuming process, as there is considerable motion and limited contrast between blood and tissue. PURPOSE To develop and evaluate a deep learning-based segmentation method to automatically segment the cardiac chambers and great thoracic vessels from 4D flow MRI. STUDY TYPE Retrospective. SUBJECTS A total of 205 subjects, including 40 healthy volunteers and 165 patients with a variety of cardiac disorders, were included. Data were randomly divided into training (n = 144), validation (n = 20), and testing (n = 41) sets. FIELD STRENGTH/SEQUENCE A 3 T/time-resolved velocity-encoded 3D gradient echo sequence (4D flow MRI). ASSESSMENT A 3D neural network based on the U-net architecture was trained to segment the four cardiac chambers, aorta, and pulmonary artery. The segmentations generated were compared to manually corrected atlas-based segmentations. End-diastolic (ED) and end-systolic (ES) volumes of the four cardiac chambers were calculated for both segmentations. STATISTICAL TESTS Dice score, Hausdorff distance, average surface distance, sensitivity, precision, and miss rate were used to measure segmentation accuracy. Bland-Altman analysis was used to evaluate agreement between volumetric parameters. RESULTS The following evaluation metrics were computed: mean Dice score (0.908 ± 0.023) (mean ± SD), Hausdorff distance (1.253 ± 0.293 mm), average surface distance (0.466 ± 0.136 mm), sensitivity (0.907 ± 0.032), precision (0.913 ± 0.028), and miss rate (0.093 ± 0.032). Bland-Altman analyses showed good agreement between volumetric parameters for all chambers. Limits of agreement, as a percentage of mean chamber volume (LoA%), were: left ventricular 9.3%, 13.5%; left atrial 12.4%, 16.9%; right ventricular 9.9%, 15.6%; and right atrial 18.7%, 14.4%, for ED and ES, respectively.
DATA CONCLUSION The addition of this technique to the 4D flow MRI assessment pipeline could expedite and improve the utility of this type of acquisition in the clinical setting. EVIDENCE LEVEL: 4. TECHNICAL EFFICACY: Stage 1.
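The Bland-Altman analysis reported above reduces to the bias (mean of the paired differences) and the 95% limits of agreement, bias ± 1.96 SD of those differences. A minimal sketch, illustrative rather than the study's analysis code:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods.

    a, b: paired measurements (e.g. chamber volumes from automatic vs.
    manually corrected segmentations). Returns (bias, lower, upper),
    where the limits are bias +/- 1.96 * SD of the differences.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Expressing the resulting limits as a percentage of the mean chamber volume yields the LoA% figures quoted in the abstract.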
Affiliation(s)
- Mariana Bustamante
- Division of Diagnostics and Specialist Medicine, Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
| | - Federica Viola
- Division of Diagnostics and Specialist Medicine, Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
| | - Jan Engvall
- Division of Diagnostics and Specialist Medicine, Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Department of Clinical Physiology in Linköping, Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
| | - Carl-Johan Carlhäll
- Division of Diagnostics and Specialist Medicine, Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
- Department of Clinical Physiology in Linköping, Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
| | - Tino Ebbers
- Division of Diagnostics and Specialist Medicine, Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
35
VilasBoas-Ribeiro I, Franckena M, van Rhoon GC, Hernández-Tamames JA, Paulides MM. Using MRI to measure position and anatomy changes and assess their impact on the accuracy of hyperthermia treatment planning for cervical cancer. Int J Hyperthermia 2022; 40:2151648. [PMID: 36535922 DOI: 10.1080/02656736.2022.2151648] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022] Open
Abstract
PURPOSE We studied the differences between planning and treatment position, their impact on the accuracy of hyperthermia treatment planning (HTP) predictions, and the relevance of including the true treatment anatomy and position in HTP based on magnetic resonance (MR) images. MATERIALS AND METHODS All volunteers were scanned with an MR-compatible hyperthermia device, including a filled waterbolus, to replicate the treatment setup. In the planning setup, the volunteers were scanned without the device to reproduce the imaging used in current HTP. First, we used rigid registration to investigate the patient position displacements between the planning and treatment setups. Second, we performed HTP for the planning anatomy at both positions and for the treatment-mimicking anatomy to study the effects of positioning and anatomy on the quality of the simulated hyperthermia treatment. Treatment quality was evaluated using SAR-based parameters. RESULTS We found an average displacement of 2 cm between the planning and treatment positions. These displacements caused average absolute differences of ∼12% in TC25 and 10.4%-15.9% in THQ. Furthermore, we found that including the accurate treatment position and anatomy in treatment planning led to an improvement of 2% in TC25 and 4.6%-10.6% in THQ. CONCLUSIONS This study showed that precise patient position and anatomy are relevant since they affect the accuracy of HTP predictions. The major part of the improved accuracy is related to implementing the correct position of the patient in the applicator. Hence, our study shows a clear incentive to accurately match the patient position in HTP with the actual treatment.
Affiliation(s)
- Iva VilasBoas-Ribeiro
- Department of Radiotherapy, Erasmus MC Cancer Institute, University Medical Center Rotterdam, Rotterdam, The Netherlands
| | - Martine Franckena
- Department of Radiotherapy, Erasmus MC Cancer Institute, University Medical Center Rotterdam, Rotterdam, The Netherlands
| | - Gerard C van Rhoon
- Department of Radiotherapy, Erasmus MC Cancer Institute, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Department of Applied Radiation and Isotopes, Reactor Institute Delft, Delft University of Technology, Delft, The Netherlands
| | - Juan A Hernández-Tamames
- Department of Radiology and Nuclear Medicine, Erasmus MC Cancer Institute, University Medical Center Rotterdam, Rotterdam, The Netherlands
| | - Margarethus M Paulides
- Department of Radiotherapy, Erasmus MC Cancer Institute, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Care and Cure research lab (EM-4C&C) of the Electromagnetics Group, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
36
Zhang C, Porto A, Rolfe S, Kocatulum A, Maga AM. Automated landmarking via multiple templates. PLoS One 2022; 17:e0278035. [PMID: 36454982 PMCID: PMC9714854 DOI: 10.1371/journal.pone.0278035] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Accepted: 11/08/2022] [Indexed: 12/02/2022] Open
Abstract
Manually collecting landmarks for quantifying complex morphological phenotypes can be laborious and subject to intra- and interobserver errors. However, most automated landmarking methods, while efficient and consistent, fall short when landmarking highly variable samples due to the bias introduced by the use of a single template. We introduce a fast and open-source automated landmarking pipeline (MALPACA) that utilizes multiple templates to accommodate large-scale variation. We also introduce a K-means method of choosing the templates that can be used in conjunction with MALPACA when no prior information for selecting templates is available. Our results confirm that MALPACA significantly outperforms single-template methods in landmarking both single- and multi-species samples. K-means-based template selection can also avoid choosing the worst set of templates when compared to random template selection. We further offer an example of a post-hoc quality check for each individual template for further refinement. In summary, MALPACA is an efficient and reproducible method that can accommodate large morphological variability, such as that commonly found in evolutionary studies. To support the research community, we have developed open-source and user-friendly software tools for performing K-means multi-template selection and MALPACA.
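K-means template selection of the kind described, clustering specimens in some feature space and taking the specimen nearest each centroid as a template, can be sketched as follows. This is a generic illustration under assumed inputs (e.g. per-specimen shape features), not MALPACA's implementation:

```python
import numpy as np

def select_templates(features, k, iters=50, seed=0):
    """Pick k template specimens via K-means on per-specimen feature vectors.

    features: (n_specimens, n_features) array. Returns the indices of the
    specimens closest to each cluster centroid, to serve as landmarking
    templates that together span the sample's variation.
    """
    rng = np.random.default_rng(seed)
    X = np.asarray(features, float)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each specimen to its nearest centroid, then update centroids
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return sorted({int(dists[:, j].argmin()) for j in range(k)})
```

Choosing the medoid-like specimen per cluster (rather than the centroid itself) matters because a template must be an actual specimen that can be landmarked and registered.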
Affiliation(s)
- Chi Zhang
- Center for Developmental Biology and Regenerative Medicine, Seattle Children’s Research Institute, Seattle, Washington, United States of America
- Arthur Porto
- Department of Biological Sciences, Louisiana State University, Baton Rouge, Louisiana, United States of America
- Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana, United States of America
- Sara Rolfe
- Center for Developmental Biology and Regenerative Medicine, Seattle Children’s Research Institute, Seattle, Washington, United States of America
- Friday Harbor Laboratories, University of Washington, San Juan Island, Washington, United States of America
- Altan Kocatulum
- Alfred University, Alfred, New York, United States of America
- A. Murat Maga
- Center for Developmental Biology and Regenerative Medicine, Seattle Children’s Research Institute, Seattle, Washington, United States of America
- Division of Craniofacial Medicine, Department of Pediatrics, University of Washington, Seattle, Washington, United States of America
- * E-mail:
37
Chadoulos CG, Tsaopoulos DE, Moustakidis S, Tsakiridis NL, Theocharis JB. A novel multi-atlas segmentation approach under the semi-supervised learning framework: Application to knee cartilage segmentation. Comput Methods Programs Biomed 2022; 227:107208. [PMID: 36384059 DOI: 10.1016/j.cmpb.2022.107208] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Revised: 10/19/2022] [Accepted: 10/27/2022] [Indexed: 06/16/2023]
Abstract
BACKGROUND AND OBJECTIVE Multi-atlas based segmentation techniques, which rely on an atlas library comprised of training images labeled by an expert, have proven their effectiveness in multiple automatic segmentation applications. However, the use of exhaustive patch libraries combined with voxel-wise labeling incurs a large computational cost in terms of memory requirements and execution time. METHODS To confront this shortcoming, we propose a novel two-stage multi-atlas approach designed under the Semi-Supervised Learning (SSL) framework. The main properties of our method are as follows. First, instead of the voxel-wise labeling approach, the labeling of target voxels is accomplished by exploiting the spectral content of globally sampled datasets from the target image, along with their spatially correspondent data collected from the atlases. Following SSL, voxel classification is boosted by incorporating unlabeled data from the target image in addition to the labeled data from the atlas library. Our scheme constructively integrates fruitful concepts, including sparse reconstructions of voxels from linear neighborhoods, HOG feature descriptors of patches/regions, and label propagation via sparse graph constructions. Segmentation of the target image is carried out in two stages: stage-1 focuses on the sampling and labeling of global data, while stage-2 undertakes the same tasks for the out-of-sample data. Finally, we propose different graph-based methods for the labeling of global data and extend these methods to deal with the out-of-sample voxels. RESULTS A thorough experimental investigation is conducted on 76 subjects provided by the publicly accessible Osteoarthritis Initiative (OAI) repository. Comparative results and statistical analysis demonstrate that the suggested methodology exhibits superior segmentation performance compared to existing patch-based methods across all evaluation metrics (DSC: 88.89%, Precision: 89.86%, Recall: 88.12%), while requiring a considerably reduced computational load (>70% reduction in average execution time with respect to other patch-based methods). In addition, our approach compares favorably against non-patch-based and deep learning methods in terms of accuracy (on the 3-class problem). A final experiment on a 5-class setting of the problem demonstrates that our approach achieves performance comparable to existing state-of-the-art knee cartilage segmentation methods (DSC: 88.22% and DSC: 85.84% for femoral and tibial cartilage, respectively).
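The graph-based label propagation mentioned in the abstract can be illustrated with the classic diffusion scheme on an affinity graph (a minimal sketch in the spirit of "learning with local and global consistency", not the paper's sparse-graph construction; the dense affinity matrix and function name are assumptions of this example):

```python
import numpy as np

def propagate_labels(W, labels, alpha=0.9, n_iter=200):
    """Semi-supervised label propagation on an affinity graph.
    W: (n, n) symmetric affinity matrix (every node needs at least
    one edge); labels: int array with -1 marking unlabeled nodes.
    Returns a predicted class index for every node."""
    classes = np.unique(labels[labels >= 0])
    Y = np.zeros((len(labels), len(classes)))
    for j, c in enumerate(classes):
        Y[labels == c, j] = 1.0               # one-hot seed labels
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))           # symmetric normalization D^-1/2 W D^-1/2
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y   # diffuse labels, softly clamp seeds
    return classes[F.argmax(axis=1)]
```

Labeled atlas-derived samples act as seeds, and the iteration spreads their labels to the unlabeled target-image samples along strong graph edges.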
Affiliation(s)
- Christos G Chadoulos
- Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, 54124, Greece.
- Dimitrios E Tsaopoulos
- Institute for Bio-Economy and Agri-Technology, Centre for Research and Technology Hellas, Volos, 38333, Greece.
- Nikolaos L Tsakiridis
- Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, 54124, Greece.
- John B Theocharis
- Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, 54124, Greece.
38
Barzegar Z, Jamzad M. An Efficient Optimization Approach for Glioma Tumor Segmentation in Brain MRI. J Digit Imaging 2022; 35:1634-1647. [PMID: 35995900 PMCID: PMC9712883 DOI: 10.1007/s10278-022-00655-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Revised: 04/22/2022] [Accepted: 05/06/2022] [Indexed: 11/29/2022] Open
Abstract
Glioma is an aggressive type of cancer that develops in the brain or spinal cord. Because of the many differences in its shape and appearance, accurate segmentation of glioma to identify all parts of the tumor and its surrounding cancerous tissues is a challenging task. In recent research, the combination of multi-atlas segmentation and machine learning methods has provided robust and accurate results by learning from annotated atlas datasets. To overcome the side effects of the limited information available to atlas-based segmentation, and the long training phase of learning methods, we propose a semi-supervised unified framework for multi-label segmentation that formulates this problem in terms of a Markov Random Field energy optimization on a parametric graph. To evaluate the proposed framework, we apply it to publicly available BRATS datasets, including low- and high-grade glioma tumors. Experimental results indicate competitive performance compared to the state-of-the-art methods. Compared with the top-ranked methods, the proposed framework obtains the best Dice score for segmenting the "whole tumor" (WT), "tumor core" (TC) and "enhancing active tumor" (ET) regions. The achieved accuracy, characterized by the mean Dice score, is 94%. The motivation for using an MRF graph is to map the segmentation problem to an optimization model in a graphical environment. Therefore, by defining a proper graph structure and optimal constraints and flows in the continuous max-flow model, the segmentation is performed precisely.
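The paper minimizes its MRF energy with a continuous max-flow model; as a toy illustration of the same energy form (per-pixel unary data costs plus a Potts smoothness prior), the sketch below instead uses Iterated Conditional Modes, a far simpler solver that only finds a local minimum (function name and 4-connectivity are assumptions of this example, not the authors' method):

```python
import numpy as np

def icm_segment(unary, beta=1.0, n_iter=10):
    """Toy MRF segmentation: minimize per-pixel unary costs plus a
    Potts smoothness term on a 4-connected grid with Iterated
    Conditional Modes. unary: (H, W, L) cost of each label at each pixel."""
    H, W, L = unary.shape
    labels = unary.argmin(axis=2)             # data-term-only initialization
    for _ in range(n_iter):
        changed = False
        for y in range(H):
            for x in range(W):
                costs = unary[y, x].copy()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        # Potts penalty: +beta per disagreeing neighbor
                        costs += beta * (np.arange(L) != labels[ny, nx])
                best = int(costs.argmin())
                if best != labels[y, x]:
                    labels[y, x] = best
                    changed = True
        if not changed:                        # local minimum reached
            break
    return labels
```

The smoothness term is what removes isolated, noise-driven label flips that a purely voxel-wise classifier would keep; graph-cut or max-flow solvers optimize the same objective globally.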
Affiliation(s)
- Zeynab Barzegar
- Present Address: Sharif University of Technology, Tehran, Iran
- Mansour Jamzad
- Present Address: Sharif University of Technology, Tehran, Iran
39
Leary D, Basran PS. The role of artificial intelligence in veterinary radiation oncology. Vet Radiol Ultrasound 2022; 63 Suppl 1:903-912. [PMID: 36514233 DOI: 10.1111/vru.13162] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Revised: 01/21/2022] [Accepted: 04/12/2022] [Indexed: 12/15/2022] Open
Abstract
Veterinary radiation oncology regularly deploys sophisticated contouring, image registration, and treatment planning optimization software for patient care. Over the past decade, advances in computing power and the rapid development of neural networks, open-source software packages, and data science have resulted in new research and clinical applications of artificial intelligence (AI) systems in radiation oncology. These technologies differ from conventional software in their level of complexity and their ability to learn from representative and local data. We provide clinical and research examples of AI in human radiation oncology and their potential applications in veterinary medicine throughout the patient's care-path: treatment simulation, deformable registration, auto-segmentation, automated treatment planning and plan selection, quality assurance, adaptive radiotherapy, and outcomes modeling. These technologies have the potential to offer significant time and cost savings in the veterinary setting; however, since the range of usefulness of these technologies has not been well studied or understood, care must be taken when adopting AI technologies in clinical practice. Over the next several years, some practical and realizable applications of AI in veterinary radiation oncology include automated segmentation of normal tissues and tumor volumes, deformable registration, multi-criteria plan optimization, and adaptive radiotherapy. Keys to success in adopting AI in veterinary radiation oncology include: establishing "truth data"; data harmonization; multi-institutional data and collaborations; standardized dose reporting and taxonomy; adopting an open-access philosophy for data collection and curation; open-source algorithm development; and transparent, platform-independent code development.
Affiliation(s)
- Del Leary
- Department of Environment and Radiological Health Sciences, College of Veterinary Medicine and Biomedical Sciences, Colorado State University, Fort Collins, Colorado, USA
- Parminder S Basran
- Department of Clinical Sciences, College of Veterinary Medicine, Cornell University, Ithaca, New York, USA
40
Ren M, Dey N, Styner MA, Botteron KN, Gerig G. Local Spatiotemporal Representation Learning for Longitudinally-consistent Neuroimage Analysis. Adv Neural Inf Process Syst 2022; 35:13541-13556. [PMID: 37614415 PMCID: PMC10445502] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Subscribe] [Scholar Register] [Indexed: 08/25/2023]
Abstract
Recent self-supervised advances in medical computer vision exploit the global and local anatomical self-similarity for pretraining prior to downstream tasks such as segmentation. However, current methods assume i.i.d. image acquisition, which is invalid in clinical study designs where follow-up longitudinal scans track subject-specific temporal changes. Further, existing self-supervised methods for medically-relevant image-to-image architectures exploit only spatial or temporal self-similarity and do so via a loss applied only at a single image-scale, with naive multi-scale spatiotemporal extensions collapsing to degenerate solutions. To these ends, this paper makes two contributions: (1) It presents a local and multi-scale spatiotemporal representation learning method for image-to-image architectures trained on longitudinal images. It exploits the spatiotemporal self-similarity of learned multi-scale intra-subject image features for pretraining and develops several feature-wise regularizations that avoid degenerate representations; (2) During finetuning, it proposes a surprisingly simple self-supervised segmentation consistency regularization to exploit intra-subject correlation. Benchmarked across various segmentation tasks, the proposed framework outperforms both well-tuned randomly-initialized baselines and current self-supervised techniques designed for both i.i.d. and longitudinal datasets. These improvements are demonstrated across both longitudinal neurodegenerative adult MRI and developing infant brain MRI and yield both higher performance and longitudinal consistency.
41
Huang K, Huang S, Chen G, Li X, Li S, Liang Y, Gao Y. An end-to-end multi-task system of automatic lesion detection and anatomical localization in whole-body bone scintigraphy by deep learning. Bioinformatics 2022; 39:6842323. [PMID: 36416135 PMCID: PMC9805554 DOI: 10.1093/bioinformatics/btac753] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2022] [Revised: 10/25/2022] [Accepted: 11/22/2022] [Indexed: 11/24/2022] Open
Abstract
SUMMARY Limited by spatial resolution and visual contrast, bone scintigraphy interpretation is susceptible to subjective factors, which considerably affects the accuracy and repeatability of lesion detection and anatomical localization. In this work, we design and implement an end-to-end multi-task deep learning model to perform automatic lesion detection and anatomical localization in whole-body bone scintigraphy. A total of 617 whole-body bone scintigraphy cases including anterior and posterior views were retrospectively analyzed. The proposed semi-supervised model consists of two task flows. The first, the lesion segmentation flow, receives image patches and is trained in a supervised way. The other, the skeleton segmentation flow, is trained on as few as five labeled images in conjunction with a multi-atlas approach, in a semi-supervised way. The two flows are joined at their encoder layers so that each flow can capture a more generalized distribution of the sample space and extract more abstract deep features. The experimental results show that the architecture achieved the highest precision in the finest bone segmentation task in both anterior and posterior images of whole-body scintigraphy. Such an end-to-end approach with very few manual annotation requirements is well suited for algorithm deployment. Moreover, the proposed approach reliably balances unsupervised label construction and supervised learning, providing useful insight for weakly labeled image analysis. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
Affiliation(s)
- Guojing Chen
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518037, China
- Xue Li
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518037, China
- Shawn Li
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518037, China
- Ying Liang
- To whom correspondence should be addressed.
- Yi Gao
- To whom correspondence should be addressed.
42
Jönsson H, Ekström S, Strand R, Pedersen MA, Molin D, Ahlström H, Kullberg J. An image registration method for voxel-wise analysis of whole-body oncological PET-CT. Sci Rep 2022; 12:18768. [PMID: 36335130 PMCID: PMC9637131 DOI: 10.1038/s41598-022-23361-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2021] [Accepted: 10/31/2022] [Indexed: 11/08/2022] Open
Abstract
Whole-body positron emission tomography-computed tomography (PET-CT) imaging in oncology provides comprehensive information on each patient's disease status. However, image interpretation of volumetric data is a complex and time-consuming task. In this work, an image registration method targeted towards computer-aided voxel-wise analysis of whole-body PET-CT data was developed. The method used both CT images and tissue segmentation masks in parallel to spatially align images step by step. To evaluate its performance, a set of baseline PET-CT images of 131 classical Hodgkin lymphoma (cHL) patients and longitudinal image series of 135 head and neck cancer (HNC) patients were registered between and within subjects according to the proposed method. Results showed that major organs and anatomical structures generally were registered correctly. Whole-body inverse consistency vector and intensity magnitude errors were on average less than 5 mm and 45 Hounsfield units, respectively, in both registration tasks. Image registration was feasible in terms of processing time, and the nearly automatic pipeline enabled efficient image processing. Metabolic tumor volumes of the cHL patients and registration-derived therapy-related tissue volume changes of the HNC patients mapped to template spaces confirmed proof of concept. In conclusion, the method established robust point correspondence and enabled quantitative visualization of group-wise image features at the voxel level.
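The inverse consistency error reported above measures how far a voxel drifts after applying the forward and then the backward deformation; a minimal 2D sketch (nearest-neighbor composition of displacement fields, an illustration rather than the authors' pipeline) could look like:

```python
import numpy as np

def inverse_consistency_error(fwd, bwd):
    """Mean inverse consistency error of a forward/backward pair of 2D
    displacement fields, shape (H, W, 2), displacements in voxels.
    Composes bwd(fwd(x)) with nearest-neighbor lookup and measures how
    far each voxel lands from where it started."""
    H, W, _ = fwd.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # where the forward field sends each voxel (rounded, clipped to the grid)
    fy = np.clip(np.rint(ys + fwd[..., 0]).astype(int), 0, H - 1)
    fx = np.clip(np.rint(xs + fwd[..., 1]).astype(int), 0, W - 1)
    # apply the backward field at the mapped locations
    ry = fy + bwd[fy, fx, 0]
    rx = fx + bwd[fy, fx, 1]
    return float(np.mean(np.hypot(ry - ys, rx - xs)))
```

A perfectly inverse-consistent pair returns every voxel to its origin, so the error is zero; residual error accumulates at image borders and wherever the two fields disagree.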
Affiliation(s)
- Hanna Jönsson
- Section of Radiology, Department of Surgical Sciences, Uppsala University, 751 85 Uppsala, Sweden
- Simon Ekström
- Section of Radiology, Department of Surgical Sciences, Uppsala University, 751 85 Uppsala, Sweden
- Robin Strand
- Section of Radiology, Department of Surgical Sciences, Uppsala University, 751 85 Uppsala, Sweden
- Department of Information Technology, Uppsala University, 751 05 Uppsala, Sweden
- Mette A. Pedersen
- Department of Nuclear Medicine & PET-Centre, Aarhus University Hospital, 8200 Aarhus N, Denmark
- Daniel Molin
- Department of Immunology, Genetics and Pathology, Uppsala University, 751 85 Uppsala, Sweden
- Håkan Ahlström
- Section of Radiology, Department of Surgical Sciences, Uppsala University, 751 85 Uppsala, Sweden
- Antaros Medical AB, BioVenture Hub, 431 53 Mölndal, Sweden
- Joel Kullberg
- Section of Radiology, Department of Surgical Sciences, Uppsala University, 751 85 Uppsala, Sweden
- Antaros Medical AB, BioVenture Hub, 431 53 Mölndal, Sweden
43
Zhu F, Wang S, Li D, Li Q. Similarity attention-based CNN for robust 3D medical image registration. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.104403] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
44
Kihara S, Koike Y, Takegawa H, Anetai Y, Nakamura S, Tanigawa N, Koizumi M. Clinical target volume segmentation based on gross tumor volume using deep learning for head and neck cancer treatment. Med Dosim 2022; 48:20-24. [PMID: 36273950 DOI: 10.1016/j.meddos.2022.09.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Revised: 02/07/2022] [Accepted: 09/17/2022] [Indexed: 02/04/2023]
Abstract
Accurate clinical target volume (CTV) delineation is important for head and neck intensity-modulated radiation therapy. However, delineation is time-consuming and susceptible to interobserver variability (IOV). Based on a manual contouring process commonly used in clinical practice, we developed a deep learning (DL)-based method to delineate a low-risk CTV with computed tomography (CT) and gross tumor volume (GTV) input and compared it with a CT-only input. A total of 310 patients with oropharynx cancer were randomly divided into a training set (250) and a test set (60). The low-risk CTV and primary GTV contours were used to generate label data for the input and ground truth. A 3D U-Net with a two-channel input of CT and GTV (U-NetGTV) was proposed and its performance was compared with a U-Net with only CT input (U-NetCT). The Dice similarity coefficient (DSC) and average Hausdorff distance (AHD) were evaluated. The time required to predict the CTV was 0.86 s per patient. U-NetGTV showed a significantly higher mean DSC value than U-NetCT (0.80 ± 0.03 vs 0.76 ± 0.05) and a significantly lower mean AHD value (3.0 ± 0.5 mm vs 3.5 ± 0.7 mm). Compared to the existing DL method with only CT input, the proposed GTV-based segmentation using DL achieved more precise low-risk CTV segmentation for head and neck cancer. Our findings suggest that the proposed method could reduce the contouring time for a low-risk CTV, allowing the standardization of target delineations for head and neck cancer.
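The two evaluation metrics used here, the Dice similarity coefficient and the average Hausdorff distance, are easy to state concretely (a minimal sketch; the symmetric-average form of AHD shown is one common convention and may differ in detail from the paper's definition):

```python
import numpy as np

def dice(x, y):
    """Dice similarity coefficient of two binary masks."""
    x, y = np.asarray(x, bool), np.asarray(y, bool)
    return 2.0 * np.logical_and(x, y).sum() / (x.sum() + y.sum())

def average_hausdorff(a, b):
    """Symmetric average Hausdorff distance between point sets
    a: (n, d) and b: (m, d) - the mean of the two directed
    average nearest-neighbor distances."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # (n, m) pairwise
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

Dice rewards volumetric overlap while AHD penalizes contour misplacement, which is why studies like this one report both.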
Affiliation(s)
- Sayaka Kihara
- Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, 1-7 Yamadaoka, Suita, Osaka, 565-0871, Japan
- Yuhei Koike
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan.
- Hideki Takegawa
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
- Yusuke Anetai
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
- Satoaki Nakamura
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
- Noboru Tanigawa
- Department of Radiology, Kansai Medical University, 2-5-1 Shinmachi, Hirakata, Osaka, 573-1010, Japan
- Masahiko Koizumi
- Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, 1-7 Yamadaoka, Suita, Osaka, 565-0871, Japan
45
Schevenels K, Michiels L, Lemmens R, De Smedt B, Zink I, Vandermosten M. The role of the hippocampus in statistical learning and language recovery in persons with post stroke aphasia. Neuroimage Clin 2022; 36:103243. [PMID: 36306718 PMCID: PMC9668653 DOI: 10.1016/j.nicl.2022.103243] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Revised: 10/17/2022] [Accepted: 10/19/2022] [Indexed: 11/11/2022]
Abstract
Although several studies have aimed for accurate predictions of language recovery in post stroke aphasia, individual language outcomes remain hard to predict. Large-scale prediction models are built using data from patients mainly in the chronic phase after stroke, although it is clinically more relevant to consider data from the acute phase. Previous research has mainly focused on deficits, i.e., behavioral deficits or specific brain damage, rather than compensatory mechanisms, i.e., intact cognitive skills or undamaged brain regions. One such unexplored brain region that might support language (re)learning in aphasia is the hippocampus, a region that has commonly been associated with an individual's learning potential, including statistical learning. This refers to a set of mechanisms upon which we rely heavily in daily life to learn a range of regularities across cognitive domains. Against this background, thirty-three patients with aphasia (22 males and 11 females, M = 69.76 years, SD = 10.57 years) were followed for 1 year in the acute (1-2 weeks), subacute (3-6 months) and chronic phase (9-12 months) post stroke. We evaluated the unique predictive value of early structural hippocampal measures for short-term and long-term language outcomes (measured by the ANELT). In addition, we investigated whether statistical learning abilities were intact in patients with aphasia using three different tasks: an auditory-linguistic and visual task based on the computation of transitional probabilities and a visuomotor serial reaction time task. Finally, we examined the association of individuals' statistical learning potential with acute measures of hippocampal gray and white matter. Using Bayesian statistics, we found moderate evidence for the contribution of left hippocampal gray matter in the acute phase to the prediction of long-term language outcomes, over and above information on the lesion and the initial language deficit (measured by the ScreeLing). 
Non-linguistic statistical learning in patients with aphasia, measured in the subacute phase, was intact at the group level compared to 23 healthy older controls (8 males and 15 females, M = 74.09 years, SD = 6.76 years). Visuomotor statistical learning correlated with acute hippocampal gray and white matter. These findings reveal that particularly left hippocampal gray matter in the acute phase is a potential marker of language recovery after stroke, possibly through its statistical learning ability.
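Forward transitional probabilities, the quantity underlying the statistical learning tasks described above, are simple to compute from a syllable stream (a minimal sketch; the function name is an assumption of this example):

```python
from collections import Counter

def transitional_probabilities(stream):
    """Forward transitional probabilities P(next | current) over a
    sequence of syllables: count(x followed by y) / count(x as a
    predecessor). Returns a dict keyed by (x, y) pairs."""
    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])
    return {(x, y): n / firsts[x] for (x, y), n in pairs.items()}
```

In such paradigms, within-word transitions approach 1.0 while transitions across word boundaries are markedly lower; that dip is the statistical cue learners are assumed to exploit.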
Affiliation(s)
- Klara Schevenels
- Research Group Experimental Oto-Rhino-Laryngology, Department of Neurosciences, KU Leuven, Onderwijs en Navorsing 2 (O&N2), Herestraat 49 box 721, Leuven 3000, Belgium; Leuven Brain Institute, KU Leuven, Onderwijs en Navorsing 5 (O&N 5), Herestraat 49 box 1020, Leuven 3000, Belgium.
- Laura Michiels
- Department of Neurology, University Hospitals Leuven, Herestraat 49, Leuven 3000, Belgium; Research Group Experimental Neurology, Department of Neurosciences, KU Leuven, Herestraat 49 box 7003, Leuven 3000, Belgium; Laboratory of Neurobiology, VIB Center for Brain & Disease Research, Onderwijs en Navorsing 5 (O&N 5), Herestraat 49 box 602, Leuven 3000, Belgium; Leuven Brain Institute, KU Leuven, Onderwijs en Navorsing 5 (O&N 5), Herestraat 49 box 1020, Leuven 3000, Belgium.
- Robin Lemmens
- Department of Neurology, University Hospitals Leuven, Herestraat 49, Leuven 3000, Belgium; Research Group Experimental Neurology, Department of Neurosciences, KU Leuven, Herestraat 49 box 7003, Leuven 3000, Belgium; Laboratory of Neurobiology, VIB Center for Brain & Disease Research, Onderwijs en Navorsing 5 (O&N 5), Herestraat 49 box 602, Leuven 3000, Belgium; Leuven Brain Institute, KU Leuven, Onderwijs en Navorsing 5 (O&N 5), Herestraat 49 box 1020, Leuven 3000, Belgium.
- Bert De Smedt
- Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Leopold Vanderkelenstraat 32 box 3765, Leuven 3000, Belgium; Leuven Brain Institute, KU Leuven, Onderwijs en Navorsing 5 (O&N 5), Herestraat 49 box 1020, Leuven 3000, Belgium.
- Inge Zink
- Research Group Experimental Oto-Rhino-Laryngology, Department of Neurosciences, KU Leuven, Onderwijs en Navorsing 2 (O&N2), Herestraat 49 box 721, Leuven 3000, Belgium; Leuven Brain Institute, KU Leuven, Onderwijs en Navorsing 5 (O&N 5), Herestraat 49 box 1020, Leuven 3000, Belgium.
- Maaike Vandermosten
- Research Group Experimental Oto-Rhino-Laryngology, Department of Neurosciences, KU Leuven, Onderwijs en Navorsing 2 (O&N2), Herestraat 49 box 721, Leuven 3000, Belgium; Leuven Brain Institute, KU Leuven, Onderwijs en Navorsing 5 (O&N 5), Herestraat 49 box 1020, Leuven 3000, Belgium.
46
Makrogiannis S, Okorie A, Di Iorio A, Bandinelli S, Ferrucci L. Multi-atlas segmentation and quantification of muscle, bone and subcutaneous adipose tissue in the lower leg using peripheral quantitative computed tomography. Front Physiol 2022; 13:951368. [PMID: 36311235 PMCID: PMC9614313 DOI: 10.3389/fphys.2022.951368] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Accepted: 09/26/2022] [Indexed: 11/26/2022] Open
Abstract
Accurate and reproducible tissue identification is essential for understanding structural and functional changes that may occur naturally with aging, because of a chronic disease, or in response to intervention therapies. Peripheral quantitative computed tomography (pQCT) is regularly employed for body composition studies, especially for the structural and material properties of bone. Furthermore, pQCT acquisition requires a low radiation dose and the scanner is compact and portable. However, pQCT scans have limited spatial resolution and moderate SNR, and pQCT image quality is frequently degraded by involuntary subject movement during image acquisition. These limitations may compromise the accuracy of tissue quantification and emphasize the need for automated and robust quantification methods. We propose a tissue identification and quantification methodology that addresses these image quality limitations and artifacts, with particular attention to subject movement. We introduce a multi-atlas image segmentation (MAIS) framework for semantic segmentation of hard and soft tissues in pQCT scans at multiple levels of the lower leg. We describe the stages of statistical atlas generation, deformable registration and multi-tissue classifier fusion. We evaluated the performance of our methodology using multiple deformable registration approaches against reference tissue masks. We also evaluated the performance of conventional model-based segmentation against the same reference data to facilitate comparisons. We studied the effect of subject movement on tissue segmentation quality. We also applied the top-performing method to a larger out-of-sample dataset and report the quantification results. The results show that multi-atlas image segmentation with diffeomorphic deformation and probabilistic label fusion produces very good quality over all tissues, even for scans with significant quality degradation. The application of our technique to the larger dataset reveals trends of age-related body composition change that are consistent with the literature. Because of its robustness to subject motion artifacts, our MAIS methodology enables analysis of a larger number of scans than conventional state-of-the-art methods. Automated analysis of both soft and hard tissues in pQCT is another contribution of this work.
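The multi-tissue classifier fusion stage can be illustrated with its simplest variant, per-voxel majority voting across registered atlas label maps (the paper uses probabilistic label fusion; this majority-vote sketch is a deliberate simplification):

```python
import numpy as np

def majority_vote_fusion(atlas_labels):
    """Fuse registered atlas label maps by per-voxel majority vote.
    atlas_labels: sequence of integer label arrays, all warped into
    the same target space. Returns the winning label per voxel."""
    stack = np.stack(atlas_labels)
    n_labels = int(stack.max()) + 1
    # count votes for each label at each voxel, then take the argmax
    votes = np.stack([(stack == l).sum(axis=0) for l in range(n_labels)])
    return votes.argmax(axis=0)
```

Probabilistic fusion generalizes this by weighting each atlas's vote, for example by local image similarity after registration, instead of counting all atlases equally.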
Affiliation(s)
- Sokratis Makrogiannis
- Math Imaging and Visual Computing Lab, Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, DE, United States
- *Correspondence: Sokratis Makrogiannis
- Azubuike Okorie
- Math Imaging and Visual Computing Lab, Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, DE, United States
- Angelo Di Iorio
- Antalgic Mini-invasive and Rehab-Outpatients Unit, Department of Innovative Technologies in Medicine & Dentistry, University “G.d’Annunzio”, Chieti-Pescara, Italy
- Luigi Ferrucci
- National Institute on Aging, National Institutes of Health, Baltimore, MD, United States
47
Ma J, Zhang Y, Gu S, Zhu C, Ge C, Zhang Y, An X, Wang C, Wang Q, Liu X, Cao S, Zhang Q, Liu S, Wang Y, Li Y, He J, Yang X. AbdomenCT-1K: Is Abdominal Organ Segmentation a Solved Problem? IEEE Trans Pattern Anal Mach Intell 2022; 44:6695-6714. [PMID: 34314356 DOI: 10.1109/tpami.2021.3100536] [Citation(s) in RCA: 32] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
With the unprecedented developments in deep learning, automatic segmentation of the main abdominal organs seems to be a solved problem, as state-of-the-art (SOTA) methods have achieved results comparable to inter-rater variability on many benchmark datasets. However, most of the existing abdominal datasets only contain single-center, single-phase, single-vendor, or single-disease cases, and it is unclear whether the excellent performance generalizes to diverse datasets. This paper presents a large and diverse abdominal CT organ segmentation dataset, termed AbdomenCT-1K, with more than 1000 (1K) CT scans from 12 medical centers, including multi-phase, multi-vendor, and multi-disease cases. Furthermore, we conduct a large-scale study of liver, kidney, spleen, and pancreas segmentation and reveal the unsolved segmentation problems of the SOTA methods, such as limited generalization to distinct medical centers, phases, and unseen diseases. To advance these unsolved problems, we further build four organ segmentation benchmarks for fully supervised, semi-supervised, weakly supervised, and continual learning, which are currently challenging and active research topics. Accordingly, we develop a simple and effective method for each benchmark, which can be used as an out-of-the-box method and a strong baseline. We believe the AbdomenCT-1K dataset will promote future in-depth research towards clinically applicable abdominal organ segmentation methods.
|
48
|
Casamitjana A, Iglesias JE. High-resolution atlasing and segmentation of the subcortex: Review and perspective on challenges and opportunities created by machine learning. Neuroimage 2022; 263:119616. [PMID: 36084858 DOI: 10.1016/j.neuroimage.2022.119616] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Revised: 08/30/2022] [Accepted: 09/05/2022] [Indexed: 11/17/2022] Open
Abstract
This paper reviews almost three decades of work on atlasing and segmentation methods for subcortical structures in human brain MRI. In writing this survey, we have three distinct aims. First, to document the evolution of digital subcortical atlases of the human brain, from the early MRI templates published in the nineties, to the complex multi-modal atlases at the subregion level that are available today. Second, to provide a detailed record of related efforts in the automated segmentation front, from earlier atlas-based methods to modern machine learning approaches. And third, to present a perspective on the future of high-resolution atlasing and segmentation of subcortical structures in in vivo human brain MRI, including open challenges and opportunities created by recent developments in machine learning.
Affiliation(s)
- Adrià Casamitjana
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK.
- Juan Eugenio Iglesias
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK; Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Boston, USA
|
49
|
Deep learning models and traditional automated techniques for brain tumor segmentation in MRI: a review. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10245-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
50
|
Advances and Innovations in Ablative Head and Neck Oncologic Surgery Using Mixed Reality Technologies in Personalized Medicine. J Clin Med 2022; 11:jcm11164767. [PMID: 36013006 PMCID: PMC9410374 DOI: 10.3390/jcm11164767] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Revised: 08/10/2022] [Accepted: 08/12/2022] [Indexed: 11/17/2022] Open
Abstract
The benefit of computer-assisted planning in head and neck ablative and reconstructive surgery has been extensively documented over the last decade. This approach has been proven to offer a more secure surgical procedure. In the treatment of cancer of the head and neck, computer-assisted surgery can be used to visualize and estimate the location and extent of the tumor mass. Nowadays, some software tools even allow visualization of the structures of interest in a mixed reality environment. However, the precise integration of mixed reality systems into daily clinical routine remains a challenge. To date, this technology is not yet fully integrated into clinical settings such as the tumor board, surgical planning for head and neck tumors, or medical and surgical education. As a consequence, the handling of these systems is still experimental in nature, and decision-making based on the presented data is not yet widely used. The aim of this paper is to present a novel, user-friendly 3D planning and mixed reality software and its potential application for ablative and reconstructive head and neck surgery.
|