1. Tian Y, Liang Y, Chen Y, Zhang J, Bian H. Multilevel support-assisted prototype optimization network for few-shot medical segmentation of lung lesions. Sci Rep 2025; 15:3290. [PMID: 39865124; PMCID: PMC11770124; DOI: 10.1038/s41598-025-87829-4]
Abstract
Medical image annotation is scarce and costly. Few-shot segmentation has therefore been widely applied to medical images, learning to segment from only a few annotated examples. However, research on lesion segmentation for lung diseases is still limited, especially for pulmonary aspergillosis. Lesion areas usually have complex shapes and blurred edges, so lesion segmentation requires particular attention to the diversity and uncertainty of lesions. To address this challenge, we propose MSPO-Net, a multilevel support-assisted prototype optimization network designed for few-shot lesion segmentation in computed tomography (CT) images of lung diseases. MSPO-Net learns lesion prototypes from low-level to high-level features. A self-attention threshold-learning strategy focuses on global information and obtains an optimal threshold for CT images. Our model refines prototypes through a support-assisted prototype optimization module, adaptively enhancing their representativeness for diverse lesions and improving adaptation to unseen lesions. In clinical examinations, CT is more practical than X-rays. To ensure the quality of our work, we established a small-scale CT image dataset covering three lung diseases, annotated by experienced doctors. Experiments demonstrate that MSPO-Net improves the segmentation performance and robustness for lung disease lesions. MSPO-Net achieves state-of-the-art performance in both single and unseen lung disease segmentation, indicating its potential to reduce doctors' workload and improve diagnostic accuracy. This research therefore has clinical significance. Code is available at https://github.com/Tian-Yuan-ty/MSPO-Net.
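The prototype mechanism summarized in this abstract follows a pattern common to few-shot segmentation: a foreground prototype is pooled from support features under the support mask and compared with query features. The sketch below (PyTorch) illustrates only that generic pattern; the multilevel prototypes, self-attention threshold learning, and support-assisted optimization module of MSPO-Net are not reproduced, and all tensor names and the fixed threshold are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def masked_average_pooling(feat, mask):
    """Pool a foreground prototype from support features under the support mask.

    feat: (B, C, H, W) support feature map; mask: (B, 1, H, W) binary lesion mask.
    Returns a (B, C) prototype vector.
    """
    mask = F.interpolate(mask, size=feat.shape[-2:], mode="bilinear", align_corners=False)
    return (feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)

def prototype_segmentation(query_feat, prototype, threshold=0.5, scale=20.0):
    """Segment the query by cosine similarity to the prototype (soft thresholding)."""
    proto = prototype[..., None, None]                   # (B, C, 1, 1)
    sim = F.cosine_similarity(query_feat, proto, dim=1)  # (B, H, W)
    return torch.sigmoid(scale * (sim - threshold))      # soft foreground map

if __name__ == "__main__":
    support_feat = torch.randn(1, 64, 32, 32)
    support_mask = (torch.rand(1, 1, 256, 256) > 0.5).float()
    query_feat = torch.randn(1, 64, 32, 32)
    proto = masked_average_pooling(support_feat, support_mask)
    print(prototype_segmentation(query_feat, proto).shape)  # torch.Size([1, 32, 32])
```

In MSPO-Net the threshold is learned rather than fixed; the constant here only stands in for that learned value.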
Affiliation(s)
- Yuan Tian: College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China
- Yongquan Liang: College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, Shandong, China
- Yufeng Chen: Shandong Provincial Public Health Clinical Center, Shandong University, Jinan, 250013, Shandong, China
- Jingjing Zhang: Shandong Provincial Public Health Clinical Center, Shandong University, Jinan, 250013, Shandong, China
- Hongyang Bian: Shandong Provincial Public Health Clinical Center, Shandong University, Jinan, 250013, Shandong, China
2. Liu Z, Kainth K, Zhou A, Deyer TW, Fayad ZA, Greenspan H, Mei X. A review of self-supervised, generative, and few-shot deep learning methods for data-limited magnetic resonance imaging segmentation. NMR Biomed 2024; 37:e5143. [PMID: 38523402; DOI: 10.1002/nbm.5143]
Abstract
Magnetic resonance imaging (MRI) is a ubiquitous medical imaging technology with applications in disease diagnostics, intervention, and treatment planning. Accurate MRI segmentation is critical for diagnosing abnormalities, monitoring diseases, and deciding on a course of treatment. With the advent of advanced deep learning frameworks, fully automated and accurate MRI segmentation is advancing. Traditional supervised deep learning techniques have advanced tremendously, reaching clinical-level accuracy in the field of segmentation. However, these algorithms still require a large amount of annotated data, which is often unavailable or impractical to obtain. One way to circumvent this issue is to utilize algorithms that exploit a limited amount of labeled data. This paper reviews such state-of-the-art algorithms that use a limited number of annotated samples. We explain the fundamental principles of self-supervised learning, generative models, few-shot learning, and semi-supervised learning and summarize their applications in cardiac, abdominal, and brain MRI segmentation. Throughout this review, we highlight algorithms that can be employed based on the quantity of annotated data available. We also present a comprehensive list of notable publicly available MRI segmentation datasets. To conclude, we discuss possible future directions of the field, including emerging algorithms such as contrastive language-image pretraining and potential combinations across the methods discussed, that can further increase the efficacy of image segmentation with limited labels.
Affiliation(s)
- Zelong Liu: BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Komal Kainth: BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Alexander Zhou: BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Timothy W Deyer: East River Medical Imaging, New York, New York, USA; Department of Radiology, Cornell Medicine, New York, New York, USA
- Zahi A Fayad: BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA; Department of Diagnostic, Molecular, and Interventional Radiology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Hayit Greenspan: BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA; Department of Diagnostic, Molecular, and Interventional Radiology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Xueyan Mei: BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA; Department of Diagnostic, Molecular, and Interventional Radiology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
3. Zhang D, Zong F, Zhang Q, Yue Y, Zhang F, Zhao K, Wang D, Wang P, Zhang X, Liu Y. Anat-SFSeg: Anatomically-guided superficial fiber segmentation with point-cloud deep learning. Med Image Anal 2024; 95:103165. [PMID: 38608510; DOI: 10.1016/j.media.2024.103165]
Abstract
Diffusion magnetic resonance imaging (dMRI) tractography is a critical technique to map the brain's structural connectivity. Accurate segmentation of white matter, particularly the superficial white matter (SWM), is essential for neuroscience and clinical research. However, the SWM is challenging to segment because of its short, U-shaped connections between adjacent gyri. In this work, we propose an Anatomically-guided Superficial Fiber Segmentation (Anat-SFSeg) framework to improve performance on SWM segmentation. The framework consists of a unique fiber anatomical descriptor (named FiberAnatMap) and a deep learning network based on point-cloud data. The spatial coordinates of fibers, represented as point clouds, as well as the anatomical features at both the individual and group levels, are fed into a neural network. The network is trained on Human Connectome Project (HCP) datasets and tested on subjects with a range of cognitive impairment levels. One new metric, fiber anatomical region proportion (FARP), quantifies the ratio of fibers in the defined brain regions and enables comparison with other methods. Another metric, anatomical region fiber count (ARFC), represents the average fiber number in each cluster for the assessment of inter-subject differences. The experimental results demonstrate that Anat-SFSeg achieves the highest accuracy on HCP datasets and generalizes well to clinical datasets. Diffusion tensor metrics and ARFC show alterations associated with disorder severity in patients with Alzheimer's disease (AD) and mild cognitive impairment (MCI). Correlations with cognitive grades show that these metrics are potential neuroimaging biomarkers for AD. Furthermore, Anat-SFSeg could be utilized to explore other neurodegenerative, neurodevelopmental, or psychiatric disorders.
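The two metrics named in this abstract are defined only descriptively: FARP as the ratio of fibers falling in defined brain regions, and ARFC as the average fiber count per cluster. A minimal NumPy sketch of how such quantities could be computed is shown below; the exact region definitions and normalization used by Anat-SFSeg are assumptions, and the label arrays are synthetic.

```python
import numpy as np

def fiber_anatomical_region_proportion(fiber_region_labels, target_regions):
    """FARP-style ratio: fraction of fibers assigned to the defined brain regions.

    fiber_region_labels: (n_fibers,) integer region label per fiber.
    target_regions: iterable of region labels considered "defined" for the tract.
    """
    labels = np.asarray(fiber_region_labels)
    return float(np.isin(labels, list(target_regions)).mean())

def anatomical_region_fiber_count(fiber_cluster_labels):
    """ARFC-style statistic: average number of fibers per cluster."""
    _, counts = np.unique(np.asarray(fiber_cluster_labels), return_counts=True)
    return float(counts.mean())

if __name__ == "__main__":
    region_per_fiber = np.random.randint(0, 10, size=1000)   # synthetic region assignments
    cluster_per_fiber = np.random.randint(0, 30, size=1000)  # synthetic cluster assignments
    print(fiber_anatomical_region_proportion(region_per_fiber, target_regions={2, 5}))
    print(anatomical_region_fiber_count(cluster_per_fiber))
```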
Affiliation(s)
- Di Zhang: School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
- Fangrong Zong: School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
- Qichen Zhang: School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
- Yunhui Yue: School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
- Fan Zhang: School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Kun Zhao: School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
- Dawei Wang: Department of Radiology, Qilu Hospital of Shandong University, Jinan, China; Department of Epidemiology and Health Statistics, School of Public Health, Shandong University, Jinan, China; Institute of Brain and Brain-Inspired Science, Shandong University, Jinan, China
- Pan Wang: Department of Neurology, Tianjin Huanhu Hospital, Tianjin, China
- Xi Zhang: Department of Neurology, the Second Medical Centre, National Clinical Research Centre for Geriatric Diseases, Chinese PLA General Hospital, Beijing, China
- Yong Liu: School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing, China
4. Joshi A, Li H, Parikh NA, He L. A systematic review of automated methods to perform white matter tract segmentation. Front Neurosci 2024; 18:1376570. [PMID: 38567281; PMCID: PMC10985163; DOI: 10.3389/fnins.2024.1376570]
Abstract
White matter tract segmentation is a pivotal research area that leverages diffusion-weighted magnetic resonance imaging (dMRI) for the identification and mapping of individual white matter tracts and their trajectories. This study provides a comprehensive systematic literature review of automated methods for white matter tract segmentation in brain dMRI scans. The PubMed, ScienceDirect [NeuroImage, NeuroImage (Clinical), Medical Image Analysis], Scopus, and IEEE Xplore databases, together with the conference proceedings of the Medical Image Computing and Computer Assisted Intervention Society (MICCAI) and the International Symposium on Biomedical Imaging (ISBI), were searched from January 2013 to September 2023. This systematic search identified 619 articles. Applying the search query "white matter tract segmentation OR fiber tract identification OR fiber bundle segmentation OR tractography dissection OR white matter parcellation OR tract segmentation," 59 published studies were selected. Among these, 27% employed direct voxel-based methods, 25% applied streamline-based clustering methods, 20% used streamline-based classification methods, 14% implemented atlas-based methods, and 14% utilized hybrid approaches. The paper delves into the research gaps and challenges associated with each of these categories. Additionally, this review illuminates the most frequently utilized public datasets for tract segmentation along with their specific characteristics. Furthermore, it presents evaluation strategies and their key attributes. The review concludes with a detailed discussion of the challenges and future directions in this field.
Affiliation(s)
- Ankita Joshi: Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Neurodevelopmental Disorders Prevention Center, Perinatal Institute, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
- Hailong Li: Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Neurodevelopmental Disorders Prevention Center, Perinatal Institute, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, United States
- Nehal A. Parikh: Neurodevelopmental Disorders Prevention Center, Perinatal Institute, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, United States
- Lili He: Imaging Research Center, Department of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Neurodevelopmental Disorders Prevention Center, Perinatal Institute, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, United States; Computer Science, Biomedical Informatics, and Biomedical Engineering, University of Cincinnati, Cincinnati, OH, United States
5. Lv T, Hong X, Liu Y, Miao K, Sun H, Li L, Deng C, Jiang C, Pan X. AI-powered interpretable imaging phenotypes noninvasively characterize tumor microenvironment associated with diverse molecular signatures and survival in breast cancer. Comput Methods Programs Biomed 2024; 243:107857. [PMID: 37865058; DOI: 10.1016/j.cmpb.2023.107857]
Abstract
BACKGROUND AND OBJECTIVES: The tumor microenvironment (TME) is a determining factor in decision-making and personalized treatment for breast cancer, which is highly intra-tumor heterogeneous (ITH). However, the noninvasive imaging phenotypes of the TME are poorly understood, even though the invasive genotypes are largely known in breast cancer. METHODS: Here, we develop an artificial intelligence (AI)-driven approach for noninvasively characterizing the TME by integrating the predictive power of deep learning with the explainability of human-interpretable imaging phenotypes (IMPs) derived from 4D dynamic imaging (DCE-MRI) of 342 breast tumors linked to genomic and clinical data, connecting cancer phenotypes to genotypes. An unsupervised dual-attention deep graph clustering model (DGCLM) is developed to divide the bulk tumor into multiple spatially segregated and phenotypically consistent subclusters. IMPs ranging from spatial heterogeneity to kinetic heterogeneity are leveraged to capture the architecture, interaction, and proximity between intratumoral subclusters. RESULTS: We demonstrate that our IMPs correlate with well-known markers of the TME and can predict distinct molecular signatures, including expression of hormone receptors, epidermal growth factor receptor, and immune checkpoint proteins, with accuracy, reliability, and transparency superior to recent state-of-the-art radiomics and 'black-box' deep learning methods. Moreover, prognostic value is confirmed by survival analysis accounting for IMPs. CONCLUSIONS: Our approach provides an interpretable, quantitative, and comprehensive perspective for characterizing the TME in a noninvasive and clinically relevant manner.
Affiliation(s)
- Tianxu Lv: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
- Xiaoyan Hong: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
- Yuan Liu: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
- Kai Miao: Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China
- Heng Sun: Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China
- Lihua Li: Institute of Biomedical Engineering and Instrumentation, Hangzhou Dianzi University, Hangzhou 310018, China
- Chuxia Deng: Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China; MOE Frontier Science Centre for Precision Oncology, University of Macau, Macau SAR, China
- Chunjuan Jiang: Department of Nuclear Medicine, The Second Xiangya Hospital of Central South University, Changsha, Hunan, China
- Xiang Pan: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China; Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China; MOE Frontier Science Centre for Precision Oncology, University of Macau, Macau SAR, China; Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai 201318, China
6. Liu W, Zhuo Z, Liu Y, Ye C. One-shot segmentation of novel white matter tracts via extensive data augmentation and adaptive knowledge transfer. Med Image Anal 2023; 90:102968. [PMID: 37729793; DOI: 10.1016/j.media.2023.102968]
Abstract
The use of convolutional neural networks (CNNs) has allowed accurate white matter (WM) tract segmentation on diffusion magnetic resonance imaging (dMRI). To train the CNN-based segmentation models, a large number of scans on which WM tracts are annotated need to be collected, and these annotated scans can be accumulated over a long period of time. However, when novel WM tracts that are different from existing annotated WM tracts are of interest, additional annotations are required for their segmentation. Due to the cost of manual annotations, methods have been developed for few-shot segmentation of novel WM tracts, where the segmentation knowledge is transferred from existing WM tracts to novel WM tracts and the amount of annotated data for novel WM tracts is reduced. Despite these developments, it is desirable to further reduce the amount of annotated data to the one-shot setting with a single annotated image. To address this problem, we develop an approach to one-shot segmentation of novel WM tracts. Our method follows the existing pretraining/fine-tuning framework that transfers segmentation knowledge from existing to novel WM tracts. First, as there is extremely scarce annotated data in the one-shot setting, we design several different data augmentation strategies so that extensive data augmentation can be performed to obtain extra synthetic training data. The data augmentation strategies are based on image masking and thus applicable to the one-shot setting. Second, to address overfitting and knowledge forgetting in the fine-tuning stage that can be more severe given limited training data, we propose an adaptive knowledge transfer strategy that selects the network weights to be updated. The data augmentation and adaptive knowledge transfer strategies are combined to train the segmentation model. Considering that the different data augmentation strategies can generate synthetic data that contain potentially conflicting information, we apply the data augmentation strategies separately, each leading to a different segmentation model. The results predicted by the different models are fused to produce the final segmentation. We validated our method on two brain dMRI datasets, including a public dataset and an in-house dataset. Different settings were considered for the validation, and the results show that the proposed method improves the one-shot segmentation of novel WM tracts.
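The abstract states that the augmentation strategies are based on image masking, so that extra synthetic training pairs can be produced from a single annotated scan. The snippet below sketches one generic masking augmentation (zeroing random cuboid regions of a 3D volume while keeping the annotation unchanged); the specific strategies in the paper differ, and the patch sizes and counts here are illustrative assumptions.

```python
import numpy as np

def random_cuboid_masking(image, n_masks=4, max_frac=0.25, rng=None):
    """Generic masking augmentation: zero out random cuboids in a 3D volume.

    image: (D, H, W) array. Returns a masked copy; the annotation is left unchanged,
    yielding an extra synthetic (image, label) pair from a single annotated scan.
    """
    rng = np.random.default_rng(rng)
    out = image.copy()
    for _ in range(n_masks):
        size = [max(1, int(s * rng.uniform(0.05, max_frac))) for s in image.shape]
        start = [rng.integers(0, s - sz + 1) for s, sz in zip(image.shape, size)]
        region = tuple(slice(st, st + sz) for st, sz in zip(start, size))
        out[region] = 0.0
    return out

if __name__ == "__main__":
    volume = np.random.rand(64, 64, 64).astype(np.float32)
    augmented = random_cuboid_masking(volume, n_masks=4, rng=0)
    print((augmented == 0).mean())  # fraction of voxels masked out
```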
Affiliation(s)
- Wan Liu: School of Integrated Circuits and Electronics, Beijing Institute of Technology, Beijing, China
- Zhizheng Zhuo: Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Yaou Liu: Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Chuyang Ye: School of Integrated Circuits and Electronics, Beijing Institute of Technology, Beijing, China
7. Liu L, Chang J, Liu Z, Zhang P, Xu X, Shang H. Hybrid Contextual Semantic Network for Accurate Segmentation and Detection of Small-Size Stroke Lesions From MRI. IEEE J Biomed Health Inform 2023; 27:4062-4073. [PMID: 37155390; DOI: 10.1109/jbhi.2023.3273771]
Abstract
Stroke is a cerebrovascular disease with high mortality and disability rates. A stroke typically produces lesions of different sizes, and the accurate segmentation and detection of small-size stroke lesions is closely related to patient prognosis. However, while large lesions are usually identified correctly, small-size lesions are often missed. This article presents a hybrid contextual semantic network (HCSNet) that can accurately and simultaneously segment and detect small-size stroke lesions in magnetic resonance images. HCSNet inherits the advantages of the encoder-decoder architecture and applies a novel hybrid contextual semantic module that generates high-quality contextual semantic features from spatial and channel contextual semantic features through the skip connection layer. Moreover, a mixing-loss function is proposed to optimize HCSNet for unbalanced small-size lesions. HCSNet is trained and evaluated on 2D magnetic resonance images from the Anatomical Tracings of Lesions After Stroke challenge (ATLAS R2.0). Extensive experiments demonstrate that HCSNet outperforms several other state-of-the-art methods in its ability to segment and detect small-size stroke lesions. Visualization and ablation experiments reveal that the hybrid contextual semantic module improves the segmentation and detection performance of HCSNet.
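The abstract proposes a mixing-loss function for unbalanced, small lesions without spelling it out here. A common way to mix losses for that purpose combines a soft Dice term with a focal cross-entropy term, sketched below in PyTorch; the components and weighting are assumptions for illustration, not HCSNet's exact formulation.

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss for a binary lesion mask; logits and target are (B, 1, H, W)."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    denom = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1.0 - (2.0 * inter + eps) / (denom + eps)).mean()

def focal_bce_loss(logits, target, gamma=2.0):
    """Focal binary cross-entropy that down-weights easy background voxels."""
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = torch.exp(-bce)  # probability assigned to the true class
    return ((1.0 - p_t) ** gamma * bce).mean()

def mixed_loss(logits, target, dice_weight=0.5):
    """Illustrative mixing loss: weighted sum of Dice and focal terms."""
    return dice_weight * soft_dice_loss(logits, target) + (1.0 - dice_weight) * focal_bce_loss(logits, target)

if __name__ == "__main__":
    logits = torch.randn(2, 1, 128, 128, requires_grad=True)
    target = (torch.rand(2, 1, 128, 128) > 0.98).float()  # sparse small lesions
    loss = mixed_loss(logits, target)
    loss.backward()
    print(float(loss))
```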
8. Ghazi N, Aarabi MH, Soltanian-Zadeh H. Deep Learning Methods for Identification of White Matter Fiber Tracts: Review of State-of-the-Art and Future Prospective. Neuroinformatics 2023; 21:517-548. [PMID: 37328715; DOI: 10.1007/s12021-023-09636-4]
Abstract
Quantitative analysis of white matter fiber tracts from diffusion Magnetic Resonance Imaging (dMRI) data is of great significance in health and disease. For example, analysis of fiber tracts related to anatomically meaningful fiber bundles is in high demand for pre-surgical and treatment planning, and the surgical outcome depends on accurate segmentation of the desired tracts. Currently, this process is mainly done through time-consuming manual identification performed by neuro-anatomical experts. However, there is broad interest in automating the pipeline so that it is fast, accurate, and easy to apply in clinical settings, and also eliminates intra-reader variability. Following the advancements in medical image analysis using deep learning techniques, there has been growing interest in using these techniques for the task of tract identification as well. Recent reports on this application show that deep learning-based tract identification approaches outperform existing state-of-the-art methods. This paper presents a review of current tract identification approaches based on deep neural networks. First, we review the recent deep learning methods for tract identification. Next, we compare them with respect to their performance, training process, and network properties. Finally, we end with a critical discussion of open challenges and possible directions for future work.
Affiliation(s)
- Nayereh Ghazi: Control and Intelligent Processing Center of Excellence (CIPCE), School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, 14399, Iran
- Mohammad Hadi Aarabi: Department of Neuroscience, University of Padova, Padova, Italy; Padova Neuroscience Center (PNC), University of Padova, Padova, Italy
- Hamid Soltanian-Zadeh: Control and Intelligent Processing Center of Excellence (CIPCE), School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, 14399, Iran; Medical Image Analysis Laboratory, Departments of Radiology and Research Administration, Henry Ford Health System, Detroit, MI, 48202, USA
9. Xue T, Zhang F, Zhang C, Chen Y, Song Y, Golby AJ, Makris N, Rathi Y, Cai W, O'Donnell LJ. Superficial white matter analysis: An efficient point-cloud-based deep learning framework with supervised contrastive learning for consistent tractography parcellation across populations and dMRI acquisitions. Med Image Anal 2023; 85:102759. [PMID: 36706638; PMCID: PMC9975054; DOI: 10.1016/j.media.2023.102759]
Abstract
Diffusion MRI tractography is an advanced imaging technique that enables in vivo mapping of the brain's white matter connections. White matter parcellation classifies tractography streamlines into clusters or anatomically meaningful tracts. It enables quantification and visualization of whole-brain tractography. Currently, most parcellation methods focus on the deep white matter (DWM), whereas fewer methods address the superficial white matter (SWM) due to its complexity. We propose a novel two-stage deep-learning-based framework, Superficial White Matter Analysis (SupWMA), that performs an efficient and consistent parcellation of 198 SWM clusters from whole-brain tractography. A point-cloud-based network is adapted to our SWM parcellation task, and supervised contrastive learning enables more discriminative representations between plausible streamlines and outliers for SWM. We train our model on a large-scale tractography dataset including streamline samples from labeled long- and medium-range (over 40 mm) SWM clusters and anatomically implausible streamline samples, and we perform testing on six independently acquired datasets of different ages and health conditions (including neonates and patients with space-occupying brain tumors). Compared to several state-of-the-art methods, SupWMA obtains highly consistent and accurate SWM parcellation results on all datasets, showing good generalization across the lifespan in health and disease. In addition, the computational speed of SupWMA is much faster than other methods.
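SupWMA's training couples a point-cloud encoder with supervised contrastive learning over streamline embeddings. The following sketch shows a standard supervised contrastive loss applied to a batch of embedded streamlines; the point-cloud encoder, the 198-cluster label set, and the outlier handling used in the paper are not reproduced, and the temperature and batch sizes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.07):
    """Supervised contrastive loss over a batch of streamline embeddings.

    embeddings: (N, D) features from a point-cloud encoder; labels: (N,) cluster ids.
    Embeddings sharing a label are pulled together; others are pushed apart.
    """
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                          # (N, N) pairwise similarities
    self_mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))        # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    mean_log_prob_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_count
    loss = -mean_log_prob_pos
    return loss[pos_mask.any(dim=1)].mean()                # ignore anchors with no positives

if __name__ == "__main__":
    feats = torch.randn(32, 128, requires_grad=True)       # e.g. per-streamline encoder outputs
    cluster_ids = torch.randint(0, 8, (32,))
    loss = supervised_contrastive_loss(feats, cluster_ids)
    loss.backward()
    print(float(loss))
```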
Affiliation(s)
- Tengfei Xue: Brigham and Women's Hospital, Harvard Medical School, Boston, USA; School of Computer Science, University of Sydney, Sydney, Australia
- Fan Zhang: Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Chaoyi Zhang: School of Computer Science, University of Sydney, Sydney, Australia
- Yuqian Chen: Brigham and Women's Hospital, Harvard Medical School, Boston, USA; School of Computer Science, University of Sydney, Sydney, Australia
- Yang Song: School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
- Nikos Makris: Brigham and Women's Hospital, Harvard Medical School, Boston, USA; Center for Morphometric Analysis, Massachusetts General Hospital, Boston, USA
- Yogesh Rathi: Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Weidong Cai: School of Computer Science, University of Sydney, Sydney, Australia
10. Tchetchenian A, Zhu Y, Zhang F, O'Donnell LJ, Song Y, Meijering E. A comparison of manual and automated neural architecture search for white matter tract segmentation. Sci Rep 2023; 13:1617. [PMID: 36709392; PMCID: PMC9884270; DOI: 10.1038/s41598-023-28210-1]
Abstract
Segmentation of white matter tracts in diffusion magnetic resonance images is an important first step in many imaging studies of the brain in health and disease. Similar to medical image segmentation in general, a popular approach to white matter tract segmentation is to use U-Net based artificial neural network architectures. Despite many suggested improvements to the U-Net architecture in recent years, there is a lack of systematic comparison of architectural variants for white matter tract segmentation. In this paper, we evaluate multiple U-Net based architectures specifically for this purpose. We compare the results of these networks to those achieved by our own architectural changes, as well as to new U-Net architectures designed automatically via neural architecture search (NAS). To the best of our knowledge, this is the first study to systematically compare multiple U-Net based architectures for white matter tract segmentation, and the first to use NAS. We find that the recently proposed medical imaging segmentation network UNet3+ slightly outperforms the current state of the art for white matter tract segmentation, and achieves a notably better mean Dice score for segmentation of the fornix (+0.01 and +0.006 mean Dice increase for the left and right fornix, respectively), a tract that the current state-of-the-art model struggles to segment. UNet3+ also outperforms the current state of the art when little training data is available. Additionally, manual architecture search found that a minor segmentation improvement is observed when an additional, deeper layer is added to the U-shape of UNet3+. However, all networks, including those designed via NAS, achieve similar results, suggesting that there may be benefit in exploring networks that deviate from the general U-Net paradigm.
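The comparison above is reported as mean Dice score differences (e.g., +0.01 for the left fornix). For reference, the standard Dice coefficient over binary tract masks is computed as below; this is the textbook definition, not code from the study, and the random masks are only a demonstration.

```python
import numpy as np

def dice_score(pred_mask, true_mask, eps=1e-8):
    """Dice coefficient between two binary volumes: 2*|A and B| / (|A| + |B|)."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    inter = np.logical_and(pred, true).sum()
    return (2.0 * inter + eps) / (pred.sum() + true.sum() + eps)

if __name__ == "__main__":
    a = np.random.rand(64, 64, 64) > 0.5
    b = np.random.rand(64, 64, 64) > 0.5
    print(round(dice_score(a, b), 3))  # roughly 0.5 for independent random masks
```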
Affiliation(s)
- Ari Tchetchenian: Biomedical Image Computing Group, School of Computer Science and Engineering, University of New South Wales (UNSW), Sydney, NSW, Australia
- Yanming Zhu: Biomedical Image Computing Group, School of Computer Science and Engineering, University of New South Wales (UNSW), Sydney, NSW, Australia
- Fan Zhang: Brigham and Women's Hospital, Harvard Medical School, Boston, USA
- Yang Song: Biomedical Image Computing Group, School of Computer Science and Engineering, University of New South Wales (UNSW), Sydney, NSW, Australia
- Erik Meijering: Biomedical Image Computing Group, School of Computer Science and Engineering, University of New South Wales (UNSW), Sydney, NSW, Australia
11. Siegbahn M, Engmér Berglin C, Moreno R. Automatic segmentation of the core of the acoustic radiation in humans. Front Neurol 2022; 13:934650. [PMID: 36212647; PMCID: PMC9539320; DOI: 10.3389/fneur.2022.934650]
Abstract
Introduction: The acoustic radiation is one of the most important white matter fiber bundles of the human auditory system. However, segmenting the acoustic radiation is challenging due to its small size and proximity to several larger fiber bundles. TractSeg is a method that uses a neural network to segment some of the major fiber bundles in the brain. This study aims to train TractSeg to segment the core of the acoustic radiation. Methods: We propose a methodology to automatically extract the acoustic radiation from human connectome data, which is of both high quality and high resolution. The segmentation masks generated by TractSeg for nearby fiber bundles are used to steer the generation of valid streamlines through tractography. Only streamlines connecting Heschl's gyrus and the medial geniculate nucleus were considered. These streamlines are then used to create masks of the core of the acoustic radiation, which are used to train the neural network of TractSeg. The trained network is used to automatically segment the acoustic radiation from unseen images. Results: The trained neural network successfully extracted anatomically plausible masks of the core of the acoustic radiation in human connectome data. We also applied the method to a dataset of 17 patients with unilateral congenital ear canal atresia and 17 age- and gender-paired controls acquired in a clinical setting. The method was able to extract 53/68 acoustic radiations in the dataset acquired with clinical settings. In 14/68 cases, the method generated fragments of the acoustic radiation, and it completely failed in a single case. The performance of the method on patients and controls was similar. Discussion: In most cases, it is possible to segment the core of the acoustic radiation in a few seconds using a pre-trained neural network, even in images acquired with clinical settings.
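The methods summary above keeps only tractography streamlines that connect Heschl's gyrus and the medial geniculate nucleus, constrained by nearby TractSeg masks. A simplified sketch of such endpoint-based filtering is given below; it assumes streamlines are arrays of voxel-space coordinates and that the two regions are available as binary masks, which is a simplification of the actual pipeline.

```python
import numpy as np

def endpoint_in_mask(point, mask):
    """Return True if a voxel-space coordinate falls inside a binary region mask."""
    idx = np.round(point).astype(int)
    if np.any(idx < 0) or np.any(idx >= np.array(mask.shape)):
        return False
    return bool(mask[tuple(idx)])

def filter_connecting_streamlines(streamlines, mask_a, mask_b):
    """Keep streamlines whose two endpoints lie in the two regions (in either order)."""
    kept = []
    for sl in streamlines:
        start, end = sl[0], sl[-1]
        if (endpoint_in_mask(start, mask_a) and endpoint_in_mask(end, mask_b)) or \
           (endpoint_in_mask(start, mask_b) and endpoint_in_mask(end, mask_a)):
            kept.append(sl)
    return kept

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mask_a = np.zeros((64, 64, 64), dtype=bool); mask_a[10:20, 10:20, 10:20] = True
    mask_b = np.zeros((64, 64, 64), dtype=bool); mask_b[40:50, 40:50, 40:50] = True
    streamlines = [rng.uniform(0, 63, size=(50, 3)) for _ in range(100)]  # synthetic streamlines
    print(len(filter_connecting_streamlines(streamlines, mask_a, mask_b)))
```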
Affiliation(s)
- Malin Siegbahn: Division of Ear, Nose and Throat Diseases, Department of Clinical Science, Intervention and Technology, Karolinska Institute, Stockholm, Sweden; Medical Unit Ear, Nose, Throat and Hearing, Karolinska University Hospital, Stockholm, Sweden
- Cecilia Engmér Berglin: Division of Ear, Nose and Throat Diseases, Department of Clinical Science, Intervention and Technology, Karolinska Institute, Stockholm, Sweden; Medical Unit Ear, Nose, Throat and Hearing, Karolinska University Hospital, Stockholm, Sweden
- Rodrigo Moreno (correspondence): Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Stockholm, Sweden
12. Shurrab S, Duwairi R. Self-supervised learning methods and applications in medical imaging analysis: a survey. PeerJ Comput Sci 2022; 8:e1045. [PMID: 36091989; PMCID: PMC9455147; DOI: 10.7717/peerj-cs.1045]
Abstract
The scarcity of high-quality annotated medical imaging datasets is a major problem that hinders machine learning applications in medical imaging analysis and impedes its advancement. Self-supervised learning is a recent training paradigm that enables learning robust representations without the need for human annotation, which can be considered an effective solution to the scarcity of annotated medical data. This article reviews the state-of-the-art research directions in self-supervised learning approaches for image data, with a concentration on their applications in medical imaging analysis. The article covers a set of the most recent self-supervised learning methods from the computer vision field, as they are applicable to medical imaging analysis, and categorizes them as predictive, generative, and contrastive approaches. Moreover, the article covers 40 of the most recent research papers in the field of self-supervised learning in medical imaging analysis, aiming to shed light on recent innovation in the field. Finally, the article concludes with possible future research directions in the field.
Affiliation(s)
- Saeed Shurrab: Department of Computer Information Systems, Jordan University of Science and Technology, Irbid, Jordan
- Rehab Duwairi: Department of Computer Information Systems, Jordan University of Science and Technology, Irbid, Jordan
13. Zhang F, Wells WM, O'Donnell LJ. Deep Diffusion MRI Registration (DDMReg): A Deep Learning Method for Diffusion MRI Registration. IEEE Trans Med Imaging 2022; 41:1454-1467. [PMID: 34968177; PMCID: PMC9273049; DOI: 10.1109/tmi.2021.3139507]
Abstract
In this paper, we present a deep learning method, DDMReg, for accurate registration between diffusion MRI (dMRI) datasets. In dMRI registration, the goal is to spatially align brain anatomical structures while ensuring that local fiber orientations remain consistent with the underlying white matter fiber tract anatomy. DDMReg is a novel method that uses joint whole-brain and tract-specific information for dMRI registration. Based on the successful VoxelMorph framework for image registration, we propose a novel registration architecture that leverages not only whole brain information but also tract-specific fiber orientation information. DDMReg is an unsupervised method for deformable registration between pairs of dMRI datasets: it does not require nonlinearly pre-registered training data or the corresponding deformation fields as ground truth. We perform comparisons with four state-of-the-art registration methods on multiple independently acquired datasets from different populations (including teenagers, young and elderly adults) and different imaging protocols and scanners. We evaluate the registration performance by assessing the ability to align anatomically corresponding brain structures and ensure fiber spatial agreement between different subjects after registration. Experimental results show that DDMReg obtains significantly improved registration performance compared to the state-of-the-art methods. Importantly, we demonstrate successful generalization of DDMReg to dMRI data from different populations with varying ages and acquired using different acquisition protocols and different scanners.
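DDMReg builds on the VoxelMorph framework, which is trained without ground-truth deformations by combining an image-similarity term with a smoothness penalty on the predicted displacement field. The PyTorch sketch below shows that generic unsupervised objective for a 3D displacement field; DDMReg's tract-specific inputs, architecture, and actual similarity measure are not represented, and the MSE term and weighting are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def gradient_smoothness_loss(flow):
    """Penalize spatial gradients of a displacement field; flow: (B, 3, D, H, W)."""
    dz = (flow[:, :, 1:, :, :] - flow[:, :, :-1, :, :]).pow(2).mean()
    dy = (flow[:, :, :, 1:, :] - flow[:, :, :, :-1, :]).pow(2).mean()
    dx = (flow[:, :, :, :, 1:] - flow[:, :, :, :, :-1]).pow(2).mean()
    return dz + dy + dx

def unsupervised_registration_loss(warped_moving, fixed, flow, smooth_weight=1.0):
    """VoxelMorph-style objective: image similarity (MSE here) plus smoothness penalty."""
    similarity = F.mse_loss(warped_moving, fixed)
    return similarity + smooth_weight * gradient_smoothness_loss(flow)

if __name__ == "__main__":
    fixed = torch.rand(1, 1, 32, 32, 32)
    warped = torch.rand(1, 1, 32, 32, 32, requires_grad=True)   # moving image after warping
    flow = torch.zeros(1, 3, 32, 32, 32, requires_grad=True)    # predicted displacement field
    loss = unsupervised_registration_loss(warped, fixed, flow)
    loss.backward()
    print(float(loss))
```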
14. Lu Q, Liu W, Zhuo Z, Li Y, Duan Y, Yu P, Qu L, Ye C, Liu Y. A Transfer Learning Approach to Few-shot Segmentation of Novel White Matter Tracts. Med Image Anal 2022; 79:102454. [DOI: 10.1016/j.media.2022.102454]
15. Hansen S, Gautam S, Jenssen R, Kampffmeyer M. Anomaly Detection-Inspired Few-Shot Medical Image Segmentation Through Self-Supervision With Supervoxels. Med Image Anal 2022; 78:102385. [DOI: 10.1016/j.media.2022.102385]
16. Volumetric Segmentation of White Matter Tracts with Label Embedding. Neuroimage 2022; 250:118934. [PMID: 35091078; DOI: 10.1016/j.neuroimage.2022.118934]
Abstract
Convolutional neural networks have achieved state-of-the-art performance for white matter (WM) tract segmentation based on diffusion magnetic resonance imaging (dMRI). However, the segmentation can still be difficult for challenging WM tracts with thin bodies or complicated shapes; the segmentation is even more problematic in challenging scenarios with reduced data quality or domain shift between training and test data, which can easily be encountered in clinical settings. In this work, we seek to improve the segmentation of WM tracts, especially for challenging WM tracts in challenging scenarios. In particular, our method is based on volumetric WM tract segmentation, where voxels are directly labeled without performing tractography. To improve the segmentation, we exploit the fact that different WM tracts can cross or overlap and revise the network design accordingly. Specifically, because multiple tracts can co-exist in a voxel, we hypothesize that the different tract labels can be correlated. The tract labels at a single voxel are concatenated as a label vector, the length of which is the number of tract labels. Due to the tract correlation, this label vector can be projected into a lower-dimensional space, referred to as the embedded space, for each voxel, which allows the segmentation network to solve a simpler problem. By predicting the coordinate in the embedded space for the tracts at each voxel and subsequently mapping the coordinate to the label vector with a reconstruction module, the segmentation result can be achieved. To facilitate the learning of the embedded space, an auxiliary label reconstruction loss is integrated with the segmentation accuracy loss during network training, and network training and inference are end-to-end. Our method was validated on two dMRI datasets under various settings. The results show that the proposed method improves the accuracy of WM tract segmentation, and the improvement is more prominent for challenging tracts in challenging scenarios.
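The abstract describes concatenating per-voxel tract labels into a label vector, projecting it into a lower-dimensional embedded space, and reconstructing the label vector from the predicted coordinate with an auxiliary reconstruction loss. A minimal sketch of that projection/reconstruction idea is below; the dimensions, the linear form of the mapping, and the loss are assumptions for illustration rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class LabelEmbedding(nn.Module):
    """Project a per-voxel multi-tract label vector into a low-dimensional space
    and reconstruct it, mirroring the auxiliary reconstruction objective."""

    def __init__(self, n_tracts=72, embed_dim=16):
        super().__init__()
        self.encode = nn.Linear(n_tracts, embed_dim)   # label vector -> embedded coordinate
        self.decode = nn.Linear(embed_dim, n_tracts)   # embedded coordinate -> label vector

    def forward(self, label_vec):
        coord = self.encode(label_vec)
        recon = torch.sigmoid(self.decode(coord))
        return coord, recon

if __name__ == "__main__":
    model = LabelEmbedding(n_tracts=72, embed_dim=16)   # tract count is a placeholder
    labels = (torch.rand(1024, 72) > 0.9).float()       # voxels x tract labels (tracts may overlap)
    coord, recon = model(labels)
    recon_loss = nn.functional.binary_cross_entropy(recon, labels)
    # In training, a segmentation network would predict `coord` per voxel and the decoder
    # would map it back to tract labels; recon_loss acts as the auxiliary term.
    recon_loss.backward()
    print(coord.shape, float(recon_loss))
```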
17. CST: A Multitask Learning Framework for Colorectal Cancer Region Mining Based on Transformer. Biomed Res Int 2021; 2021:6207964. [PMID: 34671677; PMCID: PMC8523251; DOI: 10.1155/2021/6207964]
Abstract
Colorectal cancer remains a cancer with a high death rate; from the clinical point of view, delineating the tumour region is critical for doctors. However, as data accumulate, this task takes a great deal of time and labor, with large variance between different doctors. With the development of computer vision, detection and segmentation of colorectal cancer regions from CT or MRI image series have been a great challenge over the past decades, and there is still great demand for automatic diagnosis. In this paper, we propose a novel transfer learning protocol, called CST, which is a unified framework for colorectal cancer region detection and segmentation based on the transformer model and performs cancer region detection and segmentation jointly. To achieve higher detection accuracy, we incorporate an autoencoder-based image-level decision approach that leverages the image-level decision for a cancer slice. We also compared our framework with one-stage and two-stage object detection methods; the results show that our proposed method achieves better results on detection and segmentation tasks. This proposed framework offers another pathway for colorectal cancer screening by way of artificial intelligence.