1.
Luetkens JA, Kravchenko D. Advancing MRI Technology with Deep Learning Super Resolution Reconstruction. Acad Radiol 2024:S1076-6332(24)00609-3. [PMID: 39232913; DOI: 10.1016/j.acra.2024.08.046]
Affiliation(s)
- Julian A Luetkens
- Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Bonn, Germany (J.A.L., D.K.); Quantitative Imaging Laboratory Bonn, Bonn, Germany (J.A.L., D.K.).
- Dmitrij Kravchenko
- Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Bonn, Germany (J.A.L., D.K.); Quantitative Imaging Laboratory Bonn, Bonn, Germany (J.A.L., D.K.); Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, South Carolina, USA (D.K.)
2.
Wang HC, Chen CS, Kuo CC, Huang TY, Kuo KH, Chuang TC, Lin YR, Chung HW. Comparative assessment of established and deep learning-based segmentation methods for hippocampal volume estimation in brain magnetic resonance imaging analysis. NMR Biomed 2024; 37:e5169. [PMID: 38712667; DOI: 10.1002/nbm.5169]
Abstract
In this study, our objective was to assess the performance of two deep learning-based hippocampal segmentation methods, SynthSeg and TigerBx, which are readily available to the public. We contrasted their performance with that of two established techniques, FreeSurfer-Aseg and FSL-FIRST, using three-dimensional T1-weighted MRI scans (n = 1447) procured from public databases. Our evaluation focused on the accuracy and reproducibility of these tools in estimating hippocampal volume. The findings suggest that both SynthSeg and TigerBx are on a par with Aseg and FIRST in terms of segmentation accuracy and reproducibility, but offer a significant advantage in processing speed, generating results in less than 1 min compared with several minutes to hours for the latter tools. In terms of Alzheimer's disease classification based on the hippocampal atrophy rate, SynthSeg and TigerBx exhibited superior performance. In conclusion, we evaluated the capabilities of two deep learning-based segmentation techniques. The results underscore their potential value in clinical and research environments, particularly when investigating neurological conditions associated with hippocampal structures.
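All four tools compared here ultimately report hippocampal volume as the number of labeled voxels scaled by the voxel size. A minimal sketch of that final step (illustrative only, not code from the paper; the function name and toy mask are invented):

```python
import numpy as np

def segmentation_volume_ml(mask, voxel_dims_mm):
    """Volume of a binary segmentation mask in millilitres.

    mask: 3D array whose nonzero voxels belong to the structure (e.g. hippocampus).
    voxel_dims_mm: voxel spacing (dx, dy, dz) in mm, typically read from the image header.
    """
    voxel_volume_mm3 = float(np.prod(voxel_dims_mm))
    return np.count_nonzero(mask) * voxel_volume_mm3 / 1000.0  # 1 mL = 1000 mm^3

# Toy example: 3500 foreground voxels at 1 mm isotropic resolution -> 3.5 mL,
# a plausible order of magnitude for a single adult hippocampus.
toy_mask = np.zeros((100, 100, 100), dtype=np.uint8)
toy_mask[30:65, 40:50, 40:50] = 1  # 35 * 10 * 10 = 3500 voxels
print(segmentation_volume_ml(toy_mask, (1.0, 1.0, 1.0)))  # 3.5
```

Reproducibility comparisons between tools then reduce to comparing such volumes across repeated scans of the same subject.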
Affiliation(s)
- Hsi-Chun Wang
- Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Chia-Sho Chen
- Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Chung-Chin Kuo
- Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Teng-Yi Huang
- Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Kuei-Hong Kuo
- Division of Medical Image, Far Eastern Memorial Hospital, New Taipei City, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Tzu-Chao Chuang
- Department of Electrical Engineering, National Sun Yat-Sen University, Kaohsiung, Taiwan
- Yi-Ru Lin
- Department of Electronic and Computer Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Hsiao-Wen Chung
- Department of Electrical Engineering, National Taiwan University, Taipei, Taiwan
3.
Liu T, Zhang ZH, Zhou QH, Cheng QZ, Yang Y, Li JS, Zhang XM, Zhang JQ. MI-DenseCFNet: deep learning-based multimodal diagnosis models for Aureus and Aspergillus pneumonia. Eur Radiol 2024; 34:5066-5076. [PMID: 38231392; PMCID: PMC11254966; DOI: 10.1007/s00330-023-10578-3]
Abstract
OBJECTIVE To build a fused diagnostic model, multi-input DenseNet fused with clinical features (MI-DenseCFNet), for discriminating between Staphylococcus aureus pneumonia (SAP) and Aspergillus pneumonia (ASP), and to evaluate the contribution of each clinical feature to distinguishing these two types of pneumonia using a random forest dichotomous diagnosis model, with the aim of enhancing diagnostic accuracy and efficiency. METHODS In this study, 60 patients with clinically confirmed SAP and ASP, who were admitted to four large tertiary hospitals in Kunming, China, were included. Thoracic high-resolution CT lung windows of all patients were extracted from the picture archiving and communication system, and the corresponding clinical data of each patient were collected. RESULTS The MI-DenseCFNet diagnosis model achieved an area under the curve (AUC) of 0.92 on the internal validation set and 0.83 on the external validation set. The model required only 10.24 s to generate categorical diagnoses for 20 cases. The model's accuracy was 78%, compared with 75%, 60%, and 40% for high-, mid-, and low-ranking radiologists, respectively. Eleven significant clinical features were screened by the random forest dichotomous diagnosis model. CONCLUSION The MI-DenseCFNet multimodal diagnosis model can effectively diagnose SAP and ASP, and its diagnostic performance significantly exceeds that of junior radiologists. The 11 important clinical features screened by the random forest dichotomous diagnostic model provide a reference for clinicians. CLINICAL RELEVANCE STATEMENT MI-DenseCFNet could provide diagnostic assistance to primary hospitals that lack senior radiologists, enabling patients with suspected infections such as Staphylococcus aureus pneumonia or Aspergillus pneumonia to receive a quicker diagnosis and reducing antibiotic overuse.
KEY POINTS • MI-DenseCFNet combines deep learning neural networks with crucial clinical features to discern between Staphylococcus aureus pneumonia and Aspergillus pneumonia. • The comprehensive group had an area under the curve of 0.92, surpassing the proficiency of junior radiologists. • This model can enhance a primary radiologist's diagnostic capacity.
Affiliation(s)
- Tong Liu
- The Second Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Kunming Medical University, No. 295, Xichang Road, Wuhua District, Kunming, Yunnan, 650032, People's Republic of China
- Zheng-Hua Zhang
- Medical Imaging Department, The First Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, 650032, People's Republic of China
- Qi-Hao Zhou
- School of Information, Yunnan University, Kunming, Yunnan, 650032, People's Republic of China
- Qing-Zhao Cheng
- The Second Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Kunming Medical University, No. 295, Xichang Road, Wuhua District, Kunming, Yunnan, 650032, People's Republic of China
- Yue Yang
- The Second Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Kunming Medical University, No. 295, Xichang Road, Wuhua District, Kunming, Yunnan, 650032, People's Republic of China
- Jia-Shu Li
- The Second Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Kunming Medical University, No. 295, Xichang Road, Wuhua District, Kunming, Yunnan, 650032, People's Republic of China
- Xue-Mei Zhang
- The Second Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Kunming Medical University, No. 295, Xichang Road, Wuhua District, Kunming, Yunnan, 650032, People's Republic of China
- Jian-Qing Zhang
- The Second Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Kunming Medical University, No. 295, Xichang Road, Wuhua District, Kunming, Yunnan, 650032, People's Republic of China.
4.
Cao Y, le Yu X, Yao H, Jin Y, Lin K, Shi C, Cheng H, Lin Z, Jiang J, Gao H, Shen M. ScLNet: A cornea with scleral lens OCT layers segmentation dataset and new multi-task model. Heliyon 2024; 10:e33911. [PMID: 39071564; PMCID: PMC11283045; DOI: 10.1016/j.heliyon.2024.e33911]
Abstract
Objective To develop deep learning methods with high accuracy for segmenting irregular corneas and detecting the tear fluid reservoir (TFR) boundary under the scleral lens, and to provide a publicly available scleral lens OCT dataset, including manually labeled layer masks, for training and validating segmentation algorithms. This study introduces ScLNet, a dataset of cornea with scleral lens (ScL) optical coherence tomography (OCT) images with layer annotations, together with a multi-task network designed to achieve rapid, accurate, automated segmentation of scleral lenses on regular and irregular corneas. Methods We created a dataset comprising 31,360 OCT images with scleral lens annotations. The network architecture includes an encoder with multi-scale input and a context coding layer, along with two decoders for specific tasks. The primary task predicts the ScL, TFR, and cornea regions, while the auxiliary task, which predicts the boundaries of the ScL, TFR, and cornea, enhances feature extraction for the main task. Segmentation results were compared with state-of-the-art methods and evaluated using the Dice similarity coefficient (DSC), intersection over union (IoU), Matthews correlation coefficient (MCC), precision, and Hausdorff distance (HD). Results ScLNet achieves 98.22% DSC, 96.50% IoU, 98.13% MCC, 98.35% precision, and 3.6840 HD (in pixels) in segmenting the ScL; 97.78% DSC, 95.66% IoU, 97.71% MCC, 97.70% precision, and 3.7838 HD (in pixels) in segmenting the TFR; and 99.22% DSC, 98.45% IoU, 99.15% MCC, 99.14% precision, and 3.5355 HD (in pixels) in segmenting the cornea. The layer interfaces recognized by ScLNet closely align with expert annotations, as evidenced by high IoU scores; boundary metrics further confirm its effectiveness. Conclusion We constructed a dataset of corneal OCT images acquired during ScL wear, including patients with regular and irregular corneas. The proposed ScLNet achieves high accuracy in extracting ScL, TFR, and corneal layer masks and boundaries from OCT images of the dataset.
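The region-overlap metrics reported above have standard definitions for binary masks; a minimal sketch (illustrative, not the authors' evaluation code):

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Dice similarity coefficient (DSC) and intersection over union (IoU)
    for two binary masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dsc = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / union
    return dsc, iou

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 1, 0], [0, 0, 1]])
dsc, iou = overlap_metrics(pred, gt)  # intersection 2, union 4 -> DSC 2/3, IoU 1/2
```

Note that the two metrics are monotonically related (DSC = 2·IoU / (1 + IoU)), which is why segmentation papers often quote both alongside boundary metrics such as the Hausdorff distance.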
Affiliation(s)
- Yang Cao
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital and School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou, 325000, China
- Xiang le Yu
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital and School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou, 325000, China
- Han Yao
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital and School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou, 325000, China
- Yue Jin
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital and School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou, 325000, China
- Kuangqing Lin
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital and School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou, 325000, China
- Ce Shi
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital and School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou, 325000, China
- Hongling Cheng
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital and School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou, 325000, China
- Zhiyang Lin
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital and School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou, 325000, China
- Jun Jiang
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital and School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou, 325000, China
- Hebei Gao
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital and School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou, 325000, China
- School of Artificial Intelligence, Wenzhou Polytechnic, Wenzhou, 325035, China
- Meixiao Shen
- Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital and School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou, 325000, China
5.
Nagawa K, Hara Y, Inoue K, Yamagishi Y, Koyama M, Shimizu H, Matsuura K, Osawa I, Inoue T, Okada H, Kobayashi N, Kozawa E. Three-dimensional convolutional neural network-based classification of chronic kidney disease severity using kidney MRI. Sci Rep 2024; 14:15775. [PMID: 38982238; PMCID: PMC11233566; DOI: 10.1038/s41598-024-66814-3]
Abstract
A three-dimensional convolutional neural network model was developed to classify the severity of chronic kidney disease (CKD) using magnetic resonance imaging (MRI) Dixon-based T1-weighted in-phase (IP)/opposed-phase (OP)/water-only (WO) imaging. Seventy-three patients with severe renal dysfunction (estimated glomerular filtration rate [eGFR] < 30 mL/min/1.73 m2, CKD stage G4-5); 172 with moderate renal dysfunction (30 ≤ eGFR < 60 mL/min/1.73 m2, CKD stage G3a/b); and 76 with mild renal dysfunction (eGFR ≥ 60 mL/min/1.73 m2, CKD stage G1-2) participated in this study. The model was applied to the right, left, and both kidneys, as well as to each imaging method (T1-weighted IP/OP/WO images). The best performance was obtained when using bilateral kidneys and IP images, with an accuracy of 0.862 ± 0.036. The overall accuracy was better for the bilateral kidney models than for the unilateral kidney models. Our deep learning approach using kidney MRI can be applied to classify patients with CKD based on the severity of kidney disease.
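The three severity classes follow directly from the eGFR cutoffs quoted above; a minimal sketch of the grouping rule (the function name is ours, not from the paper):

```python
def ckd_severity_group(egfr):
    """Map eGFR (mL/min/1.73 m2) to the study's three severity groups."""
    if egfr < 30:
        return "severe (CKD stage G4-5)"
    if egfr < 60:
        return "moderate (CKD stage G3a/b)"
    return "mild (CKD stage G1-2)"

print(ckd_severity_group(25))  # severe (CKD stage G4-5)
print(ckd_severity_group(45))  # moderate (CKD stage G3a/b)
print(ckd_severity_group(75))  # mild (CKD stage G1-2)
```

These labels are what the 3D CNN is trained to predict directly from the Dixon T1-weighted images, without access to the laboratory eGFR value.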
Affiliation(s)
- Keita Nagawa
- Department of Radiology, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan
- Yuki Hara
- Department of Radiology, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan
- Kaiji Inoue
- Department of Radiology, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan.
- Yosuke Yamagishi
- Department of Radiology, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan
- Masahiro Koyama
- Department of Radiology, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan
- Hirokazu Shimizu
- Department of Radiology, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan
- Koichiro Matsuura
- Department of Radiology, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan
- Iichiro Osawa
- Department of Radiology, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan
- Tsutomu Inoue
- Department of Nephrology, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan
- Hirokazu Okada
- Department of Nephrology, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan
- Naoki Kobayashi
- School of Biomedical Engineering, Faculty of Health and Medical Care, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan
- Eito Kozawa
- Department of Radiology, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan
6.
Zhang L, Ning G, Liang H, Han B, Liao H. One-shot neuroanatomy segmentation through online data augmentation and confidence aware pseudo label. Med Image Anal 2024; 95:103182. [PMID: 38688039; DOI: 10.1016/j.media.2024.103182]
Abstract
Recently, deep learning-based brain segmentation methods have achieved great success. However, most approaches focus on supervised segmentation, which requires many high-quality labeled images. In this paper, we pay attention to one-shot segmentation, aiming to learn from one labeled image and a few unlabeled images. We propose an end-to-end unified network that jointly performs deformation modeling and segmentation. Our network consists of a shared encoder, a deformation modeling head, and a segmentation head. In the training phase, the atlas and unlabeled images are input to the encoder to obtain multi-scale features. The features are then fed to the multi-scale deformation modeling module to estimate the atlas-to-image deformation field. The deformation modeling module implements the estimation at the feature level in a coarse-to-fine manner. Then, we employ the field to generate the augmented image pair through online data augmentation. We do not apply any appearance transformations because the shared encoder can capture appearance variations. Finally, we adopt a supervised segmentation loss for the augmented image. Considering that the unlabeled images still contain rich information, we introduce confidence-aware pseudo labels for them to further boost the segmentation performance. We validate our network on three benchmark datasets. Experimental results demonstrate that our network significantly outperforms other deep single-atlas-based and traditional multi-atlas-based segmentation methods. Notably, the second dataset was collected from multiple centers, and our network still achieves promising segmentation performance on both the seen and unseen test sets, revealing its robustness. The source code will be available at https://github.com/zhangliutong/brainseg.
Affiliation(s)
- Liutong Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Guochen Ning
- School of Clinical Medicine, Tsinghua University, Beijing, China
- Hanying Liang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Boxuan Han
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Hongen Liao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China; School of Biomedical Engineering, Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China.
7.
Rudroff T, Rainio O, Klén R. AI for the prediction of early stages of Alzheimer's disease from neuroimaging biomarkers - A narrative review of a growing field. Neurol Sci 2024:10.1007/s10072-024-07649-8. [PMID: 38866971; DOI: 10.1007/s10072-024-07649-8]
Abstract
OBJECTIVES The objectives of this narrative review are to summarize the current state of AI applications in neuroimaging for early Alzheimer's disease (AD) prediction and to highlight the potential of AI techniques in improving early AD diagnosis, prognosis, and management. METHODS We conducted a narrative review of studies using AI techniques applied to neuroimaging data for early AD prediction. We examined single-modality studies using structural MRI and PET imaging, as well as multi-modality studies integrating multiple neuroimaging techniques and biomarkers. Furthermore, we reviewed longitudinal studies that model AD progression and identify individuals at risk of rapid decline. RESULTS Single-modality studies using structural MRI and PET imaging have demonstrated high accuracy in classifying AD and predicting progression from mild cognitive impairment (MCI) to AD. Multi-modality studies, integrating multiple neuroimaging techniques and biomarkers, have shown improved performance and robustness compared to single-modality approaches. Longitudinal studies have highlighted the value of AI in modeling AD progression and identifying individuals at risk of rapid decline. However, challenges remain in data standardization, model interpretability, generalizability, clinical integration, and ethical considerations. CONCLUSION AI techniques applied to neuroimaging data have the potential to improve early AD diagnosis, prognosis, and management. Addressing challenges related to data standardization, model interpretability, generalizability, clinical integration, and ethical considerations is crucial for realizing the full potential of AI in AD research and clinical practice. Collaborative efforts among researchers, clinicians, and regulatory agencies are needed to develop reliable, robust, and ethical AI tools that can benefit AD patients and society.
Affiliation(s)
- Thorsten Rudroff
- Department of Health and Human Physiology, University of Iowa, Iowa City, IA, 52242, USA.
- Department of Neurology, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA.
- Oona Rainio
- Turku PET Centre, University of Turku and Turku University Hospital, Turku, Finland
- Riku Klén
- Turku PET Centre, University of Turku and Turku University Hospital, Turku, Finland
8.
Smolders L, De Baene W, Rutten GJ, van der Hofstad R, Florack L. Can structure predict function at individual level in the human connectome? Brain Struct Funct 2024; 229:1209-1223. [PMID: 38656375; PMCID: PMC11147846; DOI: 10.1007/s00429-024-02796-2]
Abstract
Several studies predicting Functional Connectivity (FC) from Structural Connectivity (SC) at the individual level have been published in recent years, each promising increased performance and utility. We investigated three of these studies, analyzing whether the results truly represent a meaningful individual-level mapping from SC to FC. Using data from the Human Connectome Project shared across the three studies, we constructed a predictor by averaging FC of training data and analyzed its performance in the same way. In each case, we found that group average FC is an equivalent or better predictor of individual FC than the predictive models in terms of raw prediction performance. Furthermore, we showed that additional analyses performed by the authors of the three studies, in which they attempt to show that their predicted FC has value beyond raw prediction performance, could also be reproduced using the group average FC predictor. This makes it unclear whether any of the three methods represent a meaningful individual-level predictive model. We conclude that either the methods are not appropriate for the data, that the sample size is too small, or that the data does not contain sufficient information to learn a mapping from SC to FC. We advise future individual-level studies to explicitly report results in comparison to the performance of the group average, and carefully demonstrate that their predictions contain meaningful individual-level information. Finally, we believe that investigating alternatives for the construction of SC and FC may improve the chances of developing a meaningful individual-level mapping from SC to FC.
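The group-average baseline the authors advocate reporting is simple to state: predict every held-out subject's FC matrix with the element-wise mean over training subjects. A hedged sketch (array shapes and upper-triangle Pearson-correlation scoring are our assumptions, though correlating vectorized FC is a common choice):

```python
import numpy as np

def group_average_fc(train_fc):
    """Baseline 'prediction': the element-wise mean FC matrix over training subjects.

    train_fc: array of shape (n_subjects, n_regions, n_regions).
    """
    return train_fc.mean(axis=0)

def fc_similarity(pred_fc, subject_fc):
    """Pearson correlation between the vectorized upper triangles of two FC matrices."""
    iu = np.triu_indices_from(subject_fc, k=1)
    return float(np.corrcoef(pred_fc[iu], subject_fc[iu])[0, 1])

# Toy usage: an individual-level SC-to-FC model should beat this baseline's
# score on held-out subjects before its predictions are considered meaningful.
rng = np.random.default_rng(0)
train = rng.standard_normal((20, 10, 10))
train = (train + train.transpose(0, 2, 1)) / 2  # symmetrize like FC matrices
baseline = group_average_fc(train)
```

Because the baseline ignores each subject's SC entirely, any model scoring no better than it cannot be said to use individual structural information.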
Affiliation(s)
- Lars Smolders
- Eindhoven University of Technology, Department of Mathematics and Computer Science, PO Box 513, Eindhoven, 5600 MB, Netherlands.
- Elisabeth-TweeSteden Hospital, Department of Neurosurgery, Hilvarenbeekseweg 60, Tilburg, 5022 GC, The Netherlands.
- Wouter De Baene
- Tilburg University, Department of Cognitive Neuropsychology, Warandelaan 2, Tilburg, 5000 LE, Netherlands
- Geert-Jan Rutten
- Elisabeth-TweeSteden Hospital, Department of Neurosurgery, Hilvarenbeekseweg 60, Tilburg, 5022 GC, The Netherlands
- Remco van der Hofstad
- Eindhoven University of Technology, Department of Mathematics and Computer Science, PO Box 513, Eindhoven, 5600 MB, Netherlands
- Luc Florack
- Eindhoven University of Technology, Department of Mathematics and Computer Science, PO Box 513, Eindhoven, 5600 MB, Netherlands
9.
Zhang X, Tian L, Guo S, Liu Y. STF-Net: sparsification transformer coding guided network for subcortical brain structure segmentation. Biomed Tech (Berl) 2024:bmt-2023-0121. [PMID: 38712825; DOI: 10.1515/bmt-2023-0121]
Abstract
Subcortical brain structure segmentation plays an important role in neuroimaging diagnosis and has become a basis of computer-aided diagnosis. Because of the blurred boundaries and complex shapes of subcortical brain structures, labeling them by hand is a time-consuming and subjective task, greatly limiting their potential for clinical application. This paper therefore proposes the sparsification transformer (STF) module for accurate brain structure segmentation. The self-attention mechanism is used to establish global dependencies, efficiently extracting the global information of the feature map with low computational complexity. A shallow network is also used to supply low-level detail information through the locality of convolutional operations, promoting the representation capability of the network. In addition, a hybrid residual dilated convolution (HRDC) module is introduced at the bottom layer of the network to extend the receptive field and extract multi-scale contextual information. Meanwhile, an octave convolution edge feature extraction (OCT) module is applied at the skip connections of the network to pay more attention to the edge features of brain structures. The proposed network is trained with a hybrid loss function. Experimental evaluation on two public datasets, IBSR and MALC, shows outstanding performance in terms of objective and subjective quality.
Affiliation(s)
- Xiufeng Zhang
- School of Mechanical and Electrical Engineering, Dalian Minzu University, Dalian, Liaoning, China
- Lingzhuo Tian
- School of Mechanical and Electrical Engineering, Dalian Minzu University, Dalian, Liaoning, China
- Shengjin Guo
- School of Mechanical and Electrical Engineering, Dalian Minzu University, Dalian, Liaoning, China
- Yansong Liu
- School of Mechanical and Electrical Engineering, Dalian Minzu University, Dalian, Liaoning, China
10.
Nigro S, Filardi M, Tafuri B, Nicolardi M, De Blasi R, Giugno A, Gnoni V, Milella G, Urso D, Zoccolella S, Logroscino G. Deep Learning-based Approach for Brainstem and Ventricular MR Planimetry: Application in Patients with Progressive Supranuclear Palsy. Radiol Artif Intell 2024; 6:e230151. [PMID: 38506619; PMCID: PMC11140505; DOI: 10.1148/ryai.230151]
Abstract
Purpose To develop a fast and fully automated deep learning (DL)-based method for the MRI planimetric segmentation and measurement of the brainstem and ventricular structures most affected in patients with progressive supranuclear palsy (PSP). Materials and Methods In this retrospective study, T1-weighted MR images in healthy controls (n = 84) were used to train DL models for segmenting the midbrain, pons, middle cerebellar peduncle (MCP), superior cerebellar peduncle (SCP), third ventricle, and frontal horns (FHs). Internal, external, and clinical test datasets (n = 305) were used to assess segmentation model reliability. DL masks from test datasets were used to automatically extract midbrain and pons areas and the width of the MCP, SCP, third ventricle, and FHs. Automated measurements were compared with those manually performed by an expert radiologist. Finally, these measures were combined to calculate the midbrain to pons area ratio, MR parkinsonism index (MRPI), and MRPI 2.0, which were used to differentiate patients with PSP (n = 71) from those with Parkinson disease (PD) (n = 129). Results Dice coefficients above 0.85 were found for all brain regions when comparing manual and DL-based segmentations. A strong correlation was observed between automated and manual measurements (Spearman ρ > 0.80, P < .001). DL-based measurements showed excellent performance in differentiating patients with PSP from those with PD, with an area under the receiver operating characteristic curve above 0.92. Conclusion The automated approach successfully segmented and measured the brainstem and ventricular structures. DL-based models may represent a useful approach to support the diagnosis of PSP and potentially other conditions associated with brainstem and ventricular alterations. Keywords: MR Imaging, Brain/Brain Stem, Segmentation, Quantification, Diagnosis, Convolutional Neural Network. Supplemental material is available for this article.
© RSNA, 2024. See also the commentary by Mohajer in this issue.
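The composite indices combined from these planimetric measurements are, as commonly defined in the literature (our restatement, not a formula taken from this article): MRPI = (pons area / midbrain area) × (MCP width / SCP width), and MRPI 2.0, which further multiplies MRPI by the third-ventricle-to-frontal-horn width ratio. A minimal sketch:

```python
def mrpi(pons_area, midbrain_area, mcp_width, scp_width):
    """MR parkinsonism index: (P/M) * (MCP/SCP); elevated values suggest PSP."""
    return (pons_area / midbrain_area) * (mcp_width / scp_width)

def mrpi_2(pons_area, midbrain_area, mcp_width, scp_width,
           third_ventricle_width, frontal_horn_width):
    """MRPI 2.0: MRPI scaled by the third-ventricle / frontal-horn width ratio."""
    return (mrpi(pons_area, midbrain_area, mcp_width, scp_width)
            * (third_ventricle_width / frontal_horn_width))

# Toy values (areas in mm^2, widths in mm; illustrative only, not patient data):
print(mrpi(500.0, 100.0, 8.0, 2.0))               # 20.0
print(mrpi_2(500.0, 100.0, 8.0, 2.0, 6.0, 30.0))  # 4.0
```

In the pipeline described above, the inputs to these functions come from the DL segmentation masks rather than manual calipers, which is what makes the index computation fully automated.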
Collapse
Affiliation(s)
- Salvatore Nigro
- From the Center for Neurodegenerative Diseases and the Aging Brain, University of Bari Aldo Moro, Pia Fondazione Cardinale G. Panico, 73039 Tricase, Italy (S.N., M.F., B.T., A.G., V.G., D.U., G.L.); Department of Translational Biomedicine and Neuroscience (DiBraiN), University of Bari Aldo Moro, Bari, Italy (M.F., B.T., G.M., G.L.); Department of Radiology, Pia Fondazione Cardinale G. Panico, Tricase, Italy (M.N., R.D.B.); Department of Neurosciences, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, England (D.U.); and Operative Unit of Neurology, San Paolo Hospital, ASL Bari, Bari, Italy (S.Z.)
| | - Marco Filardi
| | - Benedetta Tafuri
| | - Martina Nicolardi
| | - Roberto De Blasi
| | - Alessia Giugno
| | - Valentina Gnoni
| | - Giammarco Milella
| | - Daniele Urso
| | - Stefano Zoccolella
| | - Giancarlo Logroscino
| |
|
11
|
Bongratz F, Rickmann AM, Wachinger C. Neural deformation fields for template-based reconstruction of cortical surfaces from MRI. Med Image Anal 2024; 93:103093. [PMID: 38281362 DOI: 10.1016/j.media.2024.103093] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2023] [Revised: 12/19/2023] [Accepted: 01/22/2024] [Indexed: 01/30/2024]
Abstract
The reconstruction of cortical surfaces is a prerequisite for quantitative analyses of the cerebral cortex in magnetic resonance imaging (MRI). Existing segmentation-based methods separate the surface registration from the surface extraction, which is computationally inefficient and prone to distortions. We introduce Vox2Cortex-Flow (V2C-Flow), a deep mesh-deformation technique that learns a deformation field from a brain template to the cortical surfaces of an MRI scan. To this end, we present a geometric neural network that models the deformation-describing ordinary differential equation in a continuous manner. The network architecture comprises convolutional and graph-convolutional layers, which allows it to work with images and meshes at the same time. V2C-Flow is not only very fast, requiring less than two seconds to infer all four cortical surfaces, but also establishes vertex-wise correspondences to the template during reconstruction. In addition, V2C-Flow is the first approach for cortex reconstruction that models white matter and pial surfaces jointly, therefore avoiding intersections between them. Our comprehensive experiments on internal and external test data demonstrate that V2C-Flow results in cortical surfaces that are state-of-the-art in terms of accuracy. Moreover, we show that the established correspondences are more consistent than in FreeSurfer and that they can directly be utilized for cortex parcellation and group analyses of cortical thickness.
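The continuous, deformation-describing ODE can be illustrated with a toy forward-Euler integration of a velocity field applied to template vertices. In V2C-Flow the velocity field is predicted by the graph-convolutional network; the hand-written uniform field below is only a stand-in:

```python
import numpy as np

def integrate_deformation(vertices, velocity_fn, n_steps=10, h=0.1):
    """Forward-Euler integration of dx/dt = v(x, t) applied to template
    vertices -- a toy stand-in for a learned neural deformation field."""
    x = vertices.astype(float).copy()
    for step in range(n_steps):
        x = x + h * velocity_fn(x, step * h)
    return x

# Hand-written velocity field: uniform translation along +x.
v = lambda x, t: np.tile([1.0, 0.0, 0.0], (x.shape[0], 1))
template = np.zeros((4, 3))          # four template vertices at the origin
deformed = integrate_deformation(template, v)
print(deformed[0])  # approx [1. 0. 0.]
```

Because every template vertex is transported by the same integrated flow, the output mesh keeps vertex-wise correspondence to the template, which is the property the abstract exploits for parcellation and group analyses.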
Affiliation(s)
- Fabian Bongratz
- Laboratory for Artificial Intelligence in Medical Imaging, Department of Radiology, Technical University of Munich, Munich 81675, Germany; Munich Center for Machine Learning, Munich, Germany.
| | - Anne-Marie Rickmann
- Laboratory for Artificial Intelligence in Medical Imaging, Department of Radiology, Technical University of Munich, Munich 81675, Germany; Department of Child and Adolescent Psychiatry, Ludwig-Maximilians-University, Munich 80336, Germany
| | - Christian Wachinger
- Laboratory for Artificial Intelligence in Medical Imaging, Department of Radiology, Technical University of Munich, Munich 81675, Germany; Department of Child and Adolescent Psychiatry, Ludwig-Maximilians-University, Munich 80336, Germany; Munich Center for Machine Learning, Munich, Germany
| |
|
12
|
Svanera M, Savardi M, Signoroni A, Benini S, Muckli L. Fighting the scanner effect in brain MRI segmentation with a progressive level-of-detail network trained on multi-site data. Med Image Anal 2024; 93:103090. [PMID: 38241763 DOI: 10.1016/j.media.2024.103090] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2023] [Revised: 10/30/2023] [Accepted: 01/12/2024] [Indexed: 01/21/2024]
Abstract
Many clinical and research studies of the human brain require accurate structural MRI segmentation. While traditional atlas-based methods can be applied to volumes from any acquisition site, recent deep learning algorithms ensure high accuracy only when tested on data from the same sites exploited in training (i.e., internal data). Performance degradation experienced on external data (i.e., unseen volumes from unseen sites) is due to the inter-site variability in intensity distributions, and to unique artefacts caused by different MR scanner models and acquisition parameters. To mitigate this site-dependency, often referred to as the scanner effect, we propose LOD-Brain, a 3D convolutional neural network with progressive levels-of-detail (LOD), able to segment brain data from any site. Coarser network levels are responsible for learning a robust anatomical prior helpful in identifying brain structures and their locations, while finer levels refine the model to handle site-specific intensity distributions and anatomical variations. We ensure robustness across sites by training the model on an unprecedentedly rich dataset aggregating data from open repositories: almost 27,000 T1w volumes from around 160 acquisition sites, at 1.5 - 3T, from a population spanning from 8 to 90 years old. Extensive tests demonstrate that LOD-Brain produces state-of-the-art results, with no significant difference in performance between internal and external sites, and robust to challenging anatomical variations. Its portability paves the way for large-scale applications across different healthcare institutions, patient populations, and imaging technology manufacturers. Code, model, and demo are available on the project website.
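The progressive level-of-detail idea can be sketched as a cascade in which each level predicts at a finer resolution, conditioned on the nearest-neighbour-upsampled output of the coarser level. The placeholder functions below stand in for the trained CNN stages of LOD-Brain:

```python
import numpy as np

def coarse_to_fine(volume, predict_fns):
    """Progressive level-of-detail inference: each level predicts at a finer
    resolution, conditioned on the upsampled prediction of the coarser level.
    `predict_fns[i]` maps (volume_at_level_i, upsampled_prev) -> prediction."""
    prev = None
    n = len(predict_fns)
    for level, predict in enumerate(predict_fns):
        factor = 2 ** (n - 1 - level)                 # downsampling factor
        vol_i = volume[::factor, ::factor, ::factor]
        if prev is not None:                          # nearest-neighbour x2 upsample
            prev = prev.repeat(2, 0).repeat(2, 1).repeat(2, 2)
        prev = predict(vol_i, prev)
    return prev

# Two placeholder levels: identity at the coarse level, averaging at the fine one.
levels = [lambda v, p: v, lambda v, p: (v + p) / 2]
out = coarse_to_fine(np.arange(64, dtype=float).reshape(4, 4, 4), levels)
print(out.shape)  # (4, 4, 4)
```

The design mirrors the abstract's division of labour: the coarse stage encodes a robust anatomical prior, while the fine stage adapts to site-specific intensity distributions.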
Affiliation(s)
- Michele Svanera
- Center for Cognitive Neuroimaging at the School of Psychology & Neuroscience, University of Glasgow, UK.
| | - Mattia Savardi
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, Italy
| | - Alberto Signoroni
- Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia, Italy
| | - Sergio Benini
- Department of Information Engineering, University of Brescia, Italy
| | - Lars Muckli
- Center for Cognitive Neuroimaging at the School of Psychology & Neuroscience, University of Glasgow, UK
| |
|
13
|
Yu X, Tang Y, Yang Q, Lee HH, Bao S, Huo Y, Landman BA. Enhancing Hierarchical Transformers for Whole Brain Segmentation with Intracranial Measurements Integration. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2024; 12930:129300K. [PMID: 39220623 PMCID: PMC11364374 DOI: 10.1117/12.3009084] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/04/2024]
Abstract
Whole brain segmentation with magnetic resonance imaging (MRI) enables the non-invasive measurement of brain regions, including total intracranial volume (TICV) and posterior fossa volume (PFV). Enhancing the existing whole brain segmentation methodology to incorporate intracranial measurements offers a heightened level of comprehensiveness in the analysis of brain structures. Despite its potential, the task of generalizing deep learning techniques for intracranial measurements faces data availability constraints due to limited manually annotated atlases encompassing whole brain and TICV/PFV labels. In this paper, we enhance the hierarchical transformer UNesT for whole brain segmentation so that it segments the whole brain into 133 classes and estimates TICV/PFV simultaneously. To address the problem of data scarcity, the model is first pretrained on 4859 T1-weighted (T1w) 3D volumes sourced from 8 different sites. These volumes are processed through a multi-atlas segmentation pipeline for label generation, while TICV/PFV labels are unavailable at this stage. Subsequently, the model is finetuned with 45 T1w 3D volumes from the Open Access Series of Imaging Studies (OASIS), where both the 133 whole brain classes and TICV/PFV labels are available. We evaluate our method with Dice similarity coefficients (DSC). We show that our model is able to conduct precise TICV/PFV estimation while maintaining comparable performance on the 132 brain regions. Code and trained model are available at: https://github.com/MASILab/UNesT/wholebrainSeg.
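The evaluation metric named here, the Dice similarity coefficient, is standard; a minimal implementation for a pair of binary masks (the example arrays are illustrative):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(pred, truth))  # 2*2/(3+3) ≈ 0.667
```

The same formula generalizes to multi-class segmentation by computing one score per label and averaging.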
Affiliation(s)
- Xin Yu
- Computer Science, Vanderbilt University, Nashville, TN, USA
| | - Yucheng Tang
- Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
- Nvidia Corporation
| | - Qi Yang
- Computer Science, Vanderbilt University, Nashville, TN, USA
| | - Ho Hin Lee
- Computer Science, Vanderbilt University, Nashville, TN, USA
| | - Shunxing Bao
- Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
| | - Yuankai Huo
- Computer Science, Vanderbilt University, Nashville, TN, USA
- Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
| | - Bennett A Landman
- Computer Science, Vanderbilt University, Nashville, TN, USA
- Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, USA
| |
|
14
|
Lorzel HM, Allen MD. Development of the next-generation functional neuro-cognitive imaging protocol - Part 1: A 3D sliding-window convolutional neural net for automated brain parcellation. Neuroimage 2024; 286:120505. [PMID: 38224825 DOI: 10.1016/j.neuroimage.2023.120505] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2023] [Revised: 12/08/2023] [Accepted: 12/29/2023] [Indexed: 01/17/2024] Open
Abstract
Functional MRI has emerged as a powerful tool to assess the severity of post-concussion syndrome (PCS) and to provide guidance for neuro-cognitive therapists during treatment. The next-generation functional neuro-cognitive imaging protocol (fNCI2) has been developed to provide this assessment. This paper covers the first step in the analysis process, the development of a rapidly re-trainable machine-learning brain parcellation tool. The use of a sufficiently deep U-Net architecture encompassing a small (39 × 39 × 39 voxel input, 27 × 27 × 27 voxel output) sliding window to sample the entirety of the 3D image allows for the prediction of the entire image using only a single trained network. A large number of training, validation, and testing windows are thus generated from the 101 manually labeled Mindboggle images, and full-image prediction is provided via a voxel-vote method using overlapping windows. Our method produces parcellated images that are highly consistent with standard atlas-based methods in under 3 min on a modern GPU, and the single network architecture allows for rapid retraining (<36 hr) as needed.
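The voxel-vote fusion of overlapping window predictions can be sketched by accumulating one-hot label votes per voxel and taking the argmax. The example below uses tiny 1 × 1 × 2 windows rather than the paper's 27 × 27 × 27 outputs:

```python
import numpy as np

def voxel_vote(shape, windows, n_labels):
    """Fuse overlapping window predictions by per-voxel majority vote.
    `windows` is a list of ((z, y, x) origin, integer label patch) pairs."""
    votes = np.zeros(shape + (n_labels,), dtype=int)
    for (z, y, x), patch in windows:
        dz, dy, dx = patch.shape
        # One-hot encode the patch labels and accumulate them as votes.
        votes[z:z+dz, y:y+dy, x:x+dx] += np.eye(n_labels, dtype=int)[patch]
    return votes.argmax(axis=-1)

# Two overlapping 1x1x2 windows on a 1x1x3 volume, labels {0, 1}:
windows = [((0, 0, 0), np.array([[[1, 1]]])),
           ((0, 0, 1), np.array([[[1, 0]]]))]
print(voxel_vote((1, 1, 3), windows, n_labels=2))  # [[[1 1 0]]]
```

Overlap makes each voxel's label a consensus of several window predictions, which smooths out boundary errors from any single window.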
Affiliation(s)
- Heath M Lorzel
- Cognitive FX, 280 West River Drive, Suite 110, Provo, UT 84604, United States.
| | - Mark D Allen
- Cognitive FX, 280 West River Drive, Suite 110, Provo, UT 84604, United States
| |
|
15
|
Deininger L, Jung-Klawitter S, Mikut R, Richter P, Fischer M, Karimian-Jazi K, Breckwoldt MO, Bendszus M, Heiland S, Kleesiek J, Opladen T, Kuseyri Hübschmann O, Hübschmann D, Schwarz D. An AI-based segmentation and analysis pipeline for high-field MR monitoring of cerebral organoids. Sci Rep 2023; 13:21231. [PMID: 38040865 PMCID: PMC10692072 DOI: 10.1038/s41598-023-48343-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2023] [Accepted: 11/25/2023] [Indexed: 12/03/2023] Open
Abstract
Cerebral organoids recapitulate the structure and function of the developing human brain in vitro, offering a large potential for personalized therapeutic strategies. The enormous growth of this research area over the past decade with its capability for clinical translation makes a non-invasive, automated analysis pipeline of organoids highly desirable. This work presents a novel non-invasive approach to monitor and analyze cerebral organoids over time using high-field magnetic resonance imaging and state-of-the-art tools for automated image analysis. Three specific objectives are addressed, (I) organoid segmentation to investigate organoid development over time, (II) global cysticity classification and (III) local cyst segmentation for organoid quality assessment. We show that organoid growth can be monitored reliably over time and cystic and non-cystic organoids can be separated with high accuracy, with on par or better performance compared to state-of-the-art tools applied to brightfield imaging. Local cyst segmentation is feasible but could be further improved in the future. Overall, these results highlight the potential of the pipeline for clinical application to larger-scale comparative organoid analysis.
Affiliation(s)
- Luca Deininger
- Group for Automated Image and Data Analysis, Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany.
- Division of Pediatric Neurology and Metabolic Medicine, Department I, Center for Pediatric and Adolescent Medicine, Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany.
| | - Sabine Jung-Klawitter
- Division of Pediatric Neurology and Metabolic Medicine, Department I, Center for Pediatric and Adolescent Medicine, Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
| | - Ralf Mikut
- Group for Automated Image and Data Analysis, Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
| | - Petra Richter
- Division of Pediatric Neurology and Metabolic Medicine, Department I, Center for Pediatric and Adolescent Medicine, Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
| | - Manuel Fischer
- Department of Neuroradiology, Heidelberg University Hospital, INF 400, Heidelberg, Germany
| | - Kianush Karimian-Jazi
- Department of Neuroradiology, Heidelberg University Hospital, INF 400, Heidelberg, Germany
| | - Michael O Breckwoldt
- Department of Neuroradiology, Heidelberg University Hospital, INF 400, Heidelberg, Germany
| | - Martin Bendszus
- Department of Neuroradiology, Heidelberg University Hospital, INF 400, Heidelberg, Germany
| | - Sabine Heiland
- Department of Neuroradiology, Heidelberg University Hospital, INF 400, Heidelberg, Germany
| | - Jens Kleesiek
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Essen, Germany
- German Cancer Consortium (DKTK), Heidelberg, Germany
- Cancer Research Center Cologne Essen (CCCE), Essen, Germany
| | - Thomas Opladen
- Division of Pediatric Neurology and Metabolic Medicine, Department I, Center for Pediatric and Adolescent Medicine, Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
| | - Oya Kuseyri Hübschmann
- Division of Pediatric Neurology and Metabolic Medicine, Department I, Center for Pediatric and Adolescent Medicine, Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
| | - Daniel Hübschmann
- German Cancer Consortium (DKTK), Heidelberg, Germany
- Computational Oncology Group, Molecular Precision Oncology Program, National Center for Tumor Diseases (NCT) Heidelberg, DKFZ, Heidelberg, Germany
- Pattern Recognition and Digital Medicine, Heidelberg Institute for Stem Cell Technology and Experimental Medicine (HI-STEM), Heidelberg, Germany
| | - Daniel Schwarz
- Department of Neuroradiology, Heidelberg University Hospital, INF 400, Heidelberg, Germany
| |
|
16
|
Meesters S, Landers M, Rutten GJ, Florack L. Subject-Specific Automatic Reconstruction of White Matter Tracts. J Digit Imaging 2023; 36:2648-2661. [PMID: 37537513 PMCID: PMC10584769 DOI: 10.1007/s10278-023-00883-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2023] [Revised: 07/05/2023] [Accepted: 07/05/2023] [Indexed: 08/05/2023] Open
Abstract
MRI-based tractography is still underexploited and unsuited for routine use in brain tumor surgery due to heterogeneity of methods and functional-anatomical definitions and above all, the lack of a turn-key system. Standardization of methods is therefore desirable, whereby an objective and reliable approach is a prerequisite before the results of any automated procedure can subsequently be validated and used in neurosurgical practice. In this work, we evaluated these preliminary but necessary steps in healthy volunteers. Specifically, we evaluated the robustness and reliability (i.e., test-retest reproducibility) of tractography results of six clinically relevant white matter tracts by using healthy volunteer data (N = 136) from the Human Connectome Project consortium. A deep learning convolutional network-based approach was used for individualized segmentation of regions of interest, combined with an evidence-based tractography protocol and appropriate post-tractography filtering. Robustness was evaluated by estimating the consistency of tractography probability maps, i.e., averaged tractograms in normalized space, through the use of a hold-out cross-validation approach. No major outliers were found, indicating a high robustness of the tractography results. Reliability was evaluated at the individual level. First by examining the overlap of tractograms that resulted from repeatedly processed identical MRI scans (N = 10, 10 iterations) to establish an upper limit of reliability of the pipeline. Second, by examining the overlap for subjects that were scanned twice at different time points (N = 40). Both analyses indicated high reliability, with the second analysis showing a reliability near the upper limit. The robust and reliable subject-specific generation of white matter tracts in healthy subjects holds promise for future validation of our pipeline in a clinical population and subsequent implementation in brain tumor surgery.
Affiliation(s)
- Stephan Meesters
- Department of Mathematics & Computer Science, Eindhoven University of Technology, Eindhoven, The Netherlands
- Department of Neurosurgery, Elisabeth-Tweesteden Hospital, Tilburg, The Netherlands
| | - Maud Landers
- Department of Neurosurgery, Elisabeth-Tweesteden Hospital, Tilburg, The Netherlands
| | - Geert-Jan Rutten
- Department of Neurosurgery, Elisabeth-Tweesteden Hospital, Tilburg, The Netherlands.
| | - Luc Florack
- Department of Mathematics & Computer Science, Eindhoven University of Technology, Eindhoven, The Netherlands
| |
|
17
|
Inoue K, Hara Y, Nagawa K, Koyama M, Shimizu H, Matsuura K, Takahashi M, Osawa I, Inoue T, Okada H, Ishikawa M, Kobayashi N, Kozawa E. The utility of automatic segmentation of kidney MRI in chronic kidney disease using a 3D convolutional neural network. Sci Rep 2023; 13:17361. [PMID: 37833438 PMCID: PMC10575938 DOI: 10.1038/s41598-023-44539-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2023] [Accepted: 10/10/2023] [Indexed: 10/15/2023] Open
Abstract
We developed a 3D convolutional neural network (CNN)-based automatic kidney segmentation method for patients with chronic kidney disease (CKD) using MRI Dixon-based T1-weighted in-phase (IP)/opposed-phase (OP)/water-only (WO) images. The dataset comprised 100 participants with renal dysfunction (RD; eGFR < 45 mL/min/1.73 m2) and 70 without (non-RD; eGFR ≥ 45 mL/min/1.73 m2). The model was applied to the right, left, and both kidneys; it was first evaluated on the non-RD group data and subsequently on the combined data of the RD and non-RD groups. For bilateral kidney segmentation of the non-RD group, the best performance was obtained when using the IP images, with a Dice score of 0.902 ± 0.034, an average surface distance of 1.46 ± 0.75 mm, and a difference of -27 ± 21 mL between ground-truth and automatically computed volumes. Slightly worse results were obtained for the combined data of the RD and non-RD groups and for unilateral kidney segmentation, particularly when segmenting the right kidney from the OP images. Our 3D CNN-assisted automatic segmentation tools can be utilized in future studies on total kidney volume measurements and various image analyses of a large number of patients with CKD.
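The reported volume differences presuppose converting a binary voxel mask into millilitres using the voxel spacing from the image header; a minimal sketch (the spacing values are illustrative):

```python
import numpy as np

def mask_volume_ml(mask, spacing_mm):
    """Volume of a binary segmentation mask in millilitres, given the voxel
    spacing in mm (e.g. read from the NIfTI/DICOM header)."""
    voxel_ml = np.prod(spacing_mm) / 1000.0   # mm^3 per voxel -> mL
    return mask.astype(bool).sum() * voxel_ml

mask = np.ones((10, 10, 10), dtype=np.uint8)  # 1000 foreground voxels
print(mask_volume_ml(mask, (1.0, 1.0, 2.0)))  # 2.0 (mL)
```

Comparing this quantity for the manual and automated masks yields the ground-truth-versus-automated volume difference quoted in the abstract.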
Affiliation(s)
- Kaiji Inoue
- Department of Radiology, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan
| | - Yuki Hara
- Department of Radiology, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan
| | - Keita Nagawa
- Department of Radiology, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan.
| | - Masahiro Koyama
- Department of Radiology, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan
| | - Hirokazu Shimizu
- Department of Radiology, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan
| | - Koichiro Matsuura
- Department of Radiology, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan
| | - Masao Takahashi
- Department of Radiology, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan
| | - Iichiro Osawa
- Department of Radiology, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan
| | - Tsutomu Inoue
- Department of Nephrology, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan
| | - Hirokazu Okada
- Department of Nephrology, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan
| | - Masahiro Ishikawa
- Department of Electronic Engineering and Computer Science, Faculty of Engineering, Kindai University Hiroshima Campus, 1 Takaya Umenobe, Higashi-Hiroshima City, Hiroshima, Japan
| | - Naoki Kobayashi
- School of Biomedical Engineering, Faculty of Health and Medical Care, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan
| | - Eito Kozawa
- Department of Radiology, Saitama Medical University, 38 Morohongou, Moroyama-machi, Iruma-gun, Saitama, Japan
| |
|
18
|
Tregidgo HFJ, Soskic S, Althonayan J, Maffei C, Van Leemput K, Golland P, Insausti R, Lerma-Usabiaga G, Caballero-Gaudes C, Paz-Alonso PM, Yendiki A, Alexander DC, Bocchetta M, Rohrer JD, Iglesias JE. Accurate Bayesian segmentation of thalamic nuclei using diffusion MRI and an improved histological atlas. Neuroimage 2023; 274:120129. [PMID: 37088323 PMCID: PMC10636587 DOI: 10.1016/j.neuroimage.2023.120129] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2022] [Revised: 03/30/2023] [Accepted: 04/20/2023] [Indexed: 04/25/2023] Open
Abstract
The human thalamus is a highly connected brain structure, which is key for the control of numerous functions and is involved in several neurological disorders. Recently, neuroimaging studies have increasingly focused on the volume and connectivity of the specific nuclei comprising this structure, rather than looking at the thalamus as a whole. However, accurate identification of cytoarchitectonically defined histological nuclei on standard in vivo structural MRI is hampered by the lack of image contrast that can be used to distinguish nuclei from each other and from surrounding white matter tracts. While diffusion MRI may offer such contrast, it has lower resolution and lacks some boundaries visible in structural imaging. In this work, we present a Bayesian segmentation algorithm for the thalamus. This algorithm combines prior information from a probabilistic atlas with likelihood models for both structural and diffusion MRI, allowing segmentation of 25 thalamic labels per hemisphere informed by both modalities. We present an improved probabilistic atlas, incorporating thalamic nuclei identified from histology and 45 white matter tracts surrounding the thalamus identified in ultra-high gradient strength diffusion imaging. We present a family of likelihood models for diffusion tensor imaging, ensuring compatibility with the vast majority of neuroimaging datasets that include diffusion MRI data. The use of these diffusion likelihood models greatly improves identification of nuclear groups versus segmentation based solely on structural MRI. Dice comparison of 5 manually identifiable groups of nuclei to ground truth segmentations shows improvements of up to 10 percentage points. Additionally, our chosen model shows a high degree of reliability, with median test-retest Dice scores above 0.85 for four out of five nuclei groups, whilst also offering improved detection of differential thalamic involvement in Alzheimer's disease (AUROC 81.98%).
The probabilistic atlas and segmentation tool will be made publicly available as part of the neuroimaging package FreeSurfer (https://freesurfer.net/fswiki/ThalamicNucleiDTI).
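The core Bayesian step, combining the probabilistic atlas prior with per-modality likelihood models, amounts per voxel to posterior ∝ prior × likelihood. A toy sketch with made-up numbers for two labels, one structural and one diffusion likelihood (the real model is far richer, with 25 labels per hemisphere):

```python
import numpy as np

def map_labels(prior, *likelihoods):
    """Per-voxel MAP labelling: posterior ∝ atlas prior × product of the
    per-modality likelihoods. All arrays have shape (n_voxels, n_labels)."""
    post = prior.astype(float).copy()
    for lik in likelihoods:
        post *= lik
    post /= post.sum(axis=1, keepdims=True)   # normalize to a distribution
    return post.argmax(axis=1), post

prior   = np.array([[0.5, 0.5], [0.9, 0.1]])  # atlas prior
lik_t1  = np.array([[0.2, 0.8], [0.5, 0.5]])  # structural likelihood
lik_dti = np.array([[0.3, 0.7], [0.6, 0.4]])  # diffusion likelihood
labels, post = map_labels(prior, lik_t1, lik_dti)
print(labels)  # [1 0]
```

Multiplying in the diffusion likelihood is exactly what lets an ambiguous structural voxel (first row: flat prior) be resolved by diffusion contrast, which is the mechanism behind the improved nuclear-group identification reported above.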
Affiliation(s)
- Henry F J Tregidgo
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK.
| | - Sonja Soskic
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK
| | - Juri Althonayan
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK
| | - Chiara Maffei
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, USA
| | - Koen Van Leemput
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, USA; Department of Health Technology, Technical University of Denmark, Denmark
| | - Polina Golland
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA
| | - Ricardo Insausti
- Human Neuroanatomy Laboratory, University of Castilla-La Mancha, Spain
| | - Garikoitz Lerma-Usabiaga
- BCBL. Basque Center on Cognition, Brain and Language, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain
| | | | - Pedro M Paz-Alonso
- BCBL. Basque Center on Cognition, Brain and Language, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain
| | - Anastasia Yendiki
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, USA
| | - Daniel C Alexander
- Centre for Medical Image Computing, Department of Computer Science, University College London, UK
| | - Martina Bocchetta
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, UK; Centre for Cognitive and Clinical Neuroscience, Department of Life Sciences, College of Health, Medicine and Life Sciences, Brunel University London, UK
| | - Jonathan D Rohrer
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, UK
| | - Juan Eugenio Iglesias
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK; Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA
| |
|
19
|
Zhu J, Chen Z, Zhao J, Yu Y, Li X, Shi K, Zhang F, Yu F, Shi K, Sun Z, Lin N, Zheng Y. Artificial intelligence in the diagnosis of dental diseases on panoramic radiographs: a preliminary study. BMC Oral Health 2023; 23:358. [PMID: 37270488 DOI: 10.1186/s12903-023-03027-6] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2023] [Accepted: 05/09/2023] [Indexed: 06/05/2023] Open
Abstract
BACKGROUND Artificial intelligence (AI) has been introduced to interpret panoramic radiographs (PRs). The aim of this study was to develop an AI framework to diagnose multiple dental diseases on PRs, and to initially evaluate its performance. METHODS The AI framework was developed based on 2 deep convolutional neural networks (CNNs), BDU-Net and nnU-Net. A total of 1996 PRs were used for training. Diagnostic evaluation was performed on a separate evaluation dataset including 282 PRs. Sensitivity, specificity, Youden's index, the area under the curve (AUC), and diagnostic time were calculated. Dentists with 3 different levels of seniority (H: high, M: medium, L: low) diagnosed the same evaluation dataset independently. The Mann-Whitney U test and DeLong test were conducted for statistical analysis (α = 0.05). RESULTS Sensitivity, specificity, and Youden's index of the framework for diagnosing 5 diseases were 0.964, 0.996, 0.960 (impacted teeth), 0.953, 0.998, 0.951 (full crowns), 0.871, 0.999, 0.870 (residual roots), 0.885, 0.994, 0.879 (missing teeth), and 0.554, 0.990, 0.544 (caries), respectively. AUC of the framework for the diseases was 0.980 (95%CI: 0.976-0.983, impacted teeth), 0.975 (95%CI: 0.972-0.978, full crowns), 0.935 (95%CI: 0.929-0.940, residual roots), 0.939 (95%CI: 0.934-0.944, missing teeth), and 0.772 (95%CI: 0.764-0.781, caries), respectively. AUC of the AI framework was comparable to that of all dentists in diagnosing residual roots (p > 0.05), and its AUC values were similar to (p > 0.05) or better than (p < 0.05) those of M-level dentists for diagnosing the 5 diseases. However, the AUC of the framework was statistically lower than that of some H-level dentists for diagnosing impacted teeth, missing teeth, and caries (p < 0.05). The mean diagnostic time of the framework was significantly shorter than that of all dentists (p < 0.001).
CONCLUSIONS The AI framework based on BDU-Net and nnU-Net demonstrated high specificity in diagnosing impacted teeth, full crowns, missing teeth, residual roots, and caries with high efficiency. The clinical feasibility of the AI framework was preliminarily verified, since its performance was similar to or even better than that of dentists with 3-10 years of experience. However, the AI framework for caries diagnosis should be improved.
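The sensitivity, specificity, and Youden's index values reported above derive directly from confusion-matrix counts (J = Se + Sp − 1). A minimal illustrative sketch, with hypothetical counts chosen only to mirror the impacted-teeth row:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and Youden's index (J = Se + Sp - 1)
    from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity, sensitivity + specificity - 1.0

# Hypothetical counts, not taken from the cited study
se, sp, j = diagnostic_metrics(tp=964, fn=36, tn=996, fp=4)
print(f"Se={se:.3f}  Sp={sp:.3f}  J={j:.3f}")  # Se=0.964  Sp=0.996  J=0.960
```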
Affiliation(s)
- Junhua Zhu
- School/Hospital of Stomatology, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, China
| | - Zhi Chen
- School/Hospital of Stomatology, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, China
| | - Jing Zhao
- School/Hospital of Stomatology, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, China
| | - Yueyuan Yu
- School/Hospital of Stomatology, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, China
| | - Xiaojuan Li
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang, China
| | - Kangjian Shi
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang, China
| | - Fan Zhang
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang, China
| | - Feifei Yu
- School/Hospital of Stomatology, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, China
| | - Keying Shi
- School/Hospital of Stomatology, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, China
| | - Zhe Sun
- School/Hospital of Stomatology, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, China
| | - Nengjie Lin
- School/Hospital of Stomatology, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, China
| | - Yuanna Zheng
- School/Hospital of Stomatology, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, China.
| |
|
20
|
Weng JS, Huang TY. Deriving a robust deep-learning model for subcortical brain segmentation by using a large-scale database: Preprocessing, reproducibility, and accuracy of volume estimation. NMR IN BIOMEDICINE 2023; 36:e4880. [PMID: 36419406 DOI: 10.1002/nbm.4880] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Revised: 11/11/2022] [Accepted: 11/22/2022] [Indexed: 06/16/2023]
Abstract
Increasing the accuracy and reproducibility of subcortical brain segmentation is advantageous in various related clinical applications. In this study, we derived a segmentation method based on a convolutional neural network (i.e., U-Net) and a large-scale database consisting of 7039 brain T1-weighted MRI data samples. We evaluated the method by using experiments focused on three distinct topics, namely, the necessity of preprocessing steps, cross-institutional and longitudinal reproducibility, and volumetric accuracy. The optimized model, MX_RW (where "MX" denotes a mix of RW and nonuniform intensity normalization data and "RW" denotes raw data with basic preprocessing), did not require time-consuming preprocessing steps, such as nonuniform intensity normalization or image registration, for brain MRI before segmentation. Cross-institutional testing revealed that MX_RW (Dice similarity coefficient: 0.809, coefficient of variation: 4.6%, and Pearson's correlation coefficient: 0.979) exhibited a performance comparable with that of FreeSurfer (Dice similarity coefficient: 0.798, coefficient of variation: 5.6%, and Pearson's correlation coefficient: 0.973). The computation time per dataset of MX_RW was generally less than 5 s (even when tested without graphics processing units), which was notably faster than FreeSurfer. Thus, for time-restricted applications, MX_RW represents a competitive alternative to FreeSurfer.
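The reproducibility metrics used here (coefficient of variation and Pearson's correlation) are straightforward to compute. An illustrative sketch with made-up scan-rescan volumes, not data from the cited study:

```python
import numpy as np

def coefficient_of_variation(volumes):
    """CoV (%) of repeated volume estimates: 100 * sample std / mean."""
    v = np.asarray(volumes, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()

def pearson_r(x, y):
    """Pearson's correlation coefficient between two sets of volume estimates."""
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical subcortical volumes (mL) for four subjects, scan vs. rescan
scan = np.array([3.50, 3.42, 3.61, 3.55])
rescan = np.array([3.48, 3.45, 3.58, 3.57])
print(f"CoV={coefficient_of_variation(scan):.1f}%  r={pearson_r(scan, rescan):.3f}")
```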
Affiliation(s)
- Jenn-Shiuan Weng
- Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
| | - Teng-Yi Huang
- Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
| |
|
21
|
Li Z, Zhang C, Zhang Y, Wang X, Ma X, Zhang H, Wu S. CAN: Context-assisted full Attention Network for brain tissue segmentation. Med Image Anal 2023; 85:102710. [PMID: 36586394 DOI: 10.1016/j.media.2022.102710] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Revised: 11/08/2022] [Accepted: 11/29/2022] [Indexed: 12/24/2022]
Abstract
Brain tissue segmentation is of great value in diagnosing brain disorders. Three-dimensional (3D) and two-dimensional (2D) segmentation methods for brain Magnetic Resonance Imaging (MRI) suffer from high time complexity and low segmentation accuracy, respectively. To address these two issues, we propose a Context-assisted full Attention Network (CAN) for brain MRI segmentation by integrating 2D and 3D MRI data. Unlike the fully symmetric U-Net structure, the CAN takes the current 2D slice, its 3D contextual skull slices, and 3D contextual brain slices as input, which are further encoded by the DenseNet and decoded by our constructed full attention network. We have validated the effectiveness of the CAN on our collected dataset PWML and two public datasets, dHCP2017 and MALC2012. Our code is available at https://github.com/nwuAI/CAN.
Affiliation(s)
- Zhan Li
- School of Information Science and Technology, Northwest University, 710127, Xi'an, China.
| | - Chunxia Zhang
- School of Information Science and Technology, Northwest University, 710127, Xi'an, China
| | - Yongqin Zhang
- School of Information Science and Technology, Northwest University, 710127, Xi'an, China.
| | - Xiaofeng Wang
- School of Information Science and Technology, Northwest University, 710127, Xi'an, China.
| | - Xiaolong Ma
- School of Information Science and Technology, Northwest University, 710127, Xi'an, China
| | - Hai Zhang
- School of Mathematics, Northwest University, 710127, Xi'an, China
| | - Songdi Wu
- The First Affiliated Hospital of Northwest University, 710127, Xi'an, China
| |
|
22
|
Zhang X, Liu Y, Guo S, Song Z. EG-Unet: Edge-Guided cascaded networks for automated frontal brain segmentation in MR images. Comput Biol Med 2023; 158:106891. [PMID: 37044048 DOI: 10.1016/j.compbiomed.2023.106891] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2022] [Revised: 03/07/2023] [Accepted: 04/01/2023] [Indexed: 04/05/2023]
Abstract
Accurate segmentation of frontal lobe areas on magnetic resonance imaging (MRI) can assist in diagnosing and managing idiopathic normal-pressure hydrocephalus. However, frontal lobe segmentation is challenging due to the complexity of the degree and shape of damage and the ambiguity of the boundaries of frontal lobe sites. Therefore, to extract the rich edge information and feature representation of the frontal lobe, this paper designs an edge guidance (EG) module to enhance the representation of edge features. Accordingly, an edge-guided cascade network framework (EG-Net) is proposed to segment frontal lobe parts automatically. Two-dimensional MRI slice images are fed into the edge generation and segmentation networks. First, the edge generation network extracts the edge information from the input image. Then, the edge information is sent to the EG module to generate an edge attention map for feature representation enhancement. Meanwhile, multi-scale attentional convolution (MSA) is utilized in the feature coding stage of the segmentation network to obtain feature responses from different perceptual fields and enrich the spatial context information. Besides, the feature fusion module is employed to selectively aggregate the multi-scale features in the coding stage with the edge features output by the EG module. Finally, the two components are fused, and a decoder recovers the spatial information to generate the final prediction results. An extensive quantitative comparison is performed on a publicly available brain MRI dataset (MICCAI 2012) to evaluate the effectiveness of the proposed algorithm. The experimental results indicate that the proposed method achieves an average Dice score of 95.77%, outperforming several advanced methods and exceeding the classical U-Net by 4.96%. The results demonstrate the potential of the proposed EG-Net in improving the accuracy of frontal edge pixel classification through edge guidance.
Affiliation(s)
- Xiufeng Zhang
- Mechanical and Electrical Engineering, Dalian Minzu University, Liaohe West Road 18, Dalian, China
| | - Yansong Liu
- Mechanical and Electrical Engineering, Dalian Minzu University, Liaohe West Road 18, Dalian, China.
| | - Shengjin Guo
- Mechanical and Electrical Engineering, Dalian Minzu University, Liaohe West Road 18, Dalian, China
| | - Zhao Song
- Shenzhen Hospital, Southern Medical University, Xinhu Road 1333, Shenzhen, China
| |
|
23
|
Wang S, He J, He X, Liu Y, Lin X, Xu C, Zhu L, Kang J, Wang Y, Li Y, Guo S, Zhang Y, Luo Z, Liu Z. AES-CSFS: an automatic evaluation system for corneal sodium fluorescein staining based on deep learning. Ther Adv Chronic Dis 2023; 14:20406223221148266. [PMID: 36798527 PMCID: PMC9926379 DOI: 10.1177/20406223221148266] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2022] [Accepted: 12/13/2022] [Indexed: 02/15/2023] Open
Abstract
Background Corneal fluorescein sodium staining is a valuable diagnostic method for various ocular surface diseases. However, the examination results are highly dependent on the subjective experience of ophthalmologists. Objectives To develop an artificial intelligence system based on deep learning to provide an accurate quantitative assessment of sodium fluorescein staining score and the size of cornea epithelial patchy defect. Design A prospective study. Methods We proposed an artificial intelligence system for automatically evaluating corneal staining scores and accurately measuring patchy corneal epithelial defects based on corneal fluorescein sodium staining images. The design incorporates two segmentation models and a classification model to forecast and assess the stained images. Meanwhile, we compare the evaluation findings from the system with those of ophthalmologists of varying expertise. Results For the segmentation task of the cornea boundary and the corneal epithelial patchy defect area, our proposed method achieves a Dice similarity coefficient (DSC) of 0.98/0.97 and a Hausdorff distance (HD) of 3.60/8.39, respectively, when compared with the manually labeled gold standard. This method significantly outperforms the four leading algorithms (Unet, Unet++, Swin-Unet, and TransUnet). For the classification task, our algorithm achieves the best performance in accuracy, recall, and F1-score, which are 91.2%, 78.6%, and 79.2%, respectively. The performance of our developed system exceeds seven different approaches (Inception, ShuffleNet, Xception, EfficientNet_B7, DenseNet, ResNet, and VIT) in classification tasks. In addition, three ophthalmologists were selected to rate corneal staining images. The results showed that the performance of our artificial intelligence system significantly outperformed the junior doctors.
Conclusion The system offers a promising automated assessment method for corneal fluorescein staining, decreasing incorrect evaluations caused by ophthalmologists' subjective variance and limited knowledge.
Affiliation(s)
| | | | | | | | - Xiang Lin
- Department of Ophthalmology, Xiang’an Hospital of Xiamen University, Xiamen University, Xiamen, China
| | - Changsheng Xu
- Institute of Artificial Intelligence, Xiamen University, Xiamen, China
- Eye Institute of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Xiamen University, Xiamen, China
| | - Linfangzi Zhu
- Eye Institute of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Xiamen University, Xiamen, China
| | - Jie Kang
- Department of Ophthalmology, Xiang’an Hospital of Xiamen University, Xiamen University, Xiamen, China
| | - Yuqian Wang
- Eye Institute of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Xiamen University, Xiamen, China
| | - Yong Li
- Eye Institute of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Xiamen University, Xiamen, China
| | - Shujia Guo
- Eye Institute of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Xiamen University, Xiamen, China
| | - Yunuo Zhang
- Eye Institute of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Fujian Provincial Key Laboratory of Ophthalmology and Visual Science, Xiamen University, Xiamen, China
| | - Zhiming Luo
- Institute of Artificial Intelligence, Xiamen University, Xiamen, China
- School of Informatics, Xiamen University, 422 Siming South Road, Xiamen 361005, Fujian, China
| | | |
|
24
|
Zeng F, Fang J, Muhashi A, Liu H. Direct reconstruction for simultaneous dual-tracer PET imaging based on multi-task learning. EJNMMI Res 2023; 13:7. [PMID: 36719532 PMCID: PMC9889598 DOI: 10.1186/s13550-023-00955-w] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Accepted: 01/17/2023] [Indexed: 02/01/2023] Open
Abstract
BACKGROUND Simultaneous dual-tracer positron emission tomography (PET) imaging can observe two molecular targets in a single scan, which is conducive to disease diagnosis and tracking. Since the signals emitted by different tracers are the same, it is crucial to separate each single tracer from the mixed signals. The current study proposed a novel deep learning-based method to reconstruct single-tracer activity distributions from the dual-tracer sinogram. METHODS We proposed the Multi-task CNN, a three-dimensional convolutional neural network (CNN) based on a framework of multi-task learning. One common encoder extracted features from the dual-tracer dynamic sinogram, followed by two distinct and parallel decoders which reconstructed the single-tracer dynamic images of the two tracers separately. The model was evaluated by mean squared error (MSE), multiscale structural similarity (MS-SSIM) index, and peak signal-to-noise ratio (PSNR) on simulated data and real animal data, and compared to the filtered back-projection method based on deep learning (FBP-CNN). RESULTS In the simulation experiments, the Multi-task CNN reconstructed single-tracer images with lower MSE, higher MS-SSIM and PSNR than FBP-CNN, and was more robust to changes in individual differences, tracer combination, and scanning protocol. In the experiment on rats with an orthotopic xenograft glioma model, the Multi-task CNN reconstructions also showed higher quality than the FBP-CNN reconstructions. CONCLUSIONS The proposed Multi-task CNN could effectively reconstruct the dynamic activity images of two single tracers from the dual-tracer dynamic sinogram, showing its potential for direct reconstruction of real simultaneous dual-tracer PET imaging data in the future.
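Two of the evaluation metrics above, MSE and PSNR, can be sketched in a few lines (illustrative only; MS-SSIM is more involved and omitted here):

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two images."""
    return float(np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2))

def psnr(x, y, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    e = mse(x, y)
    return float("inf") if e == 0 else 10.0 * np.log10(data_range ** 2 / e)

# Toy example: a uniform 0.1 error gives MSE = 0.01 and PSNR = 20 dB
ref = np.zeros((4, 4))
rec = ref + 0.1
print(mse(ref, rec), psnr(ref, rec))  # approximately 0.01 and 20.0
```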
Affiliation(s)
- Fuzhen Zeng
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
| | - Jingwan Fang
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
| | - Amanjule Muhashi
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
| | - Huafeng Liu
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China.
| |
|
25
|
Doborjeh M, Liu X, Doborjeh Z, Shen Y, Searchfield G, Sanders P, Wang GY, Sumich A, Yan WQ. Prediction of Tinnitus Treatment Outcomes Based on EEG Sensors and TFI Score Using Deep Learning. SENSORS (BASEL, SWITZERLAND) 2023; 23:902. [PMID: 36679693 PMCID: PMC9861477 DOI: 10.3390/s23020902] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/14/2022] [Revised: 12/21/2022] [Accepted: 01/03/2023] [Indexed: 06/17/2023]
Abstract
Tinnitus is a hearing disorder that is characterized by the perception of sounds in the absence of an external source. Currently, there is no pharmaceutical cure for tinnitus; however, multiple therapies and interventions have been developed that improve or control associated distress and anxiety. We propose a new Artificial Intelligence (AI) algorithm as a digital prognostic health system that models electroencephalographic (EEG) data in order to predict patients' responses to tinnitus therapies. The EEG data were collected from patients prior to treatment and 3 months following a sound-based therapy. Feature selection techniques were utilised to identify predictive EEG variables with the best accuracy. The patients' EEG features from both the frequency and functional connectivity domains were entered as inputs that carry knowledge extracted from EEG into AI algorithms for training and predicting therapy outcomes. The AI models differentiated the patients' outcomes into either therapy responder or non-responder, as defined by their Tinnitus Functional Index (TFI) scores, with accuracies ranging from 98% to 100%. Our findings demonstrate the potential use of AI, including deep learning, for predicting therapy outcomes in tinnitus. The research suggests an optimal configuration of the EEG sensors that are involved in measuring brain functional changes in response to tinnitus treatments. It identified which EEG electrodes are the most informative sensors and how the EEG frequency and functional connectivity can better classify patients into the responder and non-responder groups. This has potential for real-time monitoring of patient therapy outcomes at home.
Affiliation(s)
- Maryam Doborjeh
- Knowledge Engineering and Discovery Research Institute (KEDRI), School of Engineering Computer and Mathematical Sciences, Auckland University of Technology, Auckland 1010, New Zealand
| | - Xiaoxu Liu
- Knowledge Engineering and Discovery Research Institute (KEDRI), School of Engineering Computer and Mathematical Sciences, Auckland University of Technology, Auckland 1010, New Zealand
- Centre for Robotics & Vision (CeRV), Auckland University of Technology, Auckland 1010, New Zealand
| | - Zohreh Doborjeh
- Eisdell Moore Centre, Audiology, School of Population Health, The University of Auckland, Auckland 1010, New Zealand
- School of Psychology, The University of Waikato, Hamilton 3216, New Zealand
| | - Yuanyuan Shen
- Knowledge Engineering and Discovery Research Institute (KEDRI), School of Engineering Computer and Mathematical Sciences, Auckland University of Technology, Auckland 1010, New Zealand
| | - Grant Searchfield
- Eisdell Moore Centre, Audiology, School of Population Health, The University of Auckland, Auckland 1010, New Zealand
| | - Philip Sanders
- Eisdell Moore Centre, Audiology, School of Population Health, The University of Auckland, Auckland 1010, New Zealand
| | - Grace Y. Wang
- School of Psychology and Wellbeing, University of Southern Queensland, Darling Heights, QLD 4350, Australia
- Centre for Health Research, University of Southern Queensland, Darling Heights, QLD 4350, Australia
| | - Alexander Sumich
- NTU Psychology, Nottingham Trent University, Nottingham NG1 4FQ, UK
| | - Wei Qi Yan
- Centre for Robotics & Vision (CeRV), Auckland University of Technology, Auckland 1010, New Zealand
| |
|
26
|
Dinsdale NK, Bluemke E, Sundaresan V, Jenkinson M, Smith SM, Namburete AIL. Challenges for machine learning in clinical translation of big data imaging studies. Neuron 2022; 110:3866-3881. [PMID: 36220099 DOI: 10.1016/j.neuron.2022.09.012] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2021] [Revised: 08/27/2021] [Accepted: 09/08/2022] [Indexed: 12/15/2022]
Abstract
Combining deep learning image analysis methods and large-scale imaging datasets offers many opportunities for neuroimaging and epidemiology. However, despite these opportunities and the success of deep learning when applied to a range of neuroimaging tasks and domains, significant barriers continue to limit the impact of large-scale datasets and analysis tools. Here, we examine the main challenges and the approaches that have been explored to overcome them. We focus on issues relating to data availability, interpretability, evaluation, and logistical challenges and discuss the problems that still need to be tackled to enable the success of "big data" deep learning approaches beyond research.
Affiliation(s)
- Nicola K Dinsdale
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Oxford Machine Learning in NeuroImaging Lab, OMNI, Department of Computer Science, University of Oxford, Oxford, UK.
| | - Emma Bluemke
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
| | - Vaanathi Sundaresan
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, USA
| | - Mark Jenkinson
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Australian Institute for Machine Learning (AIML), School of Computer Science, University of Adelaide, Adelaide, SA, Australia; South Australian Health and Medical Research Institute (SAHMRI), North Terrace, Adelaide, SA, Australia
| | - Stephen M Smith
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
| | - Ana I L Namburete
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Oxford Machine Learning in NeuroImaging Lab, OMNI, Department of Computer Science, University of Oxford, Oxford, UK
| |
|
27
|
Rodrigues L, Rezende TJR, Wertheimer G, Santos Y, França M, Rittner L. A benchmark for hypothalamus segmentation on T1-weighted MR images. Neuroimage 2022; 264:119741. [PMID: 36368499 DOI: 10.1016/j.neuroimage.2022.119741] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2021] [Revised: 09/23/2022] [Accepted: 11/07/2022] [Indexed: 11/10/2022] Open
Abstract
The hypothalamus is a small brain structure that plays essential roles in sleep regulation, body temperature control, and metabolic homeostasis. Hypothalamic structural abnormalities have been reported in neuropsychiatric disorders, such as schizophrenia, amyotrophic lateral sclerosis, and Alzheimer's disease. Although magnetic resonance (MR) imaging is the standard examination method for evaluating this region, hypothalamic morphological landmarks are unclear, leading to subjectivity and high variability during manual segmentation. Due to these limitations, it is common to find contradicting results in the literature regarding hypothalamic volumetry. To the best of our knowledge, only two automated methods are available in the literature for hypothalamus segmentation, the first of which is our previous method based on U-Net. However, both methods present performance losses when predicting images from different datasets than those used in training. Therefore, this project presents a benchmark consisting of a diverse T1-weighted MR image dataset comprising 1381 subjects from IXI, CC359, OASIS, and MiLI (the latter created specifically for this benchmark). All data were provided using automatically generated hypothalamic masks and a subset containing manually annotated masks. As a baseline, a method for fully automated segmentation of the hypothalamus on T1-weighted MR images with a greater generalization ability is presented. The proposed method is a teacher-student-based model with two blocks: segmentation and correction, where the second corrects the imperfections of the first block. After using three datasets for training (MiLI, IXI, and CC359), the prediction performance of the model was measured on two test sets: the first was composed of data from IXI, CC359, and MiLI, achieving a Dice coefficient of 0.83; the second was from OASIS, a dataset not used for training, achieving a Dice coefficient of 0.74.
The dataset, the baseline model, and all necessary codes to reproduce the experiments are available at https://github.com/MICLab-Unicamp/HypAST and https://sites.google.com/view/calgary-campinas-dataset/hypothalamus-benchmarking. In addition, a leaderboard will be maintained with predictions for the test set submitted by anyone working on the same task.
Affiliation(s)
- Livia Rodrigues
- Medical Image Computing Lab, School of Electrical and Computer Engineering (FEEC), University of Campinas, Albert Einstein Street, 400, Campinas, SP 13083-887, Brazil.
| | - Thiago Junqueira Ribeiro Rezende
- Department of Neurology, School of Medical Sciences, University of Campinas, Tessalia Vieira de Camargo Street, 126, Campinas, SP 13083-887, Brazil
| | - Guilherme Wertheimer
- Department of Neurology, School of Medical Sciences, University of Campinas, Tessalia Vieira de Camargo Street, 126, Campinas, SP 13083-887, Brazil
| | - Yves Santos
- Department of Neurology, School of Medical Sciences, University of Campinas, Tessalia Vieira de Camargo Street, 126, Campinas, SP 13083-887, Brazil
| | - Marcondes França
- Department of Neurology, School of Medical Sciences, University of Campinas, Tessalia Vieira de Camargo Street, 126, Campinas, SP 13083-887, Brazil
| | - Leticia Rittner
- Medical Image Computing Lab, School of Electrical and Computer Engineering (FEEC), University of Campinas, Albert Einstein Street, 400, Campinas, SP 13083-887, Brazil
| |
|
28
|
Hu Q, Wei Y, Li X, Wang C, Li J, Wang Y. EA-Net: Edge-aware network for brain structure segmentation via decoupled high and low frequency features. Comput Biol Med 2022; 150:106139. [PMID: 36209556 DOI: 10.1016/j.compbiomed.2022.106139] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Revised: 09/08/2022] [Accepted: 09/18/2022] [Indexed: 11/21/2022]
Abstract
Automatic brain structure segmentation in Magnetic Resonance Imaging (MRI) plays an important role in the diagnosis of various neuropsychiatric diseases. However, most existing methods yield unsatisfactory results due to blurred boundaries and complex structures. Improving the segmentation ability requires the model to be explicit about the spatial localization and shape appearance of targets, which correspond to the low-frequency content features and the high-frequency edge features, respectively. Therefore, in this paper, to extract rich edge and content feature representations, we focus on the composition of the feature and utilize a frequency decoupling (FD) block to separate the low-frequency and high-frequency parts of the feature. Further, a novel edge-aware network (EA-Net) is proposed for jointly learning to segment brain structures and detect object edges. First, an encoder-decoder sub-network is utilized to obtain multi-level information from the input MRI, which is then sent to the FD block to complete the frequency separation. Further, we use different mechanisms to optimize both the low-frequency and high-frequency features. Finally, these two parts are fused to generate the final prediction. In particular, we extract the content mask and the edge mask from the optimization feature with different supervisions, which forces the network to learn the boundary features of the object. Extensive experiments are performed on two public brain MRI T1 scan datasets (the IBSR dataset and the MALC dataset) to evaluate the effectiveness of the proposed algorithm. The experiments show that the EA-Net achieves the best performance compared with the state-of-the-art methods, and improves the segmentation DSC score by up to 1.31% compared with the U-Net model and its variants. Moreover, we evaluate the EA-Net under different noise disturbances, and the results demonstrate the robustness and superiority of our method under low-quality noisy MRI.
Code is available at https://github.com/huqian999/EA-Net.
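The frequency-decoupling idea at the heart of the FD block can be illustrated outside any network: a low-pass filter keeps the smooth content, and the residual carries the edges. The sketch below is a minimal illustration using a plain box blur, not the learned FD block from the paper.

```python
import numpy as np

def frequency_decouple(img: np.ndarray, k: int = 3):
    """Split a 2-D array into a low-frequency (box-blurred) part and a
    high-frequency (residual) part, so that low + high == img exactly."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    low = np.zeros_like(img, dtype=float)
    for dy in range(k):                     # k x k box average
        for dx in range(k):
            low += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    low /= k * k
    high = img - low                        # edges live in the residual
    return low, high

img = np.zeros((8, 8))
img[:, 4:] = 1.0                            # a vertical step edge
low, high = frequency_decouple(img)
```

The residual `high` is zero in flat regions and peaks at the step edge, which is exactly the separation the content and edge branches rely on.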
Affiliation(s)
- Qian Hu
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China.
- Ying Wei
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China; Information Technology R&D Innovation Center of Peking University, Shaoxing, China; Changsha Hisense Intelligent System Research Institute Co., Ltd., China.
- Xiang Li
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China.
- Chuyuan Wang
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China.
- Jiaguang Li
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China.
- Yuefeng Wang
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China.
29
Cai Y, Wu J, Dai Q. Review on data analysis methods for mesoscale neural imaging in vivo. Neurophotonics 2022; 9:041407. [PMID: 35450225 PMCID: PMC9010663 DOI: 10.1117/1.nph.9.4.041407]
Abstract
Significance: Mesoscale neural imaging in vivo has gained great popularity in neuroscience for its capacity to record large-scale neurons in action. Optical imaging with single-cell resolution and millimeter-level field of view in vivo has been providing an accumulating database of neuron-behavior correspondence. Meanwhile, optical detection of neuron signals is easily contaminated by noise, background, crosstalk, and motion artifacts, while neural-level signal processing and network-level coordination are extremely complicated, leading to laborious and challenging signal processing demands. The existing data analysis procedure remains unstandardized, which can be daunting to neophytes or neuroscientists without a computational background. Aim: We hope to provide a general data analysis pipeline for mesoscale neural imaging shared between imaging modalities and systems. Approach: We divide the pipeline into two main stages. The first stage focuses on extracting high-fidelity neural responses at the single-cell level from raw images, including motion registration, image denoising, neuron segmentation, and signal extraction. The second stage focuses on data mining, including neural functional mapping, clustering, and brain-wide network deduction. Results: Here, we introduce the general pipeline for processing mesoscale neural images. We explain the principles of these procedures and compare different approaches and their application scopes, with detailed discussions of the shortcomings and remaining challenges. Conclusions: There are great challenges and opportunities brought by large-scale mesoscale data, such as the balance between fidelity and efficiency, increasing computational load, and neural network interpretability. We believe that global circuits at the single-neuron level will be more extensively explored in the future.
Affiliation(s)
- Yeyi Cai
- Tsinghua University, Department of Automation, Beijing, China
- Jiamin Wu
- Tsinghua University, Department of Automation, Beijing, China
- Qionghai Dai
- Tsinghua University, Department of Automation, Beijing, China
30
Haffner‐Staton E, Avanzini L, La Rocca A, Pfau SA, Cairns A. Automated particle recognition for engine soot nanoparticles. J Microsc 2022; 288:28-39. [PMID: 36065981 PMCID: PMC9826170 DOI: 10.1111/jmi.13140]
Abstract
A pre-trained convolutional neural network based on residual error functions (ResNet) was applied to the classification of soot and non-soot carbon nanoparticles in TEM images. Two depths of ResNet, one 18 layers deep and the other 50 layers deep, were trained using training-validation sets of increasing size (containing 100, 400 and 1400 images) and were assessed using an independent test set of 200 images. Network training was optimised in terms of mini-batch size, learning rate and training length. In all tests, ResNet18 and ResNet50 had statistically similar performances, though ResNet18 required only 25-35% of the training time of ResNet50. Training using the 100-, 400- and 1400-image training-validation sets led to classification accuracies of 84%, 88% and 95%, respectively. ResNet18 and ResNet50 were also compared for their ability to categorise soot and non-soot nanoparticles via a fivefold cross-validation experiment using the entire set of 800 images of soot and 800 images of non-soot. Cross-validation was repeated 3 times with different training durations. For all cross-validation experiments, classification accuracy exceeded 91%, with no statistical differences between any of the network trainings. The most efficient network was ResNet18 trained for 5 epochs, which reached 91.2% classification accuracy after only 84 s of training on 1600 images. Use of ResNet for classification of 1000 images, the amount suggested for reliable characterisation of a soot sample, requires <4 s, compared with >30 min for a skilled operator classifying images manually. Use of convolutional neural networks for classification of soot and non-soot nanoparticles in TEM images is highly promising, particularly when manually classified data sets have already been established.
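The fivefold cross-validation protocol described above (800 soot and 800 non-soot images, each fold held out once) can be sketched with a generic partitioning routine; this is an illustration of the experimental design, not the authors' code.

```python
import random

def kfold_indices(n: int, k: int = 5, seed: int = 0):
    """Yield (train, test) index lists for k-fold cross-validation:
    every sample appears in exactly one test fold."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]        # k near-equal folds
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

# 1600 images total (800 soot + 800 non-soot), as in the experiment
splits = list(kfold_indices(1600, k=5))
```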
Affiliation(s)
- E. Haffner‐Staton
- Department of Mechanical, Materials and Manufacturing Engineering, University of Nottingham, University Park, Nottinghamshire, UK
- L. Avanzini
- Department of Mechanical, Materials and Manufacturing Engineering, University of Nottingham, University Park, Nottinghamshire, UK
- A. La Rocca
- Department of Mechanical, Materials and Manufacturing Engineering, University of Nottingham, University Park, Nottinghamshire, UK
- S. A. Pfau
- Department of Mechanical, Materials and Manufacturing Engineering, University of Nottingham, University Park, Nottinghamshire, UK
- A. Cairns
- Department of Mechanical, Materials and Manufacturing Engineering, University of Nottingham, University Park, Nottinghamshire, UK
31
Casamitjana A, Iglesias JE. High-resolution atlasing and segmentation of the subcortex: Review and perspective on challenges and opportunities created by machine learning. Neuroimage 2022; 263:119616. [PMID: 36084858 DOI: 10.1016/j.neuroimage.2022.119616]
Abstract
This paper reviews almost three decades of work on atlasing and segmentation methods for subcortical structures in human brain MRI. In writing this survey, we have three distinct aims. First, to document the evolution of digital subcortical atlases of the human brain, from the early MRI templates published in the nineties, to the complex multi-modal atlases at the subregion level that are available today. Second, to provide a detailed record of related efforts in the automated segmentation front, from earlier atlas-based methods to modern machine learning approaches. And third, to present a perspective on the future of high-resolution atlasing and segmentation of subcortical structures in in vivo human brain MRI, including open challenges and opportunities created by recent developments in machine learning.
Affiliation(s)
- Adrià Casamitjana
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK.
- Juan Eugenio Iglesias
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK; Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Boston, USA
33
Pace DF, Dalca AV, Brosch T, Geva T, Powell AJ, Weese J, Moghari MH, Golland P. Learned iterative segmentation of highly variable anatomy from limited data: Applications to whole heart segmentation for congenital heart disease. Med Image Anal 2022; 80:102469. [PMID: 35640385 PMCID: PMC9617683 DOI: 10.1016/j.media.2022.102469]
Abstract
Training deep learning models that segment an image in one step typically requires a large collection of manually annotated images that captures the anatomical variability in a cohort. This poses challenges when anatomical variability is extreme but training data is limited, as when segmenting cardiac structures in patients with congenital heart disease (CHD). In this paper, we propose an iterative segmentation model and show that it can be accurately learned from a small dataset. Implemented as a recurrent neural network, the model evolves a segmentation over multiple steps, from a single user click until reaching an automatically determined stopping point. We develop a novel loss function that evaluates the entire sequence of output segmentations, and use it to learn model parameters. Segmentations evolve predictably according to growth dynamics encapsulated by training data, which consists of images, partially completed segmentations, and the recommended next step. The user can easily refine the final segmentation by examining those that are earlier or later in the output sequence. Using a dataset of 3D cardiac MR scans from patients with a wide range of CHD types, we show that our iterative model offers better generalization to patients with the most severe heart malformations.
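The evolve-until-stopping idea can be illustrated with a much simpler stand-in than the paper's recurrent network: grow a mask outward from a single seed click, one step at a time, until it stops changing. The toy sketch below uses intensity thresholding in place of the learned growth dynamics.

```python
import numpy as np

def iterative_segment(intensity, seed, thresh=0.5, max_steps=50):
    """Grow a segmentation from one seed pixel by repeated 4-neighbour
    dilation restricted to bright pixels, stopping when the mask no
    longer changes (a stand-in for the learned stopping criterion)."""
    mask = np.zeros(intensity.shape, dtype=bool)
    mask[seed] = True
    for step in range(max_steps):
        grown = mask.copy()
        grown[1:, :] |= mask[:-1, :]
        grown[:-1, :] |= mask[1:, :]
        grown[:, 1:] |= mask[:, :-1]
        grown[:, :-1] |= mask[:, 1:]
        grown &= intensity > thresh          # stay inside the structure
        if np.array_equal(grown, mask):      # converged: stop
            return mask, step
        mask = grown
    return mask, max_steps

img = np.zeros((10, 10))
img[2:6, 2:6] = 1.0                          # a bright square "structure"
mask, steps = iterative_segment(img, seed=(3, 3))
```

As in the paper, the intermediate masks (one per step) could be kept so a user can pick an earlier or later stage of the output sequence.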
Affiliation(s)
- Danielle F Pace
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA; A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.
- Adrian V Dalca
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA; A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Tom Brosch
- Philips Research Laboratories, Hamburg, Germany
- Tal Geva
- Department of Cardiology, Boston Children's Hospital, Boston, MA, USA; Department of Pediatrics, Harvard Medical School, Boston, MA, USA
- Andrew J Powell
- Department of Cardiology, Boston Children's Hospital, Boston, MA, USA; Department of Pediatrics, Harvard Medical School, Boston, MA, USA
- Mehdi H Moghari
- Department of Cardiology, Boston Children's Hospital, Boston, MA, USA; Department of Pediatrics, Harvard Medical School, Boston, MA, USA
- Polina Golland
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
34
Jäncke L, Valizadeh SA. Identification of individual subjects based on neuroanatomical measures obtained seven years earlier. Eur J Neurosci 2022; 56:4642-4652. [PMID: 35831945 PMCID: PMC9543309 DOI: 10.1111/ejn.15770]
Abstract
We analyzed a dataset comprising 118 subjects who were scanned three times (at baseline, 1-year follow-up, and 7-year follow-up) using structural MRI over the course of seven years. We aimed to examine whether it is possible to identify individual subjects based on a restricted number of neuroanatomical features measured 7 years previously. We used FreeSurfer to compute 15 standard brain measures (total intracranial volume (ICV), total cortical thickness (CT), total cortical surface area (CA), cortical gray matter (CoGM), cerebral white matter (CeWM), cerebellar cortex (CBGM), cerebellar white matter (CBWM), subcortical volumes [thalamus, putamen, pallidum, caudatus, hippocampus, amygdala, accumbens], and brain stem volume). We used linear discriminant analysis (LDA), random forest machine learning (RF), and a newly developed rule-based identification approach (RBIA) for the identification process. Using RBIA, different sets of neuroanatomical features (ranging from 2 to 14) obtained at baseline were combined by if-then rules and compared to the same set of neuroanatomical features derived from the 7-year follow-up measurement. We achieved excellent identification results with LDA, while the identification results for RF were very good but not perfect. The RBIA produced the best results, achieving perfect participant identification for some 4-feature sets. The identification results improved substantially when using larger feature sets, with 14 neuroanatomical features providing perfect identification. Thus, this study shows again that the human brain is highly individual in terms of neuroanatomical features. These results are discussed in the context of the current literature on brain plasticity and the scientific attempts to develop brain-fingerprinting techniques.
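The identification task itself, matching each follow-up scan to its baseline subject by a small neuroanatomical feature vector, can be sketched with a plain nearest-neighbour matcher on synthetic data. This illustrates the problem setup only, not the published LDA, RF, or RBIA procedures.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_features = 118, 4
baseline = rng.normal(size=(n_subjects, n_features))
# 7-year follow-up: same anatomy plus a small longitudinal change
followup = baseline + 0.05 * rng.normal(size=baseline.shape)

def identify(base, follow):
    """Match each follow-up scan to the closest baseline subject
    (Euclidean distance in feature space); return the hit rate."""
    d = np.linalg.norm(follow[:, None, :] - base[None, :, :], axis=-1)
    predicted = d.argmin(axis=1)
    return (predicted == np.arange(len(follow))).mean()

acc = identify(baseline, followup)
```

When between-subject variability dwarfs the longitudinal change, as here, identification is essentially perfect, mirroring the brain-fingerprinting result.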
Affiliation(s)
- L Jäncke
- Division Neuropsychology, Department of Psychology, University of Zurich, Zurich, Switzerland; University Research Priority Program "Dynamics of Healthy Aging", University of Zurich
- S A Valizadeh
- Division Neuropsychology, Department of Psychology, University of Zurich, Zurich, Switzerland
35
SM-SegNet: A Lightweight Squeeze M-SegNet for Tissue Segmentation in Brain MRI Scans. Sensors 2022; 22:s22145148. [PMID: 35890829 PMCID: PMC9319649 DOI: 10.3390/s22145148]
Abstract
In this paper, we propose a novel squeeze M-SegNet (SM-SegNet) architecture featuring a fire module to perform accurate and fast segmentation of the brain in magnetic resonance imaging (MRI) scans. The proposed model utilizes uniform input patches, combined-connections, long skip connections, and squeeze-expand convolutional layers from the fire module to segment brain MRI data. The proposed SM-SegNet architecture involves a multi-scale deep network on the encoder side and deep supervision on the decoder side, which uses combined-connections (skip connections and pooling indices) from the encoder to the decoder layer. The multi-scale side input layers support the deep network layers' extraction of discriminative feature information, and the decoder side provides deep supervision to reduce the gradient problem. By using combined-connections, extracted features can be transferred from the encoder to the decoder, recovering spatial information and making the model converge faster. Long skip connections were used to stabilize the gradient updates in the network. Owing to the adoption of the fire module, the proposed model was significantly faster to train and offered more efficient memory usage, with 83% fewer parameters than previously developed methods. The proposed method was evaluated using the open access series of imaging studies (OASIS) and the internet brain segmentation repository (IBSR) datasets. The experimental results demonstrate that the proposed SM-SegNet architecture achieves segmentation accuracies of 95% for cerebrospinal fluid, 95% for gray matter, and 96% for white matter, outperforming existing methods on both subjective and objective metrics in brain MRI segmentation.
36
Li X, Wei Y, Wang C, Hu Q, Liu C. Contextual-wise discriminative feature extraction and robust network learning for subcortical structure segmentation. Appl Intell 2022. [DOI: 10.1007/s10489-022-03848-y]
37
SVF-Net: spatial and visual feature enhancement network for brain structure segmentation. Appl Intell 2022. [DOI: 10.1007/s10489-022-03706-x]
38
Zhao YX, Zhang YM, Song M, Liu CL. Adaptable Global Network for Whole-Brain Segmentation with Symmetry Consistency Loss. Cognit Comput 2022. [DOI: 10.1007/s12559-022-10011-9]
39
Role of artificial intelligence in MS clinical practice. Neuroimage Clin 2022; 35:103065. [PMID: 35661470 PMCID: PMC9163993 DOI: 10.1016/j.nicl.2022.103065]
Abstract
For medical applications, machine learning (including deep learning) is the most commonly used artificial intelligence (AI) approach. It can improve multiple sclerosis (MS) diagnosis, prognostication, and treatment monitoring. Thanks to AI, MRI and cognitive phenotypes of MS patients have been identified. AI can shorten MRI protocols for MS, allowing the application of advanced techniques, and can reduce the human effort needed for MRI analysis, especially for lesion segmentation.
Machine learning (ML) and its subset, deep learning (DL), are branches of artificial intelligence (AI) showing promising findings in the medical field, especially when applied to imaging data. Given the substantial role of MRI in the diagnosis and management of patients with multiple sclerosis (MS), this disease is an ideal candidate for the application of AI techniques. In this narrative review, we discuss the potential applications of AI for MS clinical practice, together with their limitations. Among their several advantages, ML algorithms are able to automate repetitive tasks, to analyze more data in less time and to achieve higher accuracy and reproducibility than the human counterpart. To date, these algorithms have been applied to MS diagnosis, prognosis, disease and treatment monitoring. Other fields of application include the improvement of MRI protocols as well as automated lesion and tissue segmentation. However, several challenges remain, including a better understanding of the information selected by AI algorithms, appropriate multicenter and longitudinal validations of results and practical aspects regarding hardware and software integration. Finally, human supervision remains of paramount importance in order to optimize the use and take full advantage of the potential of AI approaches.
40
Wang Y, Haghpanah FS, Zhang X, Santamaria K, da Costa Aguiar Alves GK, Bruno E, Aw N, Maddocks A, Duarte CS, Monk C, Laine A, Posner J. ID-Seg: an infant deep learning-based segmentation framework to improve limbic structure estimates. Brain Inform 2022; 9:12. [PMID: 35633447 PMCID: PMC9148335 DOI: 10.1186/s40708-022-00161-9]
Abstract
Infant brain magnetic resonance imaging (MRI) is a promising approach for studying early neurodevelopment. However, segmenting small regions such as limbic structures is challenging due to their low inter-regional contrast and high curvature. MRI studies of the adult brain have successfully applied deep learning techniques to segment limbic structures, and similar deep learning models are being leveraged for infant studies. However, these deep learning-based infant MRI segmentation models have generally been derived from small datasets, and may suffer from generalization problems. Moreover, the accuracy of segmentations derived from these deep learning models relative to more standard Expectation-Maximization approaches has not been characterized. To address these challenges, we leveraged a large, public infant MRI dataset (n = 473) and the transfer-learning technique to first pre-train a deep convolutional neural network model on two limbic structures: amygdala and hippocampus. Then we used a leave-one-out cross-validation strategy to fine-tune the pre-trained model and evaluated it separately on two independent datasets with manual labels. We term this new approach the Infant Deep learning SEGmentation Framework (ID-Seg). ID-Seg performed well on both datasets with a mean dice similarity score (DSC) of 0.87, a mean intra-class correlation (ICC) of 0.93, and a mean average surface distance (ASD) of 0.31 mm. Compared to the Developmental Human Connectome pipeline (dHCP) pipeline, ID-Seg significantly improved segmentation accuracy. In a third infant MRI dataset (n = 50), we used ID-Seg and dHCP separately to estimate amygdala and hippocampus volumes and shapes. The estimates derived from ID-seg, relative to those from the dHCP, showed stronger associations with behavioral problems assessed in these infants at age 2. 
In sum, ID-Seg consistently performed well on two different datasets with a mean DSC of 0.87; however, multi-site testing and extension to brain regions beyond the amygdala and hippocampus are still needed.
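The Dice similarity coefficient (DSC) reported above is a standard overlap metric, DSC = 2|A∩B| / (|A| + |B|), and can be computed directly from two binary masks:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks
    (1.0 = perfect overlap, 0.0 = no overlap)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0                    # both masks empty: treat as perfect
    return 2.0 * np.logical_and(a, b).sum() / denom

manual = np.zeros((10, 10), dtype=bool); manual[2:8, 2:8] = True  # 36 voxels
auto = np.zeros((10, 10), dtype=bool); auto[3:9, 3:9] = True      # 36 voxels
score = dice(manual, auto)            # overlap 5x5=25 -> 2*25/(36+36)
```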
Affiliation(s)
- Yun Wang
- Department of Psychiatry and Behavioral Sciences, Duke University, Durham, NC, USA; New York State Psychiatric Institute, New York, NY, USA
- Xuzhe Zhang
- Department of Biomedical Engineering, Columbia University, New York, NY, USA
- Natalie Aw
- New York State Psychiatric Institute, New York, NY, USA
- Alexis Maddocks
- Department of Radiology, Columbia University, New York, NY, USA
- Catherine Monk
- New York State Psychiatric Institute, New York, NY, USA; Department of Obstetrics and Gynecology, Columbia University, New York, NY, USA
- Andrew Laine
- Department of Biomedical Engineering, Columbia University, New York, NY, USA
- Jonathan Posner
- Department of Psychiatry and Behavioral Sciences, Duke University, Durham, NC, USA; New York State Psychiatric Institute, New York, NY, USA
41
Manjón JV, Romero JE, Vivo-Hernando R, Rubio G, Aparici F, de la Iglesia-Vaya M, Coupé P. vol2Brain: A New Online Pipeline for Whole Brain MRI Analysis. Front Neuroinform 2022; 16:862805. [PMID: 35685943 PMCID: PMC9171328 DOI: 10.3389/fninf.2022.862805]
Abstract
Automatic and reliable quantitative tools for MR brain image analysis are a very valuable resource for both clinical and research environments. In the past few years, this field has experienced many advances with successful techniques based on label fusion and, more recently, deep learning. However, few of them have been specifically designed to provide dense anatomical labeling at the multiscale level and to deal with brain anatomical alterations such as white matter lesions (WML). In this work, we present a fully automatic pipeline (vol2Brain) for whole brain segmentation and analysis, which densely labels (N > 100) the brain while being robust to the presence of WML. This new pipeline is an evolution of our previous volBrain pipeline that significantly extends the number of regions that can be analyzed. Our proposed method is based on a fast and multiscale multi-atlas label fusion technology with systematic error correction able to provide accurate volumetric information in a few minutes. We have deployed our new pipeline within our platform volBrain (www.volbrain.upv.es), which has already been demonstrated to be an efficient and effective way to share our technology with users worldwide.
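Multi-atlas label fusion, the core technology behind the pipeline, can be illustrated in its simplest form as a per-voxel majority vote over candidate segmentations produced by registered atlases. volBrain's actual patch-based fusion with systematic error correction is considerably more sophisticated; this is only a conceptual sketch.

```python
import numpy as np

def majority_vote_fusion(atlas_labels: np.ndarray) -> np.ndarray:
    """Fuse candidate segmentations from several registered atlases by
    per-voxel majority vote. atlas_labels: (n_atlases, *volume_shape)."""
    n_atlases = atlas_labels.shape[0]
    flat = atlas_labels.reshape(n_atlases, -1)
    n_labels = int(flat.max()) + 1
    votes = np.zeros((n_labels, flat.shape[1]), dtype=np.int32)
    for a in range(n_atlases):               # count votes per label
        votes[flat[a], np.arange(flat.shape[1])] += 1
    return votes.argmax(axis=0).reshape(atlas_labels.shape[1:])

# three toy "atlas" segmentations of a 2x2 image with labels {0, 1}
atlases = np.array([[[0, 1], [1, 1]],
                    [[0, 1], [0, 1]],
                    [[1, 1], [1, 0]]])
fused = majority_vote_fusion(atlases)
```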
Affiliation(s)
- José V. Manjón
- Instituto de Aplicaciones de las Tecnologías de la Información y de las Comunicaciones Avanzadas (ITACA), Universitat Politècnica de València, Valencia, Spain
- José E. Romero
- Instituto de Aplicaciones de las Tecnologías de la Información y de las Comunicaciones Avanzadas (ITACA), Universitat Politècnica de València, Valencia, Spain
- Roberto Vivo-Hernando
- Instituto de Automática e Informática Industrial, Universitat Politècnica de València, Valencia, Spain
- Gregorio Rubio
- Departamento de Matemática Aplicada, Universitat Politècnica de València, Valencia, Spain
- Fernando Aparici
- Área de Imagen Medica, Hospital Universitario y Politécnico La Fe, Valencia, Spain
- Mariam de la Iglesia-Vaya
- Unidad Mixta de Imagen Biomédica FISABIO-CIPF, Fundación Para el Fomento de la Investigación Sanitario y Biomédica de la Comunidad Valenciana, Valencia, Spain; Centro de Investigación Biomédica en Red de Salud Mental, ISC III, València, Spain
- Pierrick Coupé
- Centre National de la Recherche Scientifique, Univ. Bordeaux, Bordeaux INP, Laboratoire Bordelais de Recherche en Informatique, UMR5800, PICTURA, Talence, France
42
Jung C, Abuhamad M, Mohaisen D, Han K, Nyang D. WBC image classification and generative models based on convolutional neural network. BMC Med Imaging 2022; 22:94. [PMID: 35596153 PMCID: PMC9121596 DOI: 10.1186/s12880-022-00818-1]
Abstract
Background: Computer-aided methods for analyzing white blood cells (WBC) are popular due to the complexity of the manual alternatives. Recent works have shown highly accurate segmentation and detection of white blood cells from microscopic blood images. However, the classification of the observed cells is still a challenge, in part due to the distribution of the five types that affect the condition of the immune system. Methods: (i) This work proposes W-Net, a CNN-based method for WBC classification. We evaluate W-Net on a real-world large-scale dataset that includes 6562 real images of the five WBC types. (ii) For further benefits, we generate synthetic WBC images using a Generative Adversarial Network to be used for education and research purposes through sharing. Results: (i) W-Net achieves an average accuracy of 97%. In comparison to state-of-the-art methods in the field of WBC classification, we show that W-Net outperforms other CNN- and RNN-based model architectures. Moreover, we show the benefits of using a pre-trained W-Net in a transfer learning context when fine-tuned to a specific task or accommodating another dataset. (ii) The synthetic WBC images are confirmed by experiments and a domain expert to have a high degree of similarity to the original images. The pre-trained W-Net and the generated WBC dataset are available to the community to facilitate reproducibility and follow-up research work. Conclusion: This work proposed W-Net, a CNN-based architecture with a small number of layers, to accurately classify the five WBC types. We evaluated W-Net on a real-world large-scale dataset and addressed several challenges such as the transfer learning property and the class imbalance. W-Net achieved an average classification accuracy of 97%. We synthesized a dataset of new WBC image samples using DCGAN, which we released to the public for education and research purposes.
Affiliation(s)
- Changhun Jung
- Department of Cyber Security, Ewha Womans University, 52, Ewhayeodae-gil, Seodaemun-gu, Seoul, 03760, Republic of Korea
- Mohammed Abuhamad
- Department of Computer Science, Loyola University Chicago, 1032 W Sheridan Rd, Chicago, 60660, USA
- David Mohaisen
- Department of Computer Science, University of Central Florida, 4000 Central Florida Blvd, Orlando, FL, 32816, USA
- Kyungja Han
- Department of Laboratory Medicine and College of Medicine, The Catholic University of Korea Seoul St. Mary's Hospital, 222, Banpo-daero, Seocho-gu, Seoul, 06591, Republic of Korea
- DaeHun Nyang
- Department of Cyber Security, Ewha Womans University, 52, Ewhayeodae-gil, Seodaemun-gu, Seoul, 03760, Republic of Korea.
43
Enhanced Pre-Processing for Deep Learning in MRI Whole Brain Segmentation using Orthogonal Moments. Brain Multiphysics 2022. [DOI: 10.1016/j.brain.2022.100049]
44
Henschel L, Kügler D, Reuter M. FastSurferVINN: Building resolution-independence into deep learning segmentation methods-A solution for HighRes brain MRI. Neuroimage 2022; 251:118933. [PMID: 35122967 PMCID: PMC9801435 DOI: 10.1016/j.neuroimage.2022.118933]
Abstract
Leading neuroimaging studies have pushed 3T MRI acquisition resolutions below 1.0 mm for improved structure definition and morphometry. Yet, only a few time-intensive automated image analysis pipelines have been validated for high-resolution (HiRes) settings. Efficient deep learning approaches, on the other hand, rarely support more than one fixed resolution (usually 1.0 mm). Furthermore, the lack of a standard submillimeter resolution as well as limited availability of diverse HiRes data with sufficient coverage of scanner, age, diseases, or genetic variance poses additional, unsolved challenges for training HiRes networks. Incorporating resolution-independence into deep learning-based segmentation, i.e., the ability to segment images at their native resolution across a range of different voxel sizes, promises to overcome these challenges, yet no such approach currently exists. We now fill this gap by introducing a Voxel-size Independent Neural Network (VINN) for resolution-independent segmentation tasks and present FastSurferVINN, which (i) establishes and implements resolution-independence for deep learning as the first method simultaneously supporting 0.7-1.0 mm whole brain segmentation, (ii) significantly outperforms state-of-the-art methods across resolutions, and (iii) mitigates the data imbalance problem present in HiRes datasets. Overall, internal resolution-independence mutually benefits both HiRes and 1.0 mm MRI segmentation. With our rigorously validated FastSurferVINN we distribute a rapid tool for morphometric neuroimage analysis. The VINN architecture, furthermore, represents an efficient resolution-independent segmentation method for wider application.
Collapse
Affiliation(s)
- Leonie Henschel
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
| | - David Kügler
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
| | - Martin Reuter
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany; A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA.
| |
Collapse
|
45
|
Xiao D, Le TTG, Doan TT, Le BT. Coal identification based on a deep network and reflectance spectroscopy. SPECTROCHIMICA ACTA. PART A, MOLECULAR AND BIOMOLECULAR SPECTROSCOPY 2022; 270:120859. [PMID: 35033804 DOI: 10.1016/j.saa.2022.120859] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Revised: 12/16/2021] [Accepted: 01/02/2022] [Indexed: 06/14/2023]
Abstract
The rapid identification of coal types in the field is an important task. This research combines spectroscopy with deep learning algorithms and proposes a method for quickly identifying coal types in the field. First, we collect field spectral data of various coals and preprocess the spectra. Then, a coal identification model that uses a convolutional neural network in combination with an extreme learning machine is proposed. The two-dimensional spectral features of coal are extracted through the convolutional neural network, and the extreme learning machine is used as a classifier to identify the features. To further improve the identification performance of the model, we use the whale optimization algorithm to optimize the parameters of the model. The experimental results show that the proposed method can quickly and accurately identify types of coal. It provides a low-cost, convenient, and effective method for the rapid identification of coal in the field.
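As an illustrative aside (not code from the paper): the classifier stage of the pipeline above is an extreme learning machine, i.e., a single hidden layer with random fixed weights and a closed-form least-squares readout. A minimal sketch on toy "spectral feature" vectors, with all names and dimensions hypothetical (the paper's CNN feature extractor and whale-optimization step are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

class ELM:
    """Extreme learning machine: random, untrained hidden layer followed by
    output weights solved in closed form via the pseudo-inverse."""
    def __init__(self, n_hidden=32):
        self.n_hidden = n_hidden

    def fit(self, X, y):
        # Random input-to-hidden projection, fixed at initialization.
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        # One-hot targets; output weights from a single least-squares solve.
        T = np.eye(int(y.max()) + 1)[y]
        self.beta = np.linalg.pinv(H) @ T
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return np.argmax(H @ self.beta, axis=1)

# Two well-separated toy classes standing in for CNN-extracted coal features.
X = np.vstack([rng.normal(0.0, 0.3, (50, 10)), rng.normal(2.0, 0.3, (50, 10))])
y = np.array([0] * 50 + [1] * 50)
acc = (ELM().fit(X, y).predict(X) == y).mean()
print(acc)
```

The absence of iterative hidden-layer training is what makes the ELM fast enough for field use, and its remaining free parameters are what the whale optimization algorithm tunes in the paper.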
Collapse
Affiliation(s)
- Dong Xiao
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China; Key Laboratory of Intelligent Diagnosis and Safety for Metallurgical Industry, Liaoning Province, Northeastern University, Shenyang 110819, China
| | - Thi Tra Giang Le
- Training Department, Institute of Science and Technology, Hoang Sam 100000, Ha Noi, Viet Nam
| | | | - Ba Tuan Le
- Control, Automation in Production and Improvement of Technology Institute (CAPITI), Hanoi, 100000, Viet Nam.
| |
Collapse
|
46
|
Yang Q, Hansen CB, Cai LY, Rheault F, Lee HH, Bao S, Chandio BQ, Williams O, Resnick SM, Garyfallidis E, Anderson AW, Descoteaux M, Schilling KG, Landman BA. Learning white matter subject-specific segmentation from structural MRI. Med Phys 2022; 49:2502-2513. [PMID: 35090192 PMCID: PMC9053869 DOI: 10.1002/mp.15495] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2021] [Revised: 12/20/2021] [Accepted: 01/10/2022] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Mapping brain white matter (WM) is essential for building an understanding of brain anatomy and function. Tractography-based methods derived from diffusion-weighted MRI (dMRI) are the principal tools for investigating WM. These procedures rely on time-consuming dMRI acquisitions that may not always be available, especially for legacy or time-constrained studies. To address this problem, we aim to generate WM tracts from structural magnetic resonance imaging (MRI) images using deep learning. METHODS Following recently proposed innovations in structural anatomical segmentation, we evaluate the feasibility of training multiple spatially localized convolutional neural networks to learn context from fixed spatial patches of structural MRI on a standard template. We focus on six widely used dMRI tractography algorithms (TractSeg, RecoBundles, XTRACT, Tracula, automated fiber quantification (AFQ), and AFQclipped) and train 125 U-Net models to learn these techniques from 3870 T1-weighted images from the Baltimore Longitudinal Study of Aging, the Human Connectome Project S1200 release, and scans acquired at Vanderbilt University. RESULTS The proposed framework identifies fiber bundles with high agreement against tractography-based pathways, with median Dice coefficients from 0.62 to 0.87 on a test cohort, achieving improved subject-specific accuracy when compared to population atlas-based methods. We demonstrate the generalizability of the proposed framework on three externally available datasets. CONCLUSIONS We show that patch-wise convolutional neural networks can achieve robust bundle segmentation from T1-weighted images. We envision the use of this framework for visualizing the expected course of WM pathways when dMRI is not available.
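As an illustrative aside (not code from the paper): the "fixed spatial patches on a standard template" idea reduces to tiling a template-registered T1-weighted volume into a regular grid, with one patch-local model responsible for each location. A minimal sketch of the tiling step, with all sizes hypothetical:

```python
import numpy as np

def extract_patches(volume, patch=32, stride=32):
    """Tile a template-registered volume into fixed, non-overlapping spatial
    patches. In the paper's setting, each fixed location would get its own
    U-Net (125 models in total); here we only build the patch grid."""
    patches, coords = [], []
    for x in range(0, volume.shape[0] - patch + 1, stride):
        for y in range(0, volume.shape[1] - patch + 1, stride):
            for z in range(0, volume.shape[2] - patch + 1, stride):
                patches.append(volume[x:x + patch, y:y + patch, z:z + patch])
                coords.append((x, y, z))
    return np.stack(patches), coords

vol = np.zeros((96, 96, 96), dtype=np.float32)   # toy template-space T1w
patches, coords = extract_patches(vol)
print(patches.shape)  # (27, 32, 32, 32): a 3x3x3 grid of fixed patches
```

Because the patch locations are fixed in template space, each local network always sees the same anatomical neighbourhood, which is what lets it specialize on the bundles passing through that region.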
Collapse
Affiliation(s)
- Qi Yang
- Department of Computer Science, Vanderbilt University, Nashville, Tennessee, USA
| | - Colin B. Hansen
- Department of Computer Science, Vanderbilt University, Nashville, Tennessee, USA
| | - Leon Y. Cai
- Department of Biomedical Engineering, Vanderbilt University, Nashville, Tennessee, USA
| | - Francois Rheault
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, Tennessee, USA
| | - Ho Hin Lee
- Department of Computer Science, Vanderbilt University, Nashville, Tennessee, USA
| | - Shunxing Bao
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, Tennessee, USA
| | - Bramsh Qamar Chandio
- Department of Intelligent Systems Engineering, Indiana University, Bloomington, Indiana, USA
| | - Owen Williams
- Laboratory of Behavioral Neuroscience, National Institute on Aging, Baltimore, Maryland, USA
| | - Susan M. Resnick
- Laboratory of Behavioral Neuroscience, National Institute on Aging, Baltimore, Maryland, USA
| | - Eleftherios Garyfallidis
- Department of Intelligent Systems Engineering, Indiana University, Bloomington, Indiana, USA; Program of Neuroscience, Indiana University, Bloomington, Indiana, USA
| | - Adam W. Anderson
- Department of Biomedical Engineering, Vanderbilt University, Nashville, Tennessee, USA; Vanderbilt University Institute of Imaging Science, Vanderbilt University Medical Center, Nashville, Tennessee, USA
| | - Maxime Descoteaux
- Sherbrooke Connectivity Imaging Laboratory (SCIL), Université de Sherbrooke, Sherbrooke, Canada
| | - Kurt G. Schilling
- Vanderbilt University Institute of Imaging Science, Vanderbilt University Medical Center, Nashville, Tennessee, USA; Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, USA
| | - Bennett A. Landman
- Department of Computer Science, Vanderbilt University, Nashville, Tennessee, USA; Department of Biomedical Engineering, Vanderbilt University, Nashville, Tennessee, USA; Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, Tennessee, USA; Vanderbilt University Institute of Imaging Science, Vanderbilt University Medical Center, Nashville, Tennessee, USA; Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, USA
| |
Collapse
|
47
|
Deep 3D Neural Network for Brain Structures Segmentation Using Self-Attention Modules in MRI Images. SENSORS 2022; 22:s22072559. [PMID: 35408173 PMCID: PMC9002763 DOI: 10.3390/s22072559] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Revised: 03/15/2022] [Accepted: 03/21/2022] [Indexed: 01/03/2023]
Abstract
In recent years, the use of deep learning-based models for developing advanced healthcare systems has been growing due to the results they can achieve. However, the majority of the proposed deep learning models largely use convolutional and pooling operations, causing a loss of valuable data and focusing on local information. In this paper, we propose a deep learning-based approach that uses global and local features, which are of importance in the medical image segmentation process. To train the architecture, we used three-dimensional (3D) blocks extracted at the full magnetic resonance image resolution, which were sent through a set of successive convolutional neural network (CNN) layers free of pooling operations to extract local information. Later, we sent the resulting feature maps to successive layers of self-attention modules to obtain the global context, whose output was later dispatched to the decoder pipeline composed mostly of upsampling layers. The model was trained using the Mindboggle-101 dataset. The experimental results showed that the self-attention modules allow segmentation with a higher mean Dice score of 0.90 ± 0.036 compared with other UNet-based approaches. The average segmentation time was approximately 0.038 s per brain structure. The proposed model tackles the brain structure segmentation task properly. Exploiting the global context that the self-attention modules incorporate allows for more precise and faster segmentation. We segmented 37 brain structures and, to the best of our knowledge, this is the largest number of structures segmented under a 3D approach using attention mechanisms.
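As an illustrative aside (not code from the paper): the global context that self-attention contributes comes from letting every spatial position attend to every other position, in contrast to a convolution's local receptive field. A minimal, untrained single-head sketch over a handful of voxel feature vectors, with all names and sizes hypothetical:

```python
import numpy as np

def self_attention(x):
    """Single-head self-attention over a sequence of voxel features.
    Each output row is a weighted mix of ALL input rows, so every position
    receives global context. This untrained sketch shares the query, key,
    and value projections; a real module would learn separate ones."""
    d = x.shape[-1]
    q = k = v = x
    scores = q @ k.T / np.sqrt(d)           # pairwise similarity, scaled
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)      # softmax over all positions
    return w @ v                            # globally mixed features

feat = np.random.default_rng(0).normal(size=(8, 4))  # 8 voxels, 4 channels
out = self_attention(feat)
print(out.shape)  # (8, 4)
```

The quadratic cost of the pairwise score matrix is why the paper applies attention to downsampled feature maps rather than to the raw 3D blocks.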
Collapse
|
48
|
A survey of deep learning methods for multiple sclerosis identification using brain MRI images. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07099-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
|
49
|
The Application of Wavelet Transform of Raman Spectra to Facilitate Transfer Learning for Gasoline Detection and Classification. TALANTA OPEN 2022. [DOI: 10.1016/j.talo.2022.100106] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2023] Open
|
50
|
Parida PK, Dora L, Swain M, Agrawal S, Panda R. Data science methodologies in smart healthcare: a review. HEALTH AND TECHNOLOGY 2022. [DOI: 10.1007/s12553-022-00648-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|