1
Xu X, Bu Q, Xie J, Li H, Xu F, Li J. On-site burn severity assessment using smartphone-captured color burn wound images. Comput Biol Med 2024; 182:109171. [PMID: 39362001] [DOI: 10.1016/j.compbiomed.2024.109171]
Abstract
Accurate assessment of burn severity is crucial for the management of burn injuries. Currently, clinicians mainly rely on visual inspection to assess burns, a process characterized by notable inter-observer discrepancies. In this study, we introduce an innovative analysis platform that uses color burn wound images for automatic burn severity assessment. To this end, we propose a novel joint-task deep learning model capable of simultaneously segmenting both burn regions and body parts, the two crucial components in calculating the percentage of total body surface area (%TBSA). An asymmetric attention mechanism is introduced, allowing attention guidance from the body part segmentation task to the burn region segmentation task. A user-friendly mobile application is developed to facilitate fast assessment of burn severity in clinical settings. The proposed framework was evaluated on a dataset comprising 1340 color burn wound images captured on-site in clinical settings. The average Dice coefficients for burn depth segmentation and body part segmentation are 85.12% and 85.36%, respectively. The R2 for %TBSA assessment is 0.9136. The source code for the joint-task framework and the application is released on GitHub (https://github.com/xjtu-mia/BurnAnalysis). The proposed platform holds the potential to be widely used in clinical settings to facilitate fast and precise burn assessment.
Affiliation(s)
- Xiayu Xu
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, 710049, China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi'an Jiaotong University, Xi'an, 710049, China
- Qilong Bu
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, 710049, China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi'an Jiaotong University, Xi'an, 710049, China
- Jingmeng Xie
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, 710049, China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi'an Jiaotong University, Xi'an, 710049, China
- Hang Li
- Department of Burns and Plastic Surgery, Tangdu Hospital, The Air Force Military Medical University, Xi'an, 710038, Shaanxi, China
- Feng Xu
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an, 710049, China; Bioinspired Engineering and Biomechanics Center (BEBC), Xi'an Jiaotong University, Xi'an, 710049, China
- Jing Li
- Department of Burns and Plastic Surgery, Tangdu Hospital, The Air Force Military Medical University, Xi'an, 710038, Shaanxi, China
2
Cao S, Hu Z, Xie X, Wang Y, Yu J, Yang B, Shi Z, Wu G. Integrated diagnosis of glioma based on magnetic resonance images with incomplete ground truth labels. Comput Biol Med 2024; 180:108968. [PMID: 39106670] [DOI: 10.1016/j.compbiomed.2024.108968]
Abstract
BACKGROUND: Since the 2016 WHO guidelines, glioma diagnosis has entered an era of integrated diagnosis, combining tissue pathology and molecular pathology. The WHO has focused on promoting the application of molecular diagnosis in the classification of central nervous system tumors. Genetic markers such as IDH1 and 1p/19q status are important molecular indicators, and pathological grading is also a key clinical indicator. However, obtaining genetic pathology labels is more costly than acquiring conventional MRI images, resulting in a large number of missing labels in real-world modeling.
METHOD: We propose a training strategy based on label encoding, together with a corresponding loss function, to enable the model to effectively utilize data with missing labels. Additionally, we integrate a graph model encoding gene- and pathology-related clinical prior knowledge into the ResNet backbone to further improve diagnostic efficacy. Ten-fold cross-validation experiments were conducted on a large dataset of 1072 patients.
RESULTS: The classification area under the curve (AUC) values are 0.93, 0.91, and 0.90 for IDH1 status, 1p/19q status, and grade (LGG/HGG), respectively. When the label miss rate reached 59.3%, the method improved the AUC by 0.09, 0.10, and 0.04 for IDH1, 1p/19q, and pathological grade, respectively, compared to the same backbone without the missing-label strategy.
CONCLUSIONS: Our method effectively utilizes data with missing labels and integrates clinical prior knowledge, resulting in improved diagnostic performance for glioma genetic and pathological markers, even with high rates of missing labels.
Affiliation(s)
- Shiwen Cao
- School of Information Science and Technology, Fudan University, Shanghai, China
- Zhaoyu Hu
- School of Information Science and Technology, Fudan University, Shanghai, China
- Xuan Xie
- School of Information Science and Technology, Fudan University, Shanghai, China
- Yuanyuan Wang
- School of Information Science and Technology, Fudan University, Shanghai, China
- Jinhua Yu
- School of Information Science and Technology, Fudan University, Shanghai, China
- Bojie Yang
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Zhifeng Shi
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Guoqing Wu
- School of Information Science and Technology, Fudan University, Shanghai, China
3
D S CS, Christopher Clement J. Enhancing brain tumor segmentation in MRI images using the IC-net algorithm framework. Sci Rep 2024; 14:15660. [PMID: 38977779] [PMCID: PMC11231217] [DOI: 10.1038/s41598-024-66314-4]
Abstract
Brain tumors, often referred to as intracranial tumors, are abnormal tissue masses that arise from rapidly multiplying cells. During medical imaging, it is essential to separate brain tumors from healthy tissue. The goal of this paper is to improve the accuracy of separating tumorous regions from healthy tissue in medical imaging, specifically for brain tumors in MRI images, a difficult problem in the field of medical image analysis. In this work, we propose IC-Net (Inverted-C), a novel semantic segmentation architecture that combines elements from various models to provide effective and precise results. The architecture includes Multi-Attention (MA) blocks, Feature Concatenation Network (FCN) blocks, and attention blocks, which perform crucial tasks in improving brain tumor segmentation. The MA block aggregates multi-attention features to adapt to different tumor sizes and shapes. The attention block focuses on key regions, resulting in more effective segmentation of complex images. The FCN block captures diverse features, making the model more robust to the varied characteristics of brain tumor images. The proposed architecture also accelerates the training process and addresses the challenges posed by the diverse nature of brain tumor images, ultimately leading to improved segmentation performance. IC-Net significantly outperforms the typical U-Net architecture and other contemporary segmentation techniques. On the BraTS 2020 dataset, IC-Net achieved accuracy of 99.65, loss of 0.0159, specificity of 99.44, and sensitivity of 99.86, with DSC values of 0.998717, 0.888930, and 0.866183 for core, whole, and enhancing tumors, respectively.
Affiliation(s)
- Chandra Sekaran D S
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamilnadu, 632014, India
- J Christopher Clement
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamilnadu, 632014, India
4
Li Y, El Habib Daho M, Conze PH, Zeghlache R, Le Boité H, Tadayoni R, Cochener B, Lamard M, Quellec G. A review of deep learning-based information fusion techniques for multimodal medical image classification. Comput Biol Med 2024; 177:108635. [PMID: 38796881] [DOI: 10.1016/j.compbiomed.2024.108635]
Abstract
Multimodal medical imaging plays a pivotal role in clinical diagnosis and research, as it combines information from various imaging modalities to provide a more comprehensive understanding of the underlying pathology. Recently, deep learning-based multimodal fusion techniques have emerged as powerful tools for improving medical image classification. This review offers a thorough analysis of developments in deep learning-based multimodal fusion for medical classification tasks. We explore the complementary relationships among prevalent clinical modalities and outline three main fusion schemes for multimodal classification networks: input fusion, intermediate fusion (encompassing single-level fusion, hierarchical fusion, and attention-based fusion), and output fusion. By evaluating the performance of these fusion techniques, we provide insight into the suitability of different network architectures for various multimodal fusion scenarios and application domains. Furthermore, we delve into challenges related to network architecture selection, the management of incomplete multimodal data, and the potential limitations of multimodal fusion. Finally, we spotlight the promising future of Transformer-based multimodal fusion techniques and give recommendations for future research in this rapidly evolving field.
Affiliation(s)
- Yihao Li
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Mostafa El Habib Daho
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Rachid Zeghlache
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Hugo Le Boité
- Sorbonne University, Paris, France; Ophthalmology Department, Lariboisière Hospital, AP-HP, Paris, France
- Ramin Tadayoni
- Ophthalmology Department, Lariboisière Hospital, AP-HP, Paris, France; Paris Cité University, Paris, France
- Béatrice Cochener
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France; Ophthalmology Department, CHRU Brest, Brest, France
- Mathieu Lamard
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
5
Yu D, Zhong Q, Xiao Y, Feng Z, Tang F, Feng S, Cai Y, Gao Y, Lan T, Li M, Yu F, Wang Z, Gao X, Li Z. Combination of MRI-based prediction and CRISPR/Cas12a-based detection for IDH genotyping in glioma. NPJ Precis Oncol 2024; 8:140. [PMID: 38951603] [PMCID: PMC11217299] [DOI: 10.1038/s41698-024-00632-8]
Abstract
Early identification of IDH mutation status is of great significance in clinical therapeutic decision-making in the treatment of glioma. We demonstrate a technological solution that improves the accuracy and reliability of IDH mutation detection by combining MRI-based prediction with a CRISPR-based automatic integrated gene detection system (AIGS). A model was constructed to predict IDH mutation status from whole slices in MRI scans with a Transformer neural network, and the predictive model achieved accuracies of 0.93, 0.87, and 0.84 on the internal test set and the two external test sets, respectively. Additionally, a CRISPR/Cas12a-based AIGS was constructed, which achieved 100% diagnostic accuracy for IDH detection using both frozen tissue and FFPE samples within one hour. Moreover, the feature attribution of our predictive model was assessed using GradCAM, and GradCAM importance showed the highest correlations with tumor cell percentages in enhancing and IDH-wildtype gliomas (0.65 and 0.5, respectively). This MRI-based predictive model could therefore guide biopsy toward tumor-enriched regions, ensuring the veracity and stability of the rapid detection results. The combination of our predictive model and AIGS improved the early determination of IDH mutation status in glioma patients. This combined system of MRI-based prediction and CRISPR/Cas12a-based detection can be used to guide biopsy, resection, and radiation for glioma patients to improve patient outcomes.
Affiliation(s)
- Donghu Yu
- Brain Glioma Center & Department of Neurosurgery, Zhongnan Hospital of Wuhan University, Wuhan, China
- Qisheng Zhong
- Department of Neurosurgery, 960 Hospital of PLA, Jinan, Shandong, China
- Yilei Xiao
- Department of Neurosurgery, Liaocheng People's Hospital, Liaocheng, China
- Zhebin Feng
- Department of Neurosurgery, PLA General Hospital, Beijing, China
- Feng Tang
- Brain Glioma Center & Department of Neurosurgery, Zhongnan Hospital of Wuhan University, Wuhan, China
- Shiyu Feng
- Department of Neurosurgery, PLA General Hospital, Beijing, China
- Yuxiang Cai
- Department of Pathology, Zhongnan Hospital of Wuhan University, Wuhan, China
- Yutong Gao
- Department of Prosthodontics, Wuhan University Hospital of Stomatology, Wuhan, China
- Tian Lan
- Brain Glioma Center & Department of Neurosurgery, Zhongnan Hospital of Wuhan University, Wuhan, China
- Mingjun Li
- Department of Radiology, Liaocheng People's Hospital, Liaocheng, China
- Fuhua Yu
- Department of Neurosurgery, Liaocheng People's Hospital, Liaocheng, China
- Zefen Wang
- Department of Physiology, Wuhan University School of Basic Medical Sciences, Wuhan, China
- Xu Gao
- Department of Neurosurgery, General Hospital of Northern Theater Command, Shenyang, China
- Zhiqiang Li
- Brain Glioma Center & Department of Neurosurgery, Zhongnan Hospital of Wuhan University, Wuhan, China
6
Zhu J, Bolsterlee B, Chow BVY, Song Y, Meijering E. Hybrid dual mean-teacher network with double-uncertainty guidance for semi-supervised segmentation of magnetic resonance images. Comput Med Imaging Graph 2024; 115:102383. [PMID: 38643551] [DOI: 10.1016/j.compmedimag.2024.102383]
Abstract
Semi-supervised learning has made significant progress in medical image segmentation. However, existing methods primarily utilize information from a single dimensionality, resulting in sub-optimal performance on challenging magnetic resonance imaging (MRI) data with multiple segmentation objects and anisotropic resolution. To address this issue, we present a Hybrid Dual Mean-Teacher (HD-Teacher) model with hybrid, semi-supervised, and multi-task learning to achieve effective semi-supervised segmentation. HD-Teacher employs a 2D and a 3D mean-teacher network to produce segmentation labels and signed distance fields from the hybrid information captured in both dimensionalities. This hybrid mechanism allows HD-Teacher to utilize features from 2D, 3D, or both dimensions as needed. Outputs from the 2D and 3D teacher models are dynamically combined based on confidence scores, forming a single hybrid prediction with estimated uncertainty. We propose a hybrid regularization module that encourages both student models to produce results close to the uncertainty-weighted hybrid prediction, further improving their feature extraction capability. Extensive experiments on binary and multi-class segmentation conducted on three MRI datasets demonstrated that the proposed framework can (1) significantly outperform state-of-the-art semi-supervised methods, (2) surpass a fully-supervised VNet trained on substantially more annotated data, and (3) perform on par with human raters on muscle and bone segmentation tasks. Code will be available at https://github.com/ThisGame42/Hybrid-Teacher.
Affiliation(s)
- Jiayi Zhu
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW 2052, Australia; Neuroscience Research Australia (NeuRA), Randwick, NSW 2031, Australia
- Bart Bolsterlee
- Neuroscience Research Australia (NeuRA), Randwick, NSW 2031, Australia; Graduate School of Biomedical Engineering, University of New South Wales, Sydney, NSW 2052, Australia
- Brian V Y Chow
- Neuroscience Research Australia (NeuRA), Randwick, NSW 2031, Australia; School of Biomedical Sciences, University of New South Wales, Sydney, NSW 2052, Australia
- Yang Song
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW 2052, Australia
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW 2052, Australia
7
Kuang H, Wang Y, Liu J, Wang J, Cao Q, Hu B, Qiu W, Wang J. Hybrid CNN-Transformer Network With Circular Feature Interaction for Acute Ischemic Stroke Lesion Segmentation on Non-Contrast CT Scans. IEEE Trans Med Imaging 2024; 43:2303-2316. [PMID: 38319756] [DOI: 10.1109/tmi.2024.3362879]
Abstract
Lesion segmentation is a fundamental step in the diagnosis of acute ischemic stroke (AIS). Non-contrast CT (NCCT) is still a mainstream imaging modality for AIS lesion measurement. However, AIS lesion segmentation on NCCT is challenging due to low contrast, noise, and artifacts. To achieve accurate AIS lesion segmentation on NCCT, this study proposes a hybrid convolutional neural network (CNN) and Transformer network with circular feature interaction and bilateral difference learning. It consists of parallel CNN and Transformer encoders, a circular feature interaction module, and a shared CNN decoder with a bilateral difference learning module. A new Transformer block is specifically designed to address the weak inductive bias of the traditional Transformer. To effectively combine features from the CNN and Transformer encoders, we first design a multi-level feature aggregation module to combine multi-scale features within each encoder and then propose a novel feature interaction module containing circular CNN-to-Transformer and Transformer-to-CNN interaction blocks. Besides, a bilateral difference learning module is introduced at the bottom level of the decoder to learn the differential information between the ischemic and contralateral sides of the brain. The proposed method is evaluated on three AIS datasets: the public AISD, a private dataset, and an external dataset. Experimental results show that the proposed method achieves Dice scores of 61.39% and 46.74% on the AISD and the private dataset, respectively, outperforming 17 state-of-the-art segmentation methods. Besides, volumetric analysis of the segmented lesions and external validation results suggest that the proposed method has the potential to provide supporting information for AIS diagnosis.
8
Wang KN, Li SX, Bu Z, Zhao FX, Zhou GQ, Zhou SJ, Chen Y. SBCNet: Scale and Boundary Context Attention Dual-Branch Network for Liver Tumor Segmentation. IEEE J Biomed Health Inform 2024; 28:2854-2865. [PMID: 38427554] [DOI: 10.1109/jbhi.2024.3370864]
Abstract
Automated segmentation of liver tumors in CT scans is pivotal for diagnosing and treating liver cancer, offering a valuable alternative to labor-intensive manual processes and ensuring accurate and reliable clinical assessment. However, the inherent variability of liver tumors, coupled with the challenges posed by blurred boundaries in imaging, presents a substantial obstacle to their precise segmentation. In this paper, we propose a novel dual-branch liver tumor segmentation model, SBCNet, to address these challenges effectively. Specifically, our proposed method introduces a contextual encoding module, which enables better identification of tumor variability using an advanced multi-scale adaptive kernel. Moreover, a boundary enhancement module is designed for the counterpart branch to enhance the perception of boundaries by incorporating contour learning with the Sobel operator. Finally, we propose a hybrid multi-task loss function that concurrently considers tumor scale and boundary features, fostering interaction across the tasks of the two branches and further improving tumor segmentation. Experimental validation on the publicly available LiTS dataset demonstrates the practical efficacy of each module, with SBCNet yielding competitive results compared to other state-of-the-art methods for liver tumor segmentation.
9
Fan L, Gong X, Zheng C, Li J. Data pyramid structure for optimizing EUS-based GISTs diagnosis in multi-center analysis with missing label. Comput Biol Med 2024; 169:107897. [PMID: 38171262] [DOI: 10.1016/j.compbiomed.2023.107897]
Abstract
This study introduces the Data Pyramid Structure (DPS) to address data sparsity and missing labels in medical image analysis. The DPS optimizes multi-task learning and enables sustainable expansion of multi-center data analysis. Specifically, it facilitates attribute prediction and malignant tumor diagnosis by implementing a segmentation and aggregation strategy on data with absent attribute labels. To leverage multi-center data, we propose the Unified Ensemble Learning Framework (UELF) and the Unified Federated Learning Framework (UFLF), which incorporate strategies for data transfer and incremental learning in scenarios with missing labels. The proposed method was evaluated on a challenging EUS patient dataset from five centers, achieving promising diagnostic performance. The average accuracy was 0.984 with an AUC of 0.927 for multi-center analysis, surpassing state-of-the-art approaches. The interpretability of the predictions further highlights the potential clinical relevance of our method.
Affiliation(s)
- Lin Fan
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, Sichuan 611756, China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, China; National Engineering Laboratory of Integrated Transportation Big Data Application Technology, China
- Xun Gong
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, Sichuan 611756, China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, China; National Engineering Laboratory of Integrated Transportation Big Data Application Technology, China
- Cenyang Zheng
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, Sichuan 611756, China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, China; National Engineering Laboratory of Integrated Transportation Big Data Application Technology, China
- Jiao Li
- Department of Gastroenterology, The Third People's Hospital of Chengdu, Affiliated Hospital of Southwest Jiaotong University, Chengdu 610031, China
10
Zhang X, Zhang G, Qiu X, Yin J, Tan W, Yin X, Yang H, Wang H, Zhang Y. Non-invasive decision support for clinical treatment of non-small cell lung cancer using a multiscale radiomics approach. Radiother Oncol 2024; 191:110082. [PMID: 38195018] [DOI: 10.1016/j.radonc.2024.110082]
Abstract
BACKGROUND: Selecting therapeutic strategies for cancer patients is typically based on key target-molecule biomarkers that play an important role in cancer onset, progression, and prognosis. Thus, there is a pressing need for novel biomarkers that can be utilized longitudinally to guide treatment selection.
METHODS: Using data from 508 non-small cell lung cancer (NSCLC) patients across three institutions, we developed and validated a comprehensive predictive biomarker that distinguishes six genotypes and infiltrative immune phenotypes. These features were analyzed to establish the association between radiological phenotypes and tumor genotypes/immune phenotypes and to create a radiological interpretation of molecular features. In addition, we assessed the sensitivity of the models by evaluating their performance at five different voxel intervals, improving the generalizability of the proposed approach.
FINDINGS: The radiomics model we developed, which integrates clinical factors and multi-regional features, outperformed the conventional model that uses only clinical and intratumoral features. Our combined model showed significant performance for EGFR, KRAS, ALK, TP53, PIK3CA, and ROS1 mutation status, with AUCs of 0.866, 0.874, 0.902, 0.850, 0.860, and 0.900, respectively. Additionally, the predictive performance for PD-1/PD-L1 was 0.852. Although the performance of all models decreased to different degrees at the five voxel space resolutions, the performance advantage of the combined model persisted.
CONCLUSIONS: We validated multiscale radiomic signatures across tumor genotypes and immunophenotypes in a multi-institutional cohort. This imaging-based biomarker offers a non-invasive approach to selecting patients with NSCLC who are sensitive to targeted therapies or immunotherapy, which is promising for developing personalized treatment strategies during therapy.
Affiliation(s)
- Xingping Zhang
- School of Medical Information Engineering, Gannan Medical University, 341000, Ganzhou, China; Cyberspace Institute of Advanced Technology, Guangzhou University, 510006, Guangzhou, China; Institute for Sustainable Industries and Liveable Cities, Victoria University, 3011, Melbourne, Australia; Department of New Networks, Peng Cheng Laboratory, 518000, Shenzhen, China
- Guijuan Zhang
- Department of Respiratory and Critical Care, First Affiliated Hospital of Gannan Medical University, 341000, Ganzhou, China
- Xingting Qiu
- Department of Radiology, First Affiliated Hospital of Gannan Medical University, 341000, Ganzhou, China
- Jiao Yin
- Institute for Sustainable Industries and Liveable Cities, Victoria University, 3011, Melbourne, Australia
- Wenjun Tan
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, 110189, Shenyang, China
- Xiaoxia Yin
- Cyberspace Institute of Advanced Technology, Guangzhou University, 510006, Guangzhou, China
- Hong Yang
- Cyberspace Institute of Advanced Technology, Guangzhou University, 510006, Guangzhou, China
- Hua Wang
- Institute for Sustainable Industries and Liveable Cities, Victoria University, 3011, Melbourne, Australia
- Yanchun Zhang
- Institute for Sustainable Industries and Liveable Cities, Victoria University, 3011, Melbourne, Australia; School of Computer Science and Technology, Zhejiang Normal University, 321000, Jinhua, China; Department of New Networks, Peng Cheng Laboratory, 518000, Shenzhen, China
11
Guo R, Tian X, Lin H, McKenna S, Li HD, Guo F, Liu J. Graph-Based Fusion of Imaging, Genetic and Clinical Data for Degenerative Disease Diagnosis. IEEE/ACM Trans Comput Biol Bioinform 2024; 21:57-68. [PMID: 37991907] [DOI: 10.1109/tcbb.2023.3335369]
Abstract
Graph learning methods have achieved noteworthy performance in disease diagnosis due to their ability to represent unstructured information such as inter-subject relationships. While it has been shown that imaging, genetic, and clinical data are all crucial for degenerative disease diagnosis, existing methods rarely consider how best to use their relationships; how best to utilize information from imaging, genetic, and clinical data remains a challenging problem. This study proposes a novel graph-based fusion (GBF) approach to meet this challenge. To extract effective imaging-genetic features, we propose an imaging-genetic fusion module that uses an attention mechanism to obtain modality-specific and joint representations within and between imaging and genetic data. Then, considering the effectiveness of clinical information for diagnosing degenerative diseases, we propose a multi-graph fusion module to further fuse imaging-genetic and clinical features, which adopts a learnable graph construction strategy and a graph ensemble method. Experimental results on two benchmarks for degenerative disease diagnosis (the Alzheimer's Disease Neuroimaging Initiative and the Parkinson's Progression Markers Initiative) demonstrate its effectiveness compared to state-of-the-art graph-based methods. Our findings should help guide further development of graph-based models for dealing with imaging, genetic, and clinical data.
12
Osman YBM, Li C, Huang W, Wang S. Sparse annotation learning for dense volumetric MR image segmentation with uncertainty estimation. Phys Med Biol 2023; 69:015009. [PMID: 38035374] [DOI: 10.1088/1361-6560/ad111b]
Abstract
Objective. Training neural networks for pixel-wise or voxel-wise image segmentation is a challenging task that requires a considerable amount of training samples with highly accurate and densely delineated ground truth maps. This challenge becomes especially prominent in the medical imaging domain, where obtaining reliable annotations for training samples is a difficult, time-consuming, and expert-dependent process. Therefore, developing models that can perform well under conditions of limited annotated training data is desirable. Approach. In this study, we propose an innovative framework called the extremely sparse annotation neural network (ESA-Net), which learns 3D volumetric segmentation from only the single labeled central slice, exploring both intra-slice pixel dependencies and inter-slice image correlations with uncertainty estimation. Specifically, ESA-Net consists of four specially designed components: (1) an intra-slice pixel dependency-guided pseudo-label generation module that exploits uncertainty in network predictions while generating pseudo-labels for unlabeled slices with temporal ensembling; (2) an inter-slice image correlation-constrained pseudo-label propagation module that propagates labels from the labeled central slice to unlabeled slices by self-supervised registration with rotation ensembling; (3) a pseudo-label fusion module that fuses the two sets of generated pseudo-labels with voxel-wise uncertainty guidance; and (4) a final segmentation network optimization module that makes final predictions with scoring-based label quantification. Main results. Extensive experimental validation was performed on two popular yet challenging magnetic resonance image segmentation tasks, with comparison to five state-of-the-art methods. Significance. Results demonstrate that the proposed ESA-Net consistently achieves better segmentation performance even under the extremely sparse annotation setting, highlighting its effectiveness in exploiting information from unlabeled data.
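As a concrete illustration of the voxel-wise uncertainty guidance described in the pseudo-label fusion module, the sketch below fuses two pseudo-label probability maps by weighting each voxel with the inverse of its predictive entropy. This is a minimal, assumption-laden sketch, not the authors' released code: the function names and the inverse-entropy weighting are illustrative stand-ins for whatever uncertainty measure ESA-Net actually uses.

```python
import numpy as np

def entropy_uncertainty(prob):
    """Voxel-wise predictive entropy of a softmax probability map.

    prob: array of shape (C, D, H, W) holding class probabilities.
    """
    eps = 1e-8
    return -np.sum(prob * np.log(prob + eps), axis=0)

def fuse_pseudo_labels(prob_a, prob_b):
    """Fuse two pseudo-label probability maps, weighting each voxel by the
    inverse of its predictive uncertainty (lower entropy -> higher weight)."""
    eps = 1e-8
    w_a = 1.0 / (entropy_uncertainty(prob_a) + eps)
    w_b = 1.0 / (entropy_uncertainty(prob_b) + eps)
    fused = (w_a * prob_a + w_b * prob_b) / (w_a + w_b)
    return np.argmax(fused, axis=0)  # hard pseudo-label per voxel
```

A voxel where one branch is confident (low entropy) dominates the fused label, which is the intuition behind uncertainty-guided fusion.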
Affiliation(s)
- Yousuf Babiker M Osman
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China

- Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, People's Republic of China

- Weijian Huang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China
- Peng Cheng Laboratory, Shenzhen 518066, People's Republic of China

- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, People's Republic of China
- Peng Cheng Laboratory, Shenzhen 518066, People's Republic of China

13
Chen Z, Peng C, Guo W, Xie L, Wang S, Zhuge Q, Wen C, Feng Y. Uncertainty-guided transformer for brain tumor segmentation. Med Biol Eng Comput 2023; 61:3289-3301. [PMID: 37665558] [DOI: 10.1007/s11517-023-02899-8]
Abstract
Multimodal data can enhance brain tumor segmentation owing to the rich information they provide. However, they also introduce redundant information that interferes with segmentation, as some modalities may capture features irrelevant to the tissue of interest. Besides, the ambiguous boundaries and irregular shapes of tumors of different grades lead to low-confidence estimates of segmentation quality. Given these concerns, we exploit an uncertainty-guided U-shaped transformer with multiple heads that constructs drop-out-style masks for robust training. Specifically, our drop-out masks are composed of a boundary mask, a prior probability mask, and a conditional probability mask, which help our approach focus more on uncertain regions. Extensive experimental results show that our method achieves comparable or higher results than previous state-of-the-art brain tumor segmentation methods, achieving average Dice coefficients of [Formula: see text] and a Hausdorff distance of 4.91 on the BraTS2021 dataset. Our code is freely available at https://github.com/chaineypung/BTS-UGT.
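The two evaluation metrics quoted in this entry, the Dice coefficient and the Hausdorff distance, can be computed directly from masks and boundary point sets. A minimal NumPy sketch of the standard definitions, not tied to the authors' repository:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * inter / denom

def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets of shape (N, dims)."""
    # pairwise Euclidean distances between the two sets
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

In practice the 95th-percentile variant of the Hausdorff distance is often reported instead of the maximum, to reduce sensitivity to outlier boundary points.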
Affiliation(s)
- Zan Chen
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, China

- Chenxu Peng
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, China

- Wenlong Guo
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, China

- Lei Xie
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, China

- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, SIAT, CAS, Shenzhen, 518055, China

- Qichuan Zhuge
- First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, China

- Caiyun Wen
- First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, China

- Yuanjing Feng
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310023, China
- Zhejiang Provincial United Key Laboratory of Embedded Systems, Hangzhou, 310023, China

14
Zhang X, Cheng G, Han X, Li S, Xiong J, Wu Z, Zhang H, Chen D. Deep learning-based multi-stage postoperative type-b aortic dissection segmentation using global-local fusion learning. Phys Med Biol 2023; 68:235011. [PMID: 37774717] [DOI: 10.1088/1361-6560/acfec7]
Abstract
Objective. Type-b aortic dissection (AD) is a life-threatening cardiovascular disease whose primary treatment is thoracic endovascular aortic repair (TEVAR). Due to the lack of a rapid and accurate segmentation technique, patient-specific postoperative AD models are unavailable in clinical practice, making 3D morphological and hemodynamic analyses impracticable during TEVAR assessment. This work aims to construct a deep learning-based segmentation framework for postoperative type-b AD. Approach. The segmentation is performed in a two-stage manner. A multi-class segmentation of the contrast-enhanced aorta, thrombus (TH), and branch vessels (BV) is achieved in the first stage based on cropped image patches. True lumen (TL) and false lumen (FL) are extracted from a straightened image containing the entire aorta in the second stage. A global-local fusion learning mechanism is designed to improve the segmentation of TH and BV by compensating for the contextual features missing from the cropped images in the first stage. Results. The experiments are conducted on a multi-center dataset comprising 133 patients with 306 follow-up images. Our framework achieves state-of-the-art Dice similarity coefficients (DSC) of 0.962, 0.921, 0.811, and 0.884 for TL, FL, TH, and BV, respectively. The global-local fusion learning mechanism increases the DSC of TH and BV by 2.3% (p < 0.05) and 1.4% (p < 0.05), respectively, over the baseline. Segmenting TH in stage 1 achieves significantly better DSC for FL (0.921 ± 0.055 versus 0.857 ± 0.220, p < 0.01) and TH (0.811 ± 0.137 versus 0.797 ± 0.146, p < 0.05) than in stage 2. Our framework supports more accurate vascular volume quantification than previous segmentation models, especially for patients with enlarged TH+FL after TEVAR, and shows good generalizability to different hospital settings. Significance. Our framework can quickly provide accurate patient-specific AD models, supporting the clinical practice of 3D morphological and hemodynamic analyses for quantitative and more comprehensive patient-specific TEVAR assessments.
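The vascular volume quantification mentioned here reduces to counting labelled voxels and scaling by the physical voxel volume. A minimal sketch; the label code and voxel spacing below are hypothetical, not values from the study:

```python
import numpy as np

def structure_volume_ml(label_map, label, spacing_mm):
    """Volume of one labelled structure in millilitres.

    label_map : integer array of shape (D, H, W) with one code per structure
    label     : integer code of the structure (e.g. a hypothetical TL = 1)
    spacing_mm: (dz, dy, dx) voxel spacing in millimetres
    """
    voxel_mm3 = float(np.prod(spacing_mm))      # volume of a single voxel
    n_voxels = int(np.sum(label_map == label))  # voxels carrying this label
    return n_voxels * voxel_mm3 / 1000.0        # 1 mL = 1000 mm^3
```

Follow-up volumes computed this way per structure (TL, FL, TH, BV) are what make longitudinal comparisons after TEVAR quantitative.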
Affiliation(s)
- Xuyang Zhang
- School of Medical Technology, Beijing Institute of Technology, Beijing, People's Republic of China

- Guoliang Cheng
- School of Medical Technology, Beijing Institute of Technology, Beijing, People's Republic of China

- Xiaofeng Han
- Department of Diagnostic and Interventional Radiology, Beijing Anzhen Hospital, Capital Medical University, Beijing, People's Republic of China

- Shilong Li
- School of Medical Technology, Beijing Institute of Technology, Beijing, People's Republic of China

- Jiang Xiong
- Department of Vascular and Endovascular Surgery, Chinese PLA General Hospital, Beijing, People's Republic of China

- Ziheng Wu
- Department of Vascular Surgery, The First Affiliated Hospital, Zhejiang University, Hangzhou, People's Republic of China

- Hongkun Zhang
- Department of Vascular Surgery, The First Affiliated Hospital, Zhejiang University, Hangzhou, People's Republic of China

- Duanduan Chen
- School of Medical Technology, Beijing Institute of Technology, Beijing, People's Republic of China

15
Ali H, Qureshi R, Shah Z. Artificial Intelligence-Based Methods for Integrating Local and Global Features for Brain Cancer Imaging: Scoping Review. JMIR Med Inform 2023; 11:e47445. [PMID: 37976086] [PMCID: PMC10692876] [DOI: 10.2196/47445]
Abstract
BACKGROUND Transformer-based models are gaining popularity in medical imaging and cancer imaging applications. Many recent studies have demonstrated the use of transformer-based models for brain cancer imaging applications such as diagnosis and tumor segmentation. OBJECTIVE This study aims to review how different vision transformers (ViTs) have contributed to advancing brain cancer diagnosis and tumor segmentation using brain image data. It examines the different architectures developed for enhancing the task of brain tumor segmentation and explores how ViT-based models have augmented the performance of convolutional neural networks for brain cancer imaging. METHODS This review performed the study search and study selection following the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. The search comprised 4 popular scientific databases: PubMed, Scopus, IEEE Xplore, and Google Scholar. The search terms were formulated to cover the interventions (ie, ViTs) and the target application (ie, brain cancer imaging). Title and abstract screening for study selection was performed by 2 reviewers independently and validated by a third reviewer. Data extraction was performed by 2 reviewers and validated by a third reviewer. Finally, the data were synthesized using a narrative approach. RESULTS Of the 736 retrieved studies, 22 (3%) were included in this review. These studies were published in 2021 and 2022. The most commonly addressed task in these studies was tumor segmentation using ViTs. No study reported early detection of brain cancer. Among the different ViT architectures, Shifted Window transformer-based architectures have recently become the most popular choice of the research community. Among the included architectures, UNet transformer and TransUNet had the highest number of parameters and thus needed a cluster of as many as 8 graphics processing units for model training. The brain tumor segmentation challenge dataset was the most popular dataset used in the included studies. ViTs were used in different combinations with convolutional neural networks to capture both the global and local context of the input brain imaging data. CONCLUSIONS It can be argued that the computational complexity of transformer architectures is a bottleneck in advancing the field and enabling clinical transformation. This review provides the current state of knowledge on the topic, and its findings will be helpful for researchers in the field of medical artificial intelligence and its applications in brain cancer.
Affiliation(s)
- Hazrat Ali
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar

- Rizwan Qureshi
- Department of Imaging Physics, MD Anderson Cancer Center, University of Texas, Houston, TX, United States

- Zubair Shah
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar

16
Chen Q, Wang L, Xing Z, Wang L, Hu X, Wang R, Zhu YM. Deep wavelet scattering orthogonal fusion network for glioma IDH mutation status prediction. Comput Biol Med 2023; 166:107493. [PMID: 37774558] [DOI: 10.1016/j.compbiomed.2023.107493]
Abstract
Accurately predicting the isocitrate dehydrogenase (IDH) mutation status of gliomas is of great significance for formulating appropriate treatment plans and evaluating prognoses. Although existing studies can accurately predict IDH mutation status from multimodal magnetic resonance (MR) images with machine learning methods, most cannot fully exploit multimodal information or predict IDH status effectively on datasets acquired from multiple centers. To address this issue, a novel wavelet scattering (WS)-based orthogonal fusion network (WSOFNet) was proposed in this work to predict the IDH mutation status of gliomas from multiple centers. First, transformation-invariant features were extracted from multimodal MR images with a WS network, and then the multimodal WS features were used instead of the original images as the inputs of WSOFNet and were fully fused through an adaptive multimodal feature fusion module (AMF2M) and an orthogonal projection module (OPM). Finally, the fused features were input into a fully connected classifier to predict IDH mutation status. In addition, to achieve improved prediction accuracy, four auxiliary losses were also used in the feature extraction modules. The comparison results showed that the prediction area under the curve (AUC) of WSOFNet was 0.9966 on a single-center dataset and approximately 0.9655 on a multicenter dataset, at least 3.9% higher than that of state-of-the-art methods. Moreover, the ablation experiments also proved that the adaptive multimodal feature fusion strategy based on orthogonal projection could effectively improve the prediction performance of the model, especially on an external validation dataset.
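The prediction AUC reported in this entry can be computed without any curve plotting via the Mann-Whitney U statistic, which equals the area under the ROC curve. A small self-contained sketch of that standard identity (illustrative, not the authors' evaluation code):

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    scores: predicted probabilities of the positive class
    labels: binary ground truth (1 = mutant, 0 = wild type)
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # fraction of (positive, negative) pairs ranked correctly; ties count half
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 0.9966, as quoted for the single-center dataset, means almost every mutant/wild-type pair is ranked correctly by the model's score.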
Affiliation(s)
- Qijian Chen
- Engineering Research Center of Text Computing & Cognitive Intelligence, Ministry of Education, Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China

- Lihui Wang
- Engineering Research Center of Text Computing & Cognitive Intelligence, Ministry of Education, Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China

- Zhiyang Xing
- Department of Radiology, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, NHC Key Laboratory of Pulmonary Immune-related Diseases, Guizhou Provincial People's Hospital, Guiyang, 550002, China

- Li Wang
- Engineering Research Center of Text Computing & Cognitive Intelligence, Ministry of Education, Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China

- Xubin Hu
- Engineering Research Center of Text Computing & Cognitive Intelligence, Ministry of Education, Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China

- Rongpin Wang
- Department of Radiology, International Exemplary Cooperation Base of Precision Imaging for Diagnosis and Treatment, NHC Key Laboratory of Pulmonary Immune-related Diseases, Guizhou Provincial People's Hospital, Guiyang, 550002, China

- Yue-Min Zhu
- University Lyon, INSA Lyon, CNRS, Inserm, IRP Metislab CREATIS UMR5220, U1206, Lyon 69621, France

17
Li Y, Zheng K, Li S, Yi Y, Li M, Ren Y, Guo C, Zhong L, Yang W, Li X, Yao L. A transformer-based multi-task deep learning model for simultaneous infiltrated brain area identification and segmentation of gliomas. Cancer Imaging 2023; 23:105. [PMID: 37891702] [PMCID: PMC10612240] [DOI: 10.1186/s40644-023-00615-1]
Abstract
BACKGROUND The anatomically infiltrated brain areas and the boundaries of gliomas have a significant impact on clinical decision making and available treatment options. Identifying glioma-infiltrated brain areas and delineating the tumor manually is a laborious and time-intensive process. Previous deep learning-based studies have mainly focused on automatic tumor segmentation or on predicting genetic/histological features, and few have specifically addressed the identification of infiltrated brain areas. To bridge this gap, we aim to develop a model that can simultaneously identify infiltrated brain areas and perform accurate segmentation of gliomas. METHODS We developed a transformer-based multi-task deep learning model that performs two tasks simultaneously: identification of infiltrated brain areas and segmentation of gliomas. The multi-task model leverages shared location and boundary information to enhance the performance of both tasks. Our retrospective study involved 354 glioma patients (grades II-IV) with single or multiple brain area infiltrations, who were divided into training (N = 270), validation (N = 30), and independent test (N = 54) sets. We evaluated the predictive performance using the area under the receiver operating characteristic curve (AUC) and Dice scores. RESULTS Our multi-task model achieved impressive results on the independent test set, with an AUC of 94.95% (95% CI, 91.78-97.58), a sensitivity of 87.67%, a specificity of 87.31%, and an accuracy of 87.41%. Specifically, for grade II-IV gliomas, the model achieved AUCs of 95.25% (95% CI, 91.09-98.23; 84.38% sensitivity, 89.04% specificity, 87.62% accuracy), 98.26% (95% CI, 95.22-100; 93.75% sensitivity, 98.15% specificity, 97.14% accuracy), and 93.83% (95% CI, 86.57-99.12; 92.00% sensitivity, 85.71% specificity, 87.37% accuracy), respectively, for the identification of infiltrated brain areas. Moreover, our model achieved a mean Dice score of 87.60% for whole tumor segmentation. CONCLUSIONS Experimental results show that our multi-task model achieved superior performance and outperformed the state-of-the-art methods. This impressive performance demonstrates the potential of our work as an innovative solution for identifying tumor-infiltrated brain areas and suggests that it can be a practical tool for supporting clinical decision making.
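The sensitivity, specificity, and accuracy figures quoted in this entry all derive from a standard binary confusion matrix. A minimal sketch of that bookkeeping (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def classification_metrics(pred, truth):
    """Sensitivity, specificity, and accuracy for binary predictions."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.sum(pred & truth)    # true positives
    tn = np.sum(~pred & ~truth)  # true negatives
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    return {
        "sensitivity": tp / (tp + fn),          # recall on positives
        "specificity": tn / (tn + fp),          # recall on negatives
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }
```

Reporting all three together, as this study does, guards against a model that scores well on accuracy simply by favoring the majority class.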
Affiliation(s)
- Yin Li
- Department of Information, The Sixth Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China

- Kaiyi Zheng
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, China

- Shuang Li
- Department of General Practice, The Sixth Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China

- Yongju Yi
- Department of Information, The Sixth Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China

- Min Li
- Department of Radiology, Zhujiang Hospital, Southern Medical University, Guangzhou, China

- Yufan Ren
- Department of Radiology, Zhujiang Hospital, Southern Medical University, Guangzhou, China

- Congyue Guo
- Department of Radiology, Zhujiang Hospital, Southern Medical University, Guangzhou, China

- Liming Zhong
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, China

- Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, China

- Xinming Li
- Department of Radiology, Zhujiang Hospital, Southern Medical University, Guangzhou, China

- Lin Yao
- Department of General Practice, The Sixth Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China

18
Kuang H, Wang Y, Liang Y, Liu J, Wang J. BEA-Net: Body and Edge Aware Network With Multi-Scale Short-Term Concatenation for Medical Image Segmentation. IEEE J Biomed Health Inform 2023; 27:4828-4839. [PMID: 37578920] [DOI: 10.1109/jbhi.2023.3304662]
Abstract
Medical image segmentation is indispensable for the diagnosis and prognosis of many diseases. To improve segmentation performance, this study proposes a new 2D body and edge aware network with multi-scale short-term concatenation for medical image segmentation. Multi-scale short-term concatenation modules, which concatenate successive convolution layers with different receptive fields, are proposed for capturing multi-scale representations with fewer parameters. Body generation modules, which adjust features with a weight map computed over enlarged receptive fields, and edge generation modules, which apply multi-scale convolutions with Sobel kernels for edge detection, are proposed to separately learn body and edge features from convolutional features in the decoders, making the proposed network body and edge aware. Based on the body and edge modules, we design parallel body and edge decoders whose outputs are fused to achieve the final segmentation. Besides, deep supervision from the body and edge decoders is applied to ensure the effectiveness of the generated body and edge features and to further improve the final segmentation. The proposed method is trained and evaluated on six public medical image segmentation datasets to show its effectiveness and generality. Experimental results show that the proposed method achieves a better average Dice similarity coefficient and 95% Hausdorff distance than several benchmarks on all of the datasets used. Ablation studies validate the effectiveness of the proposed multi-scale representation learning modules, body and edge generation modules, and deep supervision.
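The edge generation modules are described as using Sobel kernels for edge detection. The following minimal NumPy sketch shows how a Sobel-based edge magnitude map is formed; it illustrates the standard operator only, not the BEA-Net implementation:

```python
import numpy as np

# standard 3x3 Sobel kernels for horizontal and vertical gradients
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2d(img, kernel):
    """Valid-mode 2D filtering (cross-correlation) with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

def sobel_edge_magnitude(img):
    """Gradient magnitude from horizontal and vertical Sobel responses."""
    gx = conv2d(img, SOBEL_X)
    gy = conv2d(img, SOBEL_Y)
    return np.hypot(gx, gy)
```

In an edge-aware network such responses are typically produced by fixed (non-learned) Sobel convolutions inside the edge branch, so the branch is biased toward boundary information from the start.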
19
Shu Y, Xu W, Su R, Ran P, Liu L, Zhang Z, Zhao J, Chao Z, Fu G. Clinical applications of radiomics in non-small cell lung cancer patients with immune checkpoint inhibitor-related pneumonitis. Front Immunol 2023; 14:1251645. [PMID: 37799725] [PMCID: PMC10547882] [DOI: 10.3389/fimmu.2023.1251645]
Abstract
Immune checkpoint inhibitors (ICIs) modulate the body's immune function to treat tumors but may also induce pneumonitis. Immune checkpoint inhibitor-related pneumonitis (ICIP) is a serious immune-related adverse event (irAE). Immunotherapy is currently approved as a first-line treatment for non-small cell lung cancer (NSCLC), and the incidence of ICIP in NSCLC patients can be as high as 5%-19% in clinical practice. ICIP can be severe enough to lead to the death of NSCLC patients, but there is a lack of a gold standard for the diagnosis of ICIP. Radiomics is a method that uses computational techniques to analyze medical images (e.g., CT, MRI, PET) and extract important features from them, which can be used to solve classification and regression problems in the clinic. Radiomics has been applied to predict and identify ICIP in NSCLC patients in the hope of transforming clinical qualitative problems into quantitative ones, thus improving the diagnosis and treatment of ICIP. In this review, we summarize the pathogenesis of ICIP and the process of radiomics feature extraction, review the clinical application of radiomics in ICIP of NSCLC patients, and discuss its future application prospects.
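Radiomics feature extraction, as described in this review, turns an image region into quantitative descriptors. A minimal sketch of a few common first-order features; the feature set and the fixed bin count are illustrative choices, not those of any study reviewed:

```python
import numpy as np

def first_order_features(roi):
    """A few first-order radiomics features of the intensities in an ROI."""
    x = np.asarray(roi, dtype=float).ravel()
    mean = x.mean()
    std = x.std()
    # skewness of the intensity distribution (0 for a flat/constant ROI)
    skew = 0.0 if std == 0 else np.mean(((x - mean) / std) ** 3)
    # intensity entropy over a fixed-bin histogram
    hist, _ = np.histogram(x, bins=16)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    return {"mean": mean, "std": std, "skewness": skew, "entropy": entropy}
```

Full radiomics pipelines add shape and texture families (e.g. GLCM-based features) on top of first-order statistics, which is where the bulk of the hundreds of features quoted in such studies come from.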
Affiliation(s)
- Yang Shu
- Department of Oncology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, China
- The Second Clinical Medical College, Shandong University of Traditional Chinese Medicine, Jinan, Shandong, China

- Wei Xu
- Department of Oncology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, China
- Department of Oncology, Shandong Provincial Hospital, Shandong University, Jinan, Shandong, China

- Rui Su
- College of Artificial Intelligence and Big Data for Medical Sciences, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, Shandong, China

- Pancen Ran
- Department of Oncology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, China
- The Second Clinical Medical College, Shandong University of Traditional Chinese Medicine, Jinan, Shandong, China

- Lei Liu
- Department of Oncology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, China

- Zhizhao Zhang
- Department of Oncology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, China

- Jing Zhao
- Department of Oncology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, China

- Zhen Chao
- College of Artificial Intelligence and Big Data for Medical Sciences, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, Shandong, China

- Guobin Fu
- Department of Oncology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, China
- The Second Clinical Medical College, Shandong University of Traditional Chinese Medicine, Jinan, Shandong, China
- Department of Oncology, Shandong Provincial Hospital, Shandong University, Jinan, Shandong, China
- Department of Oncology, The Third Affiliated Hospital of Shandong First Medical University, Jinan, Shandong, China

20
Liu Y, Wu M. Deep learning in precision medicine and focus on glioma. Bioeng Transl Med 2023; 8:e10553. [PMID: 37693051] [PMCID: PMC10486341] [DOI: 10.1002/btm2.10553]
Abstract
Deep learning (DL) has been successfully applied to a range of tasks across different fields. In medicine, DL methods have also been used to improve the efficiency of disease diagnosis. In this review, we first summarize the history of the development of artificial intelligence models, describe the features of the subtypes of machine learning and different DL networks, and then explore their application in different fields of precision medicine, such as cardiology, gastroenterology, ophthalmology, dermatology, and oncology. By mining more information and extracting multilevel features from medical data, DL helps doctors assess diseases automatically and monitor patients' physical health. In gliomas, research on the application prospects of DL has mainly involved magnetic resonance imaging, followed by pathological slides. However, multi-omics data, such as whole-exome sequencing, RNA sequencing, proteomics, and epigenomics, have not been covered thus far. In general, the quality and quantity of DL datasets still need further improvement, and richer multi-omics characteristics will bring more comprehensive and accurate diagnosis in precision medicine and glioma.
Affiliation(s)
- Yihao Liu
- Hunan Key Laboratory of Cancer Metabolism, Hunan Cancer Hospital and the Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha, Hunan, China
- NHC Key Laboratory of Carcinogenesis, Xiangya Hospital, Central South University, Changsha, Hunan, China
- Key Laboratory of Carcinogenesis and Cancer Invasion of the Chinese Ministry of Education, Cancer Research Institute, Central South University, Changsha, Hunan, China

- Minghua Wu
- Hunan Key Laboratory of Cancer Metabolism, Hunan Cancer Hospital and the Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha, Hunan, China
- NHC Key Laboratory of Carcinogenesis, Xiangya Hospital, Central South University, Changsha, Hunan, China
- Key Laboratory of Carcinogenesis and Cancer Invasion of the Chinese Ministry of Education, Cancer Research Institute, Central South University, Changsha, Hunan, China

21
Li Y, Zhang Y, Liu JY, Wang K, Zhang K, Zhang GS, Liao XF, Yang G. Global Transformer and Dual Local Attention Network via Deep-Shallow Hierarchical Feature Fusion for Retinal Vessel Segmentation. IEEE Trans Cybern 2023; 53:5826-5839. [PMID: 35984806] [DOI: 10.1109/tcyb.2022.3194099]
Abstract
Clinically, retinal vessel segmentation is a significant step in the diagnosis of fundus diseases. However, recent methods generally neglect the difference of semantic information between deep and shallow features, which fail to capture the global and local characterizations in fundus images simultaneously, resulting in the limited segmentation performance for fine vessels. In this article, a global transformer (GT) and dual local attention (DLA) network via deep-shallow hierarchical feature fusion (GT-DLA-dsHFF) are investigated to solve the above limitations. First, the GT is developed to integrate the global information in the retinal image, which effectively captures the long-distance dependence between pixels, alleviating the discontinuity of blood vessels in the segmentation results. Second, DLA, which is constructed using dilated convolutions with varied dilation rates, unsupervised edge detection, and squeeze-excitation block, is proposed to extract local vessel information, consolidating the edge details in the segmentation result. Finally, a novel deep-shallow hierarchical feature fusion (dsHFF) algorithm is studied to fuse the features in different scales in the deep learning framework, respectively, which can mitigate the attenuation of valid information in the process of feature fusion. We verified the GT-DLA-dsHFF on four typical fundus image datasets. The experimental results demonstrate our GT-DLA-dsHFF achieves superior performance against the current methods and detailed discussions verify the efficacy of the proposed three modules. Segmentation results of diseased images show the robustness of our proposed GT-DLA-dsHFF. Implementation codes will be available on https://github.com/YangLibuaa/GT-DLA-dsHFF.
22
Liu L, Chang J, Liu Z, Zhang P, Xu X, Shang H. Hybrid Contextual Semantic Network for Accurate Segmentation and Detection of Small-Size Stroke Lesions From MRI. IEEE J Biomed Health Inform 2023; 27:4062-4073. [PMID: 37155390] [DOI: 10.1109/jbhi.2023.3273771]
Abstract
Stroke is a cerebrovascular disease with high mortality and disability rates. A stroke typically produces lesions of different sizes, and the accurate segmentation and detection of small-size stroke lesions are closely related to patient prognosis. However, while large lesions are usually identified correctly, small-size lesions are often ignored. This article presents a hybrid contextual semantic network (HCSNet) that can accurately and simultaneously segment and detect small-size stroke lesions from magnetic resonance images. HCSNet inherits the advantages of the encoder-decoder architecture and applies a novel hybrid contextual semantic module that generates high-quality contextual semantic features from the spatial and channel contextual semantic features through the skip connection layer. Moreover, a mixing-loss function is proposed to optimize HCSNet for unbalanced small-size lesions. HCSNet is trained and evaluated on 2D magnetic resonance images produced from the Anatomical Tracings of Lesions After Stroke challenge (ATLAS R2.0). Extensive experiments demonstrate that HCSNet outperforms several other state-of-the-art methods in segmenting and detecting small-size stroke lesions. Visualization and ablation experiments reveal that the hybrid semantic module improves the segmentation and detection performance of HCSNet.
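A mixing-loss function for unbalanced small lesions typically combines a region-overlap term with a voxel-wise term that up-weights the rare foreground class. The sketch below combines soft Dice with weighted binary cross-entropy; the exact composition and the `pos_weight`/`alpha` values are assumptions for illustration, not the HCSNet formulation:

```python
import numpy as np

def soft_dice_loss(prob, target, eps=1e-6):
    """Soft Dice loss on foreground probabilities (0 = perfect overlap)."""
    inter = np.sum(prob * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(prob) + np.sum(target) + eps)

def weighted_bce_loss(prob, target, pos_weight=10.0, eps=1e-7):
    """Binary cross-entropy with up-weighted (rare) lesion voxels."""
    prob = np.clip(prob, eps, 1.0 - eps)
    loss = -(pos_weight * target * np.log(prob)
             + (1.0 - target) * np.log(1.0 - prob))
    return loss.mean()

def mixing_loss(prob, target, alpha=0.5):
    """Convex combination of the overlap and voxel-wise terms."""
    return (alpha * soft_dice_loss(prob, target)
            + (1.0 - alpha) * weighted_bce_loss(prob, target))
```

The Dice term keeps the optimizer focused on relative overlap (so a tiny lesion still matters), while the weighted cross-entropy term penalizes each missed lesion voxel more heavily than a missed background voxel.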
23
Huang B, Zhang Y, Mao Q, Ju Y, Liu Y, Su Z, Lei Y, Ren Y. Deep learning-based prediction of H3K27M alteration in diffuse midline gliomas based on whole-brain MRI. Cancer Med 2023; 12:17139-17148. [PMID: 37461358 PMCID: PMC10501256 DOI: 10.1002/cam4.6363] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2023] [Revised: 07/02/2023] [Accepted: 07/08/2023] [Indexed: 09/15/2023] Open
Abstract
BACKGROUND H3K27M mutation status significantly affects the prognosis of patients with diffuse midline gliomas (DMGs), but pathological confirmation in this tumor carries high risk. We aimed to construct a fully automated model for predicting the H3K27M alteration status of DMGs based on deep learning using whole-brain MRI. METHODS DMG patients from West China Hospital of Sichuan University (WCHSU; n = 200) and Chengdu Shangjin Nanfu Hospital (CSNH; n = 35) who met the inclusion and exclusion criteria from February 2016 to April 2022 were enrolled as the training and external test sets, respectively. To adapt the model to head MRI, we pretrained it on normal human head MR images. Because the classification and tumor segmentation tasks are naturally related, we cotrained the two tasks to enable information interaction between them and improve classification accuracy. RESULTS The average classification accuracies of our model on the training and external test sets were 90.5% and 85.1%, respectively. Ablation experiments showed that pretraining and cotraining improved the prediction accuracy and generalization performance of the model. In the training and external test sets, the average areas under the receiver operating characteristic curve (AUROC) were 94.18% and 87.64%, and the average areas under the precision-recall curve (AUPRC) were 93.26% and 85.4%, respectively. CONCLUSIONS The developed model achieved excellent performance in predicting H3K27M alteration status in DMGs, and its reproducibility and generalization were verified on the external dataset.
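The AUROC values reported above have a simple probabilistic reading. As a reminder of what the metric measures (this is not the authors' evaluation code), a minimal rank-based computation via the Mann-Whitney U statistic is:

```python
import numpy as np

def auroc(scores, labels):
    """AUROC equals the probability that a randomly chosen positive case
    is scored above a randomly chosen negative case (ties count one half)."""
    s = np.asarray(scores, dtype=float)
    y = np.asarray(labels, dtype=int)
    pos, neg = s[y == 1], s[y == 0]
    # Count pairwise wins of positives over negatives.
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))
```

Under this reading, the model's external-test AUROC of 87.64% means a randomly drawn H3K27M-altered case outranks a randomly drawn non-altered case about 88% of the time.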
Affiliation(s)
- Bowen Huang
- Department of Neurosurgery, West China Hospital of Sichuan University, Chengdu, China
- Yuekang Zhang
- Department of Neurosurgery, West China Hospital of Sichuan University, Chengdu, China
- Qing Mao
- Department of Neurosurgery, West China Hospital of Sichuan University, Chengdu, China
- Yan Ju
- Department of Neurosurgery, West China Hospital of Sichuan University, Chengdu, China
- Yanhui Liu
- Department of Neurosurgery, West China Hospital of Sichuan University, Chengdu, China
- Zhengzheng Su
- Department of Pathology, West China Hospital of Sichuan University, Chengdu, China
- Yinjie Lei
- College of Electronics and Information Engineering, Sichuan University, Chengdu, China
- Yanming Ren
- Department of Neurosurgery, West China Hospital of Sichuan University, Chengdu, China
24
Shen DD, Bao SL, Wang Y, Chen YC, Zhang YC, Li XC, Ding YC, Jia ZZ. An automatic and accurate deep learning-based neuroimaging pipeline for the neonatal brain. Pediatr Radiol 2023; 53:1685-1697. [PMID: 36884052 DOI: 10.1007/s00247-023-05620-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/20/2022] [Revised: 01/26/2023] [Accepted: 01/27/2023] [Indexed: 03/09/2023]
Abstract
BACKGROUND Accurate segmentation of neonatal brain tissues and structures is crucial for studying normal development and diagnosing early neurodevelopmental disorders. However, an end-to-end pipeline for automated segmentation and imaging analysis of the normal and abnormal neonatal brain has been lacking. OBJECTIVE To develop and validate a deep learning-based pipeline for neonatal brain segmentation and analysis of structural magnetic resonance images (MRI). MATERIALS AND METHODS Two cohorts were enrolled in the study: cohort 1 (582 neonates from the developing Human Connectome Project) and cohort 2 (37 neonates imaged using a 3.0-tesla MRI scanner in our hospital). We developed a deep learning-based architecture capable of segmenting the brain into 9 tissues and 87 structures. Extensive validations were then performed for accuracy, effectiveness, robustness and generality of the pipeline. Furthermore, regional volume and cortical surface estimation were measured through an in-house bash script implemented in FSL (Oxford Centre for Functional MRI of the Brain Software Library) to ensure reliability of the pipeline. The Dice similarity coefficient (DSC), the 95th percentile Hausdorff distance (H95) and the intraclass correlation coefficient (ICC) were calculated to assess the quality of our pipeline. Finally, we finetuned and validated our pipeline on 2-dimensional thick-slice MRI in cohorts 1 and 2. RESULTS The deep learning-based model showed excellent performance for neonatal brain tissue and structural segmentation, with a best DSC of 0.96 and a best H95 of 0.99 mm. In terms of regional volume and cortical surface analysis, our model showed good agreement with ground truth, and the ICC values for regional volume were all above 0.80. The same trend was observed for the thick-slice pipeline, with a best DSC of 0.92 and a best H95 of 3.00 mm; the regional volumes and surface curvature had ICC values just below 0.80. CONCLUSIONS We propose an automatic, accurate, stable and reliable pipeline for neonatal brain segmentation and analysis from thin and thick structural MRI. The external validation showed very good reproducibility of the pipeline.
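The DSC and H95 metrics used throughout this pipeline can be computed directly from a predicted and a reference mask. A minimal sketch (brute force, and ignoring voxel spacing, which a real evaluation would account for):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(pts_a, pts_b):
    """95th-percentile symmetric Hausdorff distance between two point sets
    (e.g. surface voxel coordinates), computed by brute force."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(np.percentile(d.min(axis=1), 95), np.percentile(d.min(axis=0), 95))
```

Taking the 95th percentile rather than the maximum makes the Hausdorff distance robust to a handful of outlier boundary voxels, which is why H95 is the standard reporting choice.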
Affiliation(s)
- Dan Dan Shen
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
- Shan Lei Bao
- Department of Nuclear Medicine, Affiliated Hospital and Medical School of Nantong University, Jiangsu, People's Republic of China
- Yan Wang
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
- Ying Chi Chen
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
- Yu Cheng Zhang
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
- Xing Can Li
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
- Yu Chen Ding
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
- Zhong Zheng Jia
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China.
25
Shi X, Li Y, Cheng J, Bai J, Zhao G, Chen YW. Multi-task Model for Glioma Segmentation and Isocitrate Dehydrogenase Status Prediction Using Global and Local Features. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-5. [PMID: 38083206 DOI: 10.1109/embc40787.2023.10340355] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
According to the 2021 World Health Organization classification of gliomas, isocitrate dehydrogenase (IDH) status is a particularly important basis for glioma diagnosis. In general, 3D multimodal brain MRI is an effective diagnostic tool, but it is difficult even for experienced doctors to predict IDH status from brain MRI alone, and surgery is required to confirm it. Previous studies have shown that brain MRI of glioma areas contains much useful diagnostic information; however, such studies usually require the glioma area to be marked in advance before predicting IDH status, which takes a long time and has a high computational cost. A tumor segmentation model can automatically segment and locate the tumor region, which is exactly the information needed for the IDH prediction task. In this study, we proposed a multi-task deep learning model using 3D multimodal brain MRI images to achieve glioma segmentation and IDH status prediction simultaneously, effectively improving the accuracy of both tasks. First, we used a segmentation model to segment the tumor region; the whole MRI image and the segmented glioma region were then used as global and local features, respectively, to predict IDH status. The effectiveness of the proposed method was validated on the public BraTS2020 glioma dataset. Our experimental results show that the proposed method outperformed state-of-the-art methods with a prediction accuracy of 88.5% and an average Dice of 79.8%, improvements of 3% and 1%, respectively, over the state-of-the-art method.
26
Cui J, Miao X, Yanghao X, Qin X. Bibliometric research on the developments of artificial intelligence in radiomics toward nervous system diseases. Front Neurol 2023; 14:1171167. [PMID: 37360350 PMCID: PMC10288367 DOI: 10.3389/fneur.2023.1171167] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2023] [Accepted: 05/16/2023] [Indexed: 06/28/2023] Open
Abstract
Background The widespread application of radiomics has facilitated advances in the diagnosis, prognosis, and classification of neurological diseases, and artificial intelligence methods in radiomics have increasingly achieved outstanding prediction results in recent years. However, few studies have systematically analyzed this field through bibliometrics. Our aim is to study the visual relationships among publications to identify trends and hotspots in radiomics research and to encourage more researchers to participate in radiomics studies. Methods Publications on radiomics in the field of neurological disease research were retrieved from the Web of Science Core Collection. Analysis of relevant countries, institutions, journals, authors, keywords, and references was conducted using Microsoft Excel 2019, VOSviewer, and CiteSpace V. We analyzed the research status and hot trends through burst detection. Results On October 23, 2022, 746 records of studies on the application of radiomics in the diagnosis of neurological disorders, published from 2011 to 2023, were retrieved. Approximately half were written by scholars in the United States, and most were published in Frontiers in Oncology, European Radiology, Cancer, and Scientific Reports. Although China ranks first in the number of publications, the United States is the driving force in the field and enjoys a good academic reputation. Norbert Galldiks and Jie Tian published the most relevant articles, while Gillies RJ was cited the most. Radiology is a representative and influential journal in the field. "Glioma" is a current attractive research hotspot. Keywords such as "machine learning," "brain metastasis," and "gene mutations" have recently appeared at the research frontier. Conclusion Most of the studies focus on clinical trial outcomes, such as the diagnosis, prediction, and prognosis of neurological disorders. Radiomics biomarkers and multi-omics studies of neurological disorders may soon become a hot topic and should be closely monitored, particularly the relationship between tumor-related noninvasive imaging biomarkers and the intrinsic microenvironment of tumors.
27
Liu Z, Wei J, Li R, Zhou J. Learning multi-modal brain tumor segmentation from privileged semi-paired MRI images with curriculum disentanglement learning. Comput Biol Med 2023; 159:106927. [PMID: 37105113 DOI: 10.1016/j.compbiomed.2023.106927] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2022] [Revised: 04/02/2023] [Accepted: 04/13/2023] [Indexed: 04/29/2023]
Abstract
Since the brain is the human body's primary command and control center, brain cancer is one of the most dangerous cancers. Automatic segmentation of brain tumors from multi-modal images is important for diagnosis and treatment. Because multi-modal paired images are difficult to obtain in clinical practice, recent studies segment brain tumors relying solely on unpaired images and discard the available paired images. Although these models remove the dependence on paired images, they cannot fully exploit the complementary information from different modalities, resulting in low unimodal segmentation accuracy. Hence, this work studies unimodal segmentation with privileged semi-paired images, i.e., limited paired images are introduced into the training phase. Specifically, we present a novel two-step (intra-modality and inter-modality) curriculum disentanglement learning framework. The modality-specific style codes describe the attenuation of tissue features and image contrast, and the modality-invariant content codes contain anatomical and functional information extracted from the input images. Besides, we address the problem of incomplete decoupling by introducing constraints on the style and content spaces. Experiments on the BraTS2020 dataset highlight that our model outperforms the competing models on unimodal segmentation, achieving average Dice scores of 82.91%, 72.62%, and 54.80% for WT (the whole tumor), TC (the tumor core), and ET (the enhancing tumor), respectively. Finally, we further evaluate our model's performance on variable multi-modal brain tumor segmentation by introducing a fusion block (TFusion). The experimental results reveal that our model achieves the best WT segmentation performance for all 15 possible modality combinations, with 87.31% average accuracy. In summary, we propose a curriculum disentanglement learning framework for unimodal segmentation with privileged semi-paired images. Moreover, the benefits of the improved unimodal segmentation extend to variable multi-modal segmentation, demonstrating that improving unimodal segmentation performance is significant for brain tumor segmentation with missing modalities. Our code is available at https://github.com/scut-cszcl/SpBTS.
Affiliation(s)
- Zecheng Liu
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China.
- Jia Wei
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China.
- Rui Li
- Golisano College of Computing and Information Sciences, Rochester Institute of Technology, Rochester, NY, USA.
- Jianlong Zhou
- Data Science Institute, University of Technology Sydney, Ultimo, NSW 2007, Australia.
28
He S, Chen W, Wang X, Xie X, Liu F, Ma X, Li X, Li A, Feng X. Deep learning radiomics-based preoperative prediction of recurrence in chronic rhinosinusitis. iScience 2023; 26:106527. [PMID: 37123223 PMCID: PMC10139989 DOI: 10.1016/j.isci.2023.106527] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Revised: 01/11/2023] [Accepted: 03/27/2023] [Indexed: 05/02/2023] Open
Abstract
Chronic rhinosinusitis (CRS) is characterized by poor prognosis and a propensity for recurrence even after surgery. Preoperative identification of CRS patients at high risk of relapse will contribute to personalized treatment recommendations. In this paper, we propose a multi-task deep learning network that performs sinus segmentation and CRS recurrence prediction simultaneously, and we develop and validate a deep learning radiomics-based nomogram for preoperatively predicting recurrence in CRS patients who need surgical treatment. 265 paranasal sinus computed tomography (CT) images of CRS from two independent medical centers were analyzed to build and test the models. The sinus segmentation model achieved good segmentation results. Furthermore, the nomogram combining a deep learning signature and clinical factors showed excellent recurrence prediction ability for CRS. Our study not only provides a technique for sinus segmentation but also offers a noninvasive method for preoperatively predicting recurrence in patients with CRS.
Affiliation(s)
- Shaojuan He
- Department of Otorhinolaryngology, Qilu Hospital of Shandong University, NHC Key Laboratory of Otorhinolaryngology (Shandong University), Jinan, China
- Department of Radiology, Qilu Hospital of Shandong University, Jinan, China
- Wei Chen
- School and Hospital of Stomatology, Cheeloo College of Medicine, Shandong University, Jinan, China
- Xuehai Wang
- Department of Otorhinolaryngology, Weihai Municipal Hospital, Weihai, China
- Xinyu Xie
- Department of Otorhinolaryngology, Qilu Hospital of Shandong University, NHC Key Laboratory of Otorhinolaryngology (Shandong University), Jinan, China
- Fangying Liu
- Department of Otorhinolaryngology, Qilu Hospital of Shandong University, NHC Key Laboratory of Otorhinolaryngology (Shandong University), Jinan, China
- Xinyi Ma
- Department of Otorhinolaryngology, Qilu Hospital of Shandong University, NHC Key Laboratory of Otorhinolaryngology (Shandong University), Jinan, China
- Xuezhong Li
- Department of Otorhinolaryngology, Qilu Hospital of Shandong University, NHC Key Laboratory of Otorhinolaryngology (Shandong University), Jinan, China
- Anning Li
- Department of Radiology, Qilu Hospital of Shandong University, Jinan, China
- Xin Feng
- Department of Otorhinolaryngology, Qilu Hospital of Shandong University, NHC Key Laboratory of Otorhinolaryngology (Shandong University), Jinan, China
29
Chen Y, Yue H, Kuang H, Wang J. RBS-Net: Hippocampus segmentation using multi-layer feature learning with the region, boundary and structure loss. Comput Biol Med 2023; 160:106953. [PMID: 37120987 DOI: 10.1016/j.compbiomed.2023.106953] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2022] [Revised: 04/10/2023] [Accepted: 04/15/2023] [Indexed: 05/02/2023]
Abstract
The hippocampus has great influence on Alzheimer's disease (AD) research because of its essential role as a biomarker in the human brain, so the performance of hippocampus segmentation influences the development of clinical research on brain disorders. Deep learning using U-net-like networks has become prevalent in hippocampus segmentation on Magnetic Resonance Imaging (MRI) due to its efficiency and accuracy. However, current methods lose detailed information during pooling, which hinders segmentation, and weak supervision of details such as edges or positions results in fuzzy and coarse boundaries, causing large differences between the segmentation and the ground truth. In view of these drawbacks, we propose a Region-Boundary and Structure Net (RBS-Net), which consists of a primary net and an auxiliary net. (1) Our primary net focuses on the region distribution of the hippocampus and introduces a distance map for boundary supervision. Furthermore, the primary net adds a multi-layer feature learning module to compensate for the information lost during pooling and to strengthen the differences between foreground and background, improving region and boundary segmentation. (2) The auxiliary net concentrates on structure similarity and also utilizes the multi-layer feature learning module; this parallel task refines the encoders by making the structure of the segmentation similar to that of the ground truth. We train and test our network using 5-fold cross-validation on HarP, a publicly available hippocampus dataset. Experimental results demonstrate that our proposed RBS-Net achieves an average Dice of 89.76%, outperforming several state-of-the-art hippocampus segmentation methods. Furthermore, in few-shot circumstances, RBS-Net achieves better results in a comprehensive evaluation than several state-of-the-art deep learning-based methods. Finally, visual segmentation results show that RBS-Net improves the boundary and detailed regions.
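The abstract mentions a distance map for boundary supervision. One common construction, sketched here as an illustration rather than the authors' method, assigns each pixel its distance to the nearest foreground (or boundary) pixel; real pipelines would use a fast distance transform such as `scipy.ndimage.distance_transform_edt` instead of this brute force:

```python
import numpy as np

def distance_map(mask):
    """Euclidean distance from every pixel to the nearest foreground pixel.
    O(H * W * F) brute force; for illustration on tiny masks only."""
    fg = np.argwhere(mask)
    h, w = mask.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.min(np.linalg.norm(fg - np.array([i, j]), axis=1))
    return out
```

Weighting a segmentation loss by such a map penalizes errors more heavily near (or far from) the boundary, which is the intuition behind distance-map boundary supervision.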
Affiliation(s)
- Yu Chen
- Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha, 410083, Hunan, China
- Hailin Yue
- Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha, 410083, Hunan, China
- Hulin Kuang
- Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha, 410083, Hunan, China
- Jianxin Wang
- Hunan Provincial Key Lab on Bioinformatics, School of Computer Science and Engineering, Central South University, Changsha, 410083, Hunan, China.
30
Cui C, Yang H, Wang Y, Zhao S, Asad Z, Coburn LA, Wilson KT, Landman BA, Huo Y. Deep multimodal fusion of image and non-image data in disease diagnosis and prognosis: a review. Progress in Biomedical Engineering (Bristol, England) 2023; 5:10.1088/2516-1091/acc2fe. [PMID: 37360402 PMCID: PMC10288577 DOI: 10.1088/2516-1091/acc2fe] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/28/2023]
Abstract
The rapid development of diagnostic technologies in healthcare is leading to higher requirements for physicians to handle and integrate the heterogeneous, yet complementary data that are produced during routine practice. For instance, the personalized diagnosis and treatment planning for a single cancer patient relies on various images (e.g. radiology, pathology and camera images) and non-image data (e.g. clinical data and genomic data). However, such decision-making procedures can be subjective, qualitative, and have large inter-subject variabilities. With the recent advances in multimodal deep learning technologies, an increasingly large number of efforts have been devoted to a key question: how do we extract and aggregate multimodal information to ultimately provide more objective, quantitative computer-aided clinical decision making? This paper reviews the recent studies on dealing with such a question. Briefly, this review will include the (a) overview of current multimodal learning workflows, (b) summarization of multimodal fusion methods, (c) discussion of the performance, (d) applications in disease diagnosis and prognosis, and (e) challenges and future directions.
Affiliation(s)
- Can Cui
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, United States of America
- Haichun Yang
- Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN 37215, United States of America
- Yaohong Wang
- Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN 37215, United States of America
- Shilin Zhao
- Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN 37215, United States of America
- Zuhayr Asad
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, United States of America
- Lori A Coburn
- Division of Gastroenterology, Hepatology, and Nutrition, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37232, United States of America
- Veterans Affairs Tennessee Valley Healthcare System, Nashville, TN 37212, United States of America
- Keith T Wilson
- Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN 37215, United States of America
- Division of Gastroenterology, Hepatology, and Nutrition, Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37232, United States of America
- Veterans Affairs Tennessee Valley Healthcare System, Nashville, TN 37212, United States of America
- Bennett A Landman
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, United States of America
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, United States of America
- Yuankai Huo
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, United States of America
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, United States of America
31
Luo J, Pan M, Mo K, Mao Y, Zou D. Emerging role of artificial intelligence in diagnosis, classification and clinical management of glioma. Semin Cancer Biol 2023; 91:110-123. [PMID: 36907387 DOI: 10.1016/j.semcancer.2023.03.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2022] [Revised: 03/05/2023] [Accepted: 03/08/2023] [Indexed: 03/12/2023]
Abstract
Glioma represents the dominant primary intracranial malignancy in the central nervous system. Artificial intelligence, which mainly includes machine learning and deep learning computational approaches, presents a unique opportunity to enhance the clinical management of glioma by improving tumor segmentation, diagnosis, differentiation, grading, treatment, prediction of clinical outcomes (prognosis and recurrence), molecular features, clinical classification, characterization of the tumor microenvironment, and drug discovery. A growing body of recent studies applies artificial intelligence-based models to disparate data sources of glioma, covering imaging modalities, digital pathology, and high-throughput multi-omics data (especially emerging single-cell RNA sequencing and spatial transcriptomics). While these early findings are promising, future studies are required to standardize artificial intelligence-based models to improve the generalizability and interpretability of the results. Despite prominent issues, targeted clinical application of artificial intelligence approaches in glioma will facilitate the development of precision medicine in this field. If these challenges can be overcome, artificial intelligence has the potential to profoundly change the way patients with or at risk of glioma are cared for.
Affiliation(s)
- Jiefeng Luo
- Department of Neurology, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China
- Mika Pan
- Department of Neurology, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China
- Ke Mo
- Clinical Research Center, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China
- Yingwei Mao
- Department of Biology, Pennsylvania State University, University Park, PA 16802, USA.
- Donghua Zou
- Department of Neurology, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China; Clinical Research Center, The Second Affiliated Hospital of Guangxi Medical University, Nanning 530007, Guangxi, China.
32
Sha Y, Yan Q, Tan Y, Wang X, Zhang H, Yang G. Prediction of the Molecular Subtype of IDH Mutation Combined with MGMT Promoter Methylation in Gliomas via Radiomics Based on Preoperative MRI. Cancers (Basel) 2023; 15:cancers15051440. [PMID: 36900232 PMCID: PMC10001198 DOI: 10.3390/cancers15051440] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2023] [Revised: 02/12/2023] [Accepted: 02/21/2023] [Indexed: 03/12/2023] Open
Abstract
BACKGROUND The molecular subtype of IDH mutation combined with MGMT promoter methylation in gliomas suggests a good prognosis and a potential benefit from TMZ chemotherapy. The aim of this study was to establish a radiomics model to predict this molecular subtype. METHOD The preoperative MR images and genetic data of 498 patients with gliomas were retrospectively collected from our institution and the TCGA/TCIA dataset. A total of 1702 radiomics features were extracted from the tumour region of interest (ROI) of CE-T1 and T2-FLAIR MR images. Least absolute shrinkage and selection operator (LASSO) and logistic regression were used for feature selection and model building. Receiver operating characteristic (ROC) curves and calibration curves were used to evaluate the predictive performance of the model. RESULTS Regarding clinical variables, age and tumour grade were significantly different between the two molecular subtypes in the training, test and independent validation cohorts (p < 0.05). The areas under the curve (AUCs) of the radiomics model based on 16 selected features were 0.936, 0.932, 0.916 and 0.866 in the SMOTE training cohort, un-SMOTE training cohort, test set and independent TCGA/TCIA validation cohort, respectively, with corresponding F1-scores of 0.860, 0.797, 0.880 and 0.802. For the combined model integrating clinical risk factors and the radiomics signature, the AUC in the independent validation cohort increased to 0.930. CONCLUSIONS Radiomics based on preoperative MRI can effectively predict the molecular subtype of IDH mutation combined with MGMT promoter methylation.
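The feature-selection step above pairs LASSO with logistic regression: the L1 penalty drives the weights of uninformative radiomics features exactly to zero, leaving a compact signature (16 features in this study). A toy sketch of L1-penalized logistic regression via proximal gradient descent (ISTA), with hypothetical hyperparameters and not the authors' pipeline (which in practice would use a library such as scikit-learn):

```python
import numpy as np

def soft_threshold(w, t):
    """Proximal operator of the L1 norm."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def lasso_logistic(X, y, lam=0.1, lr=0.1, iters=500):
    """L1-penalized logistic regression fit by ISTA; features whose weight
    is driven exactly to zero are 'unselected', mimicking LASSO selection."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (1.0 / (1.0 + np.exp(-(X @ w))) - y) / n
        w = soft_threshold(w - lr * grad, lr * lam)
    return w
```

On toy data where only the first feature predicts the label, the second (noise) feature keeps an exactly-zero weight, which is the selection behavior the radiomics pipeline relies on.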
Affiliation(s)
- Yongjian Sha
- Department of Radiology, First Hospital of Shanxi Medical University, Taiyuan 030001, China
- Xi'an No.3 Hospital, Affiliated Hospital of Northwest University, Xi'an 710018, China
- Qianqian Yan
- Department of Radiology, First Hospital of Shanxi Medical University, Taiyuan 030001, China
- Yan Tan
- Department of Radiology, First Hospital of Shanxi Medical University, Taiyuan 030001, China
- Xiaochun Wang
- Department of Radiology, First Hospital of Shanxi Medical University, Taiyuan 030001, China
- Hui Zhang
- Department of Radiology, First Hospital of Shanxi Medical University, Taiyuan 030001, China
- Guoqiang Yang
- Department of Radiology, First Hospital of Shanxi Medical University, Taiyuan 030001, China
33
Zhao Y, Wang X, Che T, Bao G, Li S. Multi-task deep learning for medical image computing and analysis: A review. Comput Biol Med 2023; 153:106496. [PMID: 36634599 DOI: 10.1016/j.compbiomed.2022.106496] [Citation(s) in RCA: 14] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2022] [Revised: 12/06/2022] [Accepted: 12/27/2022] [Indexed: 12/29/2022]
Abstract
The renaissance of deep learning has provided promising solutions to various tasks. While conventional deep learning models are constructed for a single specific task, multi-task deep learning (MTDL), which is capable of simultaneously accomplishing at least two tasks, has attracted research attention. MTDL is a joint learning paradigm that harnesses the inherent correlation of multiple related tasks to achieve reciprocal benefits in improving performance, enhancing generalizability, and reducing the overall computational cost. This review focuses on advanced applications of MTDL in medical image computing and analysis. We first summarize four popular MTDL network architectures (i.e., cascaded, parallel, interacted, and hybrid). Then, we review representative MTDL-based networks for eight application areas, including the brain, eye, chest, cardiac, abdomen, musculoskeletal, pathology, and other human body regions. While MTDL-based medical image processing has been flourishing and demonstrating outstanding performance in many tasks, performance gaps remain in some tasks, and accordingly we discuss open challenges and perspective trends. For instance, in the 2018 Ischemic Stroke Lesion Segmentation challenge, the reported top Dice score of 0.51 and top recall of 0.55 achieved by the cascaded MTDL model indicate that further research efforts are in high demand to escalate the performance of current models.
Affiliation(s)
- Yan Zhao
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Xiuying Wang
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia.
- Tongtong Che
- Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing, 100083, China
- Guoqing Bao
- School of Computer Science, The University of Sydney, Sydney, NSW, 2008, Australia
- Shuyu Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, 100875, China.
34
Automated Collateral Scoring on CT Angiography of Patients with Acute Ischemic Stroke Using Hybrid CNN and Transformer Network. Biomedicines 2023; 11:biomedicines11020243. [PMID: 36830780 PMCID: PMC9953344 DOI: 10.3390/biomedicines11020243] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2022] [Revised: 01/10/2023] [Accepted: 01/14/2023] [Indexed: 01/20/2023] Open
Abstract
Collateral scoring plays an important role in the diagnosis and treatment decisions of acute ischemic stroke (AIS). Most existing automated methods rely on vessel prominence and quantity after vessel segmentation. The purpose of this study was to design a vessel-segmentation-free method for automated collateral scoring on CT angiography (CTA). We first processed the original CTA via maximum intensity projection (MIP) and middle cerebral artery (MCA) region segmentation. The obtained MIP images were fed into our proposed hybrid CNN and Transformer model (MPViT) to automatically determine the collateral scores. We collected 154 CTA scans of patients with AIS for evaluation using five-fold cross-validation. Results show that the proposed MPViT achieved an intraclass correlation coefficient of 0.767 (95% CI: 0.68-0.83) and a Kappa of 0.6184 (95% CI: 0.4954-0.7414) for three-point collateral score classification. For dichotomized classification (good vs. non-good and poor vs. non-poor), it also achieved strong performance.
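The Kappa reported above is Cohen's kappa, i.e., observed agreement corrected for chance agreement between two raters. A minimal pure-Python sketch with hypothetical three-point collateral scores (not the study's data):

```python
def cohens_kappa(rater_a, rater_b, labels):
    """Cohen's kappa: chance-corrected agreement between two label sequences."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the two raters' marginal label frequencies.
    expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical model vs. expert scores on eight scans.
model  = [0, 1, 2, 2, 1, 0, 1, 2]
expert = [0, 1, 2, 1, 1, 0, 2, 2]
print(round(cohens_kappa(model, expert, labels=[0, 1, 2]), 3))  # → 0.619
```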
35
AGGN: Attention-based glioma grading network with multi-scale feature extraction and multi-modal information fusion. Comput Biol Med 2023; 152:106457. [PMID: 36571937 DOI: 10.1016/j.compbiomed.2022.106457] [Citation(s) in RCA: 20] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Revised: 12/06/2022] [Accepted: 12/19/2022] [Indexed: 12/24/2022]
Abstract
In this paper, a novel attention-based glioma grading network (AGGN) for magnetic resonance imaging (MRI) is proposed. By applying a dual-domain attention mechanism, both channel and spatial information are considered when assigning weights, which helps highlight the key modalities and locations in the feature maps. Multi-branch convolution and pooling operations are applied in a multi-scale feature extraction module to separately obtain shallow and deep features from each modality, and a multi-modal information fusion module is adopted to fully merge low-level detailed and high-level semantic features, which promotes synergistic interaction among the different modalities. The proposed AGGN is comprehensively evaluated through extensive experiments, and the results demonstrate its effectiveness and superiority over other advanced models, as well as high generalization ability and strong robustness. In addition, even without manually labeled tumor masks, AGGN achieves performance comparable to other state-of-the-art algorithms, which alleviates the excessive reliance on supervised information in the end-to-end learning paradigm.
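The channel half of a dual-domain attention mechanism can be illustrated with a toy example: each channel's feature map is reweighted by a softmax over its global average activation. This is a simplified sketch (no learned layers, plain nested lists), not the AGGN implementation:

```python
import math

def channel_attention(feature_maps):
    """Toy channel attention: scale each 2D channel map by a softmax weight
    derived from its global average activation (squeeze step of squeeze-and-excite,
    with the learned excitation layers omitted for illustration)."""
    means = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feature_maps]
    exps = [math.exp(m) for m in means]
    z = sum(exps)
    weights = [e / z for e in exps]  # softmax over channels
    reweighted = [[[w * v for v in row] for row in ch]
                  for ch, w in zip(feature_maps, weights)]
    return reweighted, weights

# Two hypothetical 2x2 channels; the more active channel gets the larger weight.
maps = [[[1.0, 1.0], [1.0, 1.0]], [[2.0, 2.0], [2.0, 2.0]]]
out, w = channel_attention(maps)
```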
36
Xu Q, Xu QQ, Shi N, Dong LN, Zhu H, Xu K. A multitask classification framework based on vision transformer for predicting molecular expressions of glioma. Eur J Radiol 2022; 157:110560. [DOI: 10.1016/j.ejrad.2022.110560] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2022] [Revised: 09/29/2022] [Accepted: 10/11/2022] [Indexed: 11/03/2022]
37
Cost Matrix of Molecular Pathology in Glioma-Towards AI-Driven Rational Molecular Testing and Precision Care for the Future. Biomedicines 2022; 10:biomedicines10123029. [PMID: 36551786 PMCID: PMC9775648 DOI: 10.3390/biomedicines10123029] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2022] [Revised: 11/09/2022] [Accepted: 11/19/2022] [Indexed: 11/27/2022] Open
Abstract
Gliomas are the most common and aggressive primary brain tumors. They carry a poor prognosis because of their resistance to radiation and chemotherapy, which leads to nearly universal recurrence. Recent advances in large-scale genomic research have allowed for the development of more targeted therapies to treat glioma. While precision medicine can target specific molecular features in glioma, targeted therapies are often not feasible due to the lack of actionable markers and the high cost of molecular testing. This review summarizes the clinically relevant molecular features in glioma and the current cost of care for glioma patients. We focus on the molecular markers and meaningful clinical features that are linked to clinical outcomes and can realistically be measured, a promising direction for precision medicine using artificial intelligence approaches.
38
Mi C. Improving the Robustness of Loanword Identification in Social Media Texts. ACM T ASIAN LOW-RESO 2022. [DOI: 10.1145/3572773] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
As a potential bilingual resource, loanwords play a very important role in many natural language processing tasks. If loanwords in a low-resource language can be identified effectively, the generated donor-recipient word pairs will benefit many cross-lingual NLP tasks. However, most studies on loanword identification focus on formal texts such as news and government documents; loanword identification in social media texts remains under-studied. Since it faces many challenges and can benefit several downstream tasks, more effort should be devoted to loanword identification in social media texts. In this study, we present a multi-task learning architecture with deep bi-directional RNNs for loanword identification in social media texts, where supervision for different tasks can occur at different layers. The multi-task neural network architecture learns higher-order feature representations from word and character sequences, along with basic spelling error checking (SEC), part-of-speech (POS) tagging, and named entity recognition (NER) information. Experimental results on Uyghur loanword identification in social media texts for five donor languages (Chinese, Arabic, Russian, Turkish, and Farsi) show that our method achieves the best performance compared with several strong baseline systems. We also add the loanword detection results to the training data of neural machine translation for low-resource language pairs. Experiments show that models trained on the extended datasets achieve significant improvements over the baseline models in all language pairs.
Affiliation(s)
- Chenggang Mi
- Foreign Language and Literature Institute, Xi’an International Studies University, China
39
Yan C, Ding C, Duan G. PMMS: Predicting essential miRNAs based on multi-head self-attention mechanism and sequences. Front Med (Lausanne) 2022; 9:1015278. [DOI: 10.3389/fmed.2022.1015278] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2022] [Accepted: 10/25/2022] [Indexed: 11/18/2022] Open
Abstract
Increasing evidence has shown that miRNAs play a significant role in biological processes. To understand the etiology and mechanisms of various diseases, it is necessary to identify the essential miRNAs, but doing so with traditional biological experiments is time-consuming and expensive. It is therefore critical to develop computational methods to predict potential essential miRNAs. In this study, we present a new computational method (called PMMS) to identify essential miRNAs using multi-head self-attention and sequences. First, PMMS computes statistical and structural features and obtains a static feature by concatenating them. Second, PMMS extracts a deep learning feature (BiLSTM-based feature) using a bi-directional long short-term memory (BiLSTM) network and pre-miRNA sequences. In addition, we obtain a multi-head self-attention feature (MS-based feature) from the BiLSTM-based feature via a multi-head self-attention mechanism. Considering the importance of pre-miRNA subsequences relative to the static feature of the miRNA, we obtain the final deep learning feature (WA-based feature) via a weighted attention mechanism. Finally, we concatenate the WA-based feature and the static feature as input to a multilayer perceptron (MLP) model to predict essential miRNAs. We conducted five-fold cross-validation to evaluate the prediction performance of PMMS, using the area under the ROC curve (AUC), F1-score, and accuracy (ACC) as performance metrics. In the experiments, PMMS obtained the best prediction performance (AUC: 0.9556, F1-score: 0.9030, and ACC: 0.9097) and outperformed the other compared methods. The experimental results illustrate that PMMS is an effective method to identify essential miRNAs.
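The AUC used above has a simple rank interpretation: the probability that a randomly chosen positive example is scored higher than a randomly chosen negative one (the Mann-Whitney U statistic). A minimal pure-Python sketch with hypothetical scores:

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the fraction of positive/negative
    pairs where the positive example is scored higher (ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0]))  # → 0.75
```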
40
Liu Y, He Q, Duan H, Shi H, Han A, He Y. Using Sparse Patch Annotation for Tumor Segmentation in Histopathological Images. Sensors (Basel) 2022; 22:6053. [PMID: 36015814 PMCID: PMC9414209 DOI: 10.3390/s22166053] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/19/2022] [Revised: 08/05/2022] [Accepted: 08/10/2022] [Indexed: 06/15/2023]
Abstract
Tumor segmentation is a fundamental task in histopathological image analysis. Creating accurate pixel-wise annotations for such segmentation tasks in a fully supervised training framework requires significant effort. To reduce the burden of manual annotation, we propose a novel weakly supervised segmentation framework based on sparse patch annotation, i.e., only small portions of the patches in an image are labeled as 'tumor' or 'normal'. The framework consists of a patch-wise segmentation model called PSeger and an innovative semi-supervised algorithm. PSeger has two branches, for patch classification and image classification, respectively. This two-branch structure enables the model to learn more general features and thus reduces the risk of overfitting when learning from sparsely annotated data. We incorporate the ideas of consistency learning and self-training into the semi-supervised training strategy to take advantage of the unlabeled images. Trained on the BCSS dataset with only 25% of the images labeled (five patches per labeled image), our proposed method achieved competitive performance compared with fully supervised pixel-wise segmentation models. The experiments demonstrate that the proposed solution has the potential to reduce the burden of labeling histopathological images.
Affiliation(s)
- Yiqing Liu
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China
- Qiming He
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China
- Hufei Duan
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China
- Huijuan Shi
- Department of Pathology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou 510080, China
- Anjia Han
- Department of Pathology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou 510080, China
- Yonghong He
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China
41
Swin Transformer Improves the IDH Mutation Status Prediction of Gliomas Free of MRI-Based Tumor Segmentation. J Clin Med 2022; 11:jcm11154625. [PMID: 35956236 PMCID: PMC9369996 DOI: 10.3390/jcm11154625] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2022] [Revised: 08/03/2022] [Accepted: 08/05/2022] [Indexed: 11/16/2022] Open
Abstract
Background: Deep learning (DL) can predict isocitrate dehydrogenase (IDH) mutation status from MRIs, yet previous work has focused on CNNs with refined tumor segmentation. To bridge the gap, this study aimed to evaluate the feasibility of developing a Transformer-based network that predicts IDH mutation status free of refined tumor segmentation. Methods: A total of 493 glioma patients were recruited from two independent institutions for model development (TCIA; N = 259) and external testing (AHXZ; N = 234). IDH mutation status was predicted directly from T2 images with a Swin Transformer and a conventional ResNet. Furthermore, to investigate the necessity of refined tumor segmentation, seven strategies for the model input image were explored: (i) whole tumor slice; (ii-iii) tumor mask with and without edema; (iv-vii) tumor bounding boxes scaled by 0.8, 1.0, 1.2, and 1.5 times. Performance was compared among the networks of different architectures and input strategies using the area under the curve (AUC) and accuracy (ACC). Finally, to further boost performance, a hybrid model was built by combining the images with clinical features. Results: With the seven proposed input strategies, seven Swin Transformer models and seven ResNet models were built. The seven Swin Transformer models achieved averaged AUCs of 0.965 (internal test) and 0.842 (external test), outperforming the 0.922 and 0.805 of the seven ResNet models. With a bounding box scaled by 1.0 times, the Swin Transformer (AUC = 0.868, ACC = 80.7%) achieved the best results, outperforming the one that used tumor segmentation (tumor + edema, AUC = 0.862, ACC = 78.5%). The hybrid model that integrated age and location features with the images yielded improved performance (AUC = 0.878, ACC = 82.0%) over the image-only model. Conclusions: The Swin Transformer outperforms the CNN-based ResNet in IDH prediction. Using bounding-box input images benefits the DL networks and makes IDH prediction free of refined glioma segmentation feasible.
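The bounding-box input strategies above scale a tumor's box by a fixed factor about its center. A minimal sketch of that operation (the coordinates and image size below are hypothetical, and the clipping-to-image behavior is an assumption, not the paper's code):

```python
def scale_bbox(x0, y0, x1, y1, factor, img_w, img_h):
    """Scale an axis-aligned bounding box about its center by `factor`,
    clipping the result to the image bounds."""
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    half_w, half_h = (x1 - x0) * factor / 2, (y1 - y0) * factor / 2
    return (max(0, cx - half_w), max(0, cy - half_h),
            min(img_w, cx + half_w), min(img_h, cy + half_h))

print(scale_bbox(40, 40, 80, 80, 1.5, 128, 128))  # → (30.0, 30.0, 90.0, 90.0)
```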
42
Diagnosis of middle cerebral artery stenosis using the transcranial Doppler images based on convolutional neural network. World Neurosurg 2022; 161:e118-e125. [PMID: 35077885 DOI: 10.1016/j.wneu.2022.01.068] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2021] [Revised: 01/16/2022] [Accepted: 01/17/2022] [Indexed: 11/23/2022]
Abstract
BACKGROUND The purpose of this study was to explore the diagnostic value of convolutional neural networks (CNNs) for middle cerebral artery (MCA) stenosis by analyzing transcranial Doppler (TCD) images. METHODS Overall, 278 patients who underwent cerebral vascular TCD and cerebral angiography were enrolled and classified into stenosis and non-stenosis groups based on cerebral angiography findings. Manual measurements were performed on the TCD images. The patients were divided into a training set and a test set, and a CNN architecture was used to classify the TCD images. The diagnostic accuracies of manual measurements, the CNNs, and TCD parameters for MCA stenosis were calculated and compared. RESULTS Overall, 203 patients without stenosis and 75 patients with stenosis were evaluated. The sensitivity, specificity, and area under the curve (AUC) for manual measurement of MCA stenosis were 0.80, 0.83, and 0.81, respectively. After 24 training iterations on the training set, the sensitivity, specificity, and AUC of the CNNs on the test set were 0.84, 0.86, and 0.80, respectively. The diagnostic value of the CNNs differed minimally from that of manual measurements. Two TCD parameters, peak systolic velocity and mean flow velocity, were higher in patients with stenosis than in those without; however, their diagnostic values were significantly lower than those of the CNNs (P < 0.05). CONCLUSIONS The diagnostic value of CNNs for MCA stenosis based on TCD images paralleled that of manual measurements. CNNs could be used as an auxiliary diagnostic tool to improve the diagnosis of MCA stenosis.
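Sensitivity and specificity as reported above come straight from a confusion matrix: recall on the positive (stenosis) class and on the negative (non-stenosis) class, respectively. A minimal sketch; the counts below are hypothetical values chosen only to reproduce sensitivities of the same order, not the study's actual confusion matrix:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 75 stenosis (positive) and 203 non-stenosis (negative) cases.
sens, spec = sens_spec(tp=63, fn=12, tn=175, fp=28)
print(round(sens, 2), round(spec, 2))  # → 0.84 0.86
```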