1.
Cui H, Li Y, Wang Y, Xu D, Wu LM, Xia Y. Toward Accurate Cardiac MRI Segmentation With Variational Autoencoder-Based Unsupervised Domain Adaptation. IEEE Transactions on Medical Imaging 2024; 43:2924-2936. [PMID: 38546999] [DOI: 10.1109/tmi.2024.3382624]
Abstract
Accurate myocardial segmentation is crucial in the diagnosis and treatment of myocardial infarction (MI), especially in Late Gadolinium Enhancement (LGE) cardiac magnetic resonance (CMR) images, where the infarcted myocardium appears brighter. However, segmentation annotations for LGE images are usually not available. Although knowledge gained from CMR images of other modalities with ample annotations, such as balanced-Steady State Free Precession (bSSFP), can be transferred to LGE images, the difference in image distribution between the two modalities (i.e., domain shift) usually causes a significant degradation in model performance. To alleviate this, an end-to-end Variational autoencoder based feature Alignment Module Combining Explicit and Implicit features (VAMCEI) is proposed. We first re-derive the Kullback-Leibler (KL) divergence between the posterior distributions of the two domains as a measure of the global distribution distance. Second, we calculate a prototype contrastive loss between the two domains, pulling prototypes of the same category closer across domains and pushing apart prototypes of different categories within and across domains. Finally, a domain discriminator is added to the output space, which indirectly aligns the feature distributions and forces the extracted features to be more favorable for segmentation. In addition, by combining CycleGAN and VAMCEI, we propose a more refined multi-stage unsupervised domain adaptation (UDA) framework for myocardial structure segmentation. We conduct extensive experiments on the MSCMRSeg 2019, MyoPS 2020 and MM-WHS 2017 datasets. The experimental results demonstrate that our framework outperforms state-of-the-art methods.
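The global alignment term in this abstract rests on the closed-form KL divergence between Gaussian VAE posteriors. A minimal sketch of that quantity for diagonal Gaussians (a generic illustration, not the paper's exact re-derivation; the function name is ours):

```python
import numpy as np

def kl_diag_gaussians(mu1, logvar1, mu2, logvar2):
    """Closed-form KL( N(mu1, var1) || N(mu2, var2) ) for diagonal Gaussians,
    summed over latent dimensions and averaged over the batch."""
    var1, var2 = np.exp(logvar1), np.exp(logvar2)
    kl = 0.5 * (logvar2 - logvar1 + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)
    return kl.sum(axis=-1).mean()

# Identical posteriors give zero divergence; shifting the mean makes it positive.
mu, logvar = np.zeros((4, 8)), np.zeros((4, 8))
print(kl_diag_gaussians(mu, logvar, mu, logvar))        # 0.0
print(kl_diag_gaussians(mu + 1.0, logvar, mu, logvar))  # 4.0
```

Minimizing such a term over batches of source- and target-domain latent codes is one way a VAE-based method can pull the two posterior distributions together.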
2.
Ding N, Yuan Z, Ma Z, Wu Y, Yin L. AI-Assisted Rational Design and Activity Prediction of Biological Elements for Optimizing Transcription-Factor-Based Biosensors. Molecules 2024; 29:3512. [PMID: 39124917] [PMCID: PMC11313831] [DOI: 10.3390/molecules29153512]
Abstract
The rational design, activity prediction, and adaptive application of biological elements (bio-elements) are crucial research fields in synthetic biology. Currently, a major challenge in the field is efficiently designing desired bio-elements and accurately predicting their activity using vast datasets. The advancement of artificial intelligence (AI) technology has enabled machine learning and deep learning algorithms to excel in uncovering patterns in bio-element data and predicting their performance. This review explores the application of AI algorithms in the rational design of bio-elements, activity prediction, and the regulation of transcription-factor-based biosensor response performance using AI-designed elements. We discuss the advantages, adaptability, and biological challenges addressed by the AI algorithms in various applications, highlighting their powerful potential in analyzing biological data. Furthermore, we propose innovative solutions to the challenges faced by AI algorithms in the field and suggest future research directions. By consolidating current research and demonstrating the practical applications and future potential of AI in synthetic biology, this review provides valuable insights for advancing both academic research and practical applications in biotechnology.
Affiliation(s)
- Nana Ding: State Key Laboratory of Subtropical Silviculture, Zhejiang A&F University, Hangzhou 311300, China; Zhejiang Provincial Key Laboratory of Resources Protection and Innovation of Traditional Chinese Medicine, Zhejiang A&F University, Hangzhou 311300, China
- Zenan Yuan: State Key Laboratory of Subtropical Silviculture, Zhejiang A&F University, Hangzhou 311300, China; Zhejiang Provincial Key Laboratory of Resources Protection and Innovation of Traditional Chinese Medicine, Zhejiang A&F University, Hangzhou 311300, China
- Zheng Ma: Zhejiang Provincial Key Laboratory of Biometrology and Inspection & Quarantine, College of Life Sciences, China Jiliang University, Hangzhou 310018, China
- Yefei Wu: Zhejiang Qianjiang Biochemical Co., Ltd., Haining 314400, China
- Lianghong Yin: State Key Laboratory of Subtropical Silviculture, Zhejiang A&F University, Hangzhou 311300, China; Zhejiang Provincial Key Laboratory of Resources Protection and Innovation of Traditional Chinese Medicine, Zhejiang A&F University, Hangzhou 311300, China
3.
Wang H, Jin Q, Li S, Liu S, Wang M, Song Z. A comprehensive survey on deep active learning in medical image analysis. Med Image Anal 2024; 95:103201. [PMID: 38776841] [DOI: 10.1016/j.media.2024.103201]
Abstract
Deep learning has achieved widespread success in medical image analysis, leading to an increasing demand for large-scale expert-annotated medical image datasets. Yet the high cost of annotating medical images severely hampers the development of deep learning in this field. To reduce annotation costs, active learning aims to select the most informative samples for annotation and to train high-performance models with as few labeled samples as possible. In this survey, we review the core methods of active learning, including the evaluation of informativeness and the sampling strategy. For the first time, we provide a detailed summary of the integration of active learning with other label-efficient techniques, such as semi-supervised and self-supervised learning. We also summarize active learning works specifically tailored to medical image analysis. Additionally, we conduct a thorough experimental comparison of the performance of different active learning methods in medical image analysis. Finally, we offer our perspectives on the future trends and challenges of active learning and its applications in medical image analysis. An accompanying paper list and code for the comparative analysis are available at https://github.com/LightersWang/Awesome-Active-Learning-for-Medical-Image-Analysis.
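The informativeness evaluation this survey covers is most often instantiated as uncertainty sampling. A minimal sketch of one common variant, entropy-based query selection (an illustration of the general technique, not code from any surveyed work; the function name is ours):

```python
import numpy as np

def entropy_query(probs, k):
    """Select the k unlabeled samples with the highest predictive entropy.
    probs: (n_samples, n_classes) softmax outputs of the current model."""
    eps = 1e-12  # avoid log(0)
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)
    return np.argsort(-entropy)[:k]  # indices of the most uncertain samples

probs = np.array([[0.98, 0.02],   # confident prediction
                  [0.55, 0.45],   # near-uniform, highly uncertain
                  [0.80, 0.20]])
print(entropy_query(probs, 1).tolist())  # [1]
```

In a full active-learning loop, the selected indices would be sent to annotators, added to the labeled pool, and the model retrained before the next query round.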
Affiliation(s)
- Haoran Wang: Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai 200032, China; Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, China
- Qiuye Jin: Computational Bioscience Research Center (CBRC), King Abdullah University of Science and Technology (KAUST), Thuwal 23955, Saudi Arabia
- Shiman Li: Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai 200032, China; Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, China
- Siyu Liu: Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai 200032, China; Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, China
- Manning Wang: Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai 200032, China; Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, China
- Zhijian Song: Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai 200032, China; Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, China
4.
Liu Q, Yue J, Kuang Y, Xie W, Fang L. SemiRS-COC: Semi-Supervised Classification for Complex Remote Sensing Scenes With Cross-Object Consistency. IEEE Transactions on Image Processing 2024; 33:3855-3870. [PMID: 38896517] [DOI: 10.1109/tip.2024.3414122]
Abstract
Semi-supervised learning (SSL), which aims to learn from limited labeled data together with large amounts of unlabeled data, offers a promising way to exploit the vast volume of satellite Earth observation images. The fundamental concept underlying most state-of-the-art SSL methods is to generate pseudo-labels for unlabeled data from image-level predictions. However, complex remote sensing (RS) scene images frequently suffer from interference by multiple background objects and significant intra-class differences, resulting in unreliable pseudo-labels. In this paper, we propose SemiRS-COC, a novel semi-supervised classification method for complex RS scenes. Inspired by the idea that neighboring objects in feature space should share consistent semantic labels, SemiRS-COC utilizes the similarity between foreground objects in RS images to generate reliable object-level pseudo-labels, effectively addressing the issues of multiple background objects and significant intra-class differences in complex RS images. Specifically, we first design a Local Self-Learning Object Perception (LSLOP) mechanism, which transforms interference from multiple background objects into usable annotation information, enhancing the model's object perception capability. Furthermore, we present a Cross-Object Consistency Pseudo-Labeling (COCPL) strategy, which generates reliable object-level pseudo-labels by comparing the similarity of foreground objects across different RS images, effectively handling significant intra-class differences. Extensive experiments demonstrate that our proposed method achieves excellent performance compared to state-of-the-art methods on three widely adopted RS datasets.
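Similarity-based object-level pseudo-labeling of the kind COCPL performs can be sketched generically as nearest-prototype assignment with a similarity threshold (an illustrative simplification, not the authors' implementation; the function name and the threshold value are assumptions):

```python
import numpy as np

def cosine_pseudo_labels(feats, prototypes, tau=0.8):
    """Assign each object feature the label of its most similar class
    prototype (cosine similarity); reject assignments below tau.
    Returns per-object labels, with -1 marking rejected pseudo-labels."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = f @ p.T                       # (n_objects, n_classes)
    labels = sims.argmax(axis=1)
    labels[sims.max(axis=1) < tau] = -1  # too dissimilar: no pseudo-label
    return labels

feats = np.array([[1.0, 0.1], [0.1, 1.0], [0.7, 0.7]])
prototypes = np.array([[1.0, 0.0], [0.0, 1.0]])
print(cosine_pseudo_labels(feats, prototypes).tolist())  # [0, 1, -1]
```

The thresholding step is what keeps only reliable pseudo-labels for training; ambiguous foreground objects (like the third feature above) contribute no label.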
5.
Wang M, Lin T, Wang L, Lin A, Zou K, Xu X, Zhou Y, Peng Y, Meng Q, Qian Y, Deng G, Wu Z, Chen J, Lin J, Zhang M, Zhu W, Zhang C, Zhang D, Goh RSM, Liu Y, Pang CP, Chen X, Chen H, Fu H. Uncertainty-inspired open set learning for retinal anomaly identification. Nat Commun 2023; 14:6757. [PMID: 37875484] [PMCID: PMC10598011] [DOI: 10.1038/s41467-023-42444-7]
Abstract
Failure to recognize samples from classes unseen during training is a major limitation of artificial intelligence in real-world recognition and classification of retinal anomalies. We establish an uncertainty-inspired open set (UIOS) model, which is trained with fundus images of 9 retinal conditions. Besides assessing the probability of each category, UIOS also calculates an uncertainty score to express its confidence. Our UIOS model with a thresholding strategy achieves F1 scores of 99.55%, 97.01% and 91.91% on the internal testing set, the external target categories (TC)-JSIEC dataset and the TC-unseen testing set, respectively, compared with F1 scores of 92.20%, 80.69% and 64.74% for the standard AI model. Furthermore, UIOS correctly predicts high uncertainty scores, which would prompt a manual check, on datasets of non-target-category retinal diseases, low-quality fundus images, and non-fundus images. UIOS provides a robust method for real-world screening of retinal anomalies.
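The thresholding strategy described here routes high-uncertainty cases to a human. A minimal sketch using normalized predictive entropy as the uncertainty score (UIOS itself derives its score from an evidential formulation, so this stand-in, its name, and the threshold value are all illustrative assumptions):

```python
import numpy as np

def screen(probs, threshold=0.5):
    """Return the predicted class when normalized predictive entropy is
    below threshold, else -1 ('refer for manual check').
    probs: (n_samples, n_classes) softmax outputs."""
    eps = 1e-12
    # Entropy normalized to [0, 1] by its maximum, log(n_classes).
    ent = -(probs * np.log(probs + eps)).sum(axis=1) / np.log(probs.shape[1])
    out = probs.argmax(axis=1)
    out[ent >= threshold] = -1  # too uncertain: defer to a clinician
    return out

probs = np.array([[0.97, 0.02, 0.01],     # confident: auto-classified
                  [1/3, 1/3, 1/3]])       # maximally uncertain: referred
print(screen(probs).tolist())  # [0, -1]
```

The same routing logic applies whatever the underlying uncertainty score: confident predictions are returned automatically, while out-of-distribution or low-quality inputs fall through to manual review.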
Affiliation(s)
- Meng Wang: Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore
- Tian Lin: Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, 515041, Shantou, Guangdong, China
- Lianyu Wang: College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, 211100, Nanjing, Jiangsu, China; Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing University of Aeronautics and Astronautics, 211106, Nanjing, Jiangsu, China
- Aidi Lin: Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, 515041, Shantou, Guangdong, China
- Ke Zou: National Key Laboratory of Fundamental Science on Synthetic Vision and the College of Computer Science, Sichuan University, 610065, Chengdu, Sichuan, China
- Xinxing Xu: Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore
- Yi Zhou: School of Electronics and Information Engineering, Soochow University, 215006, Suzhou, Jiangsu, China
- Yuanyuan Peng: School of Biomedical Engineering, Anhui Medical University, 230032, Hefei, Anhui, China
- Qingquan Meng: School of Electronics and Information Engineering, Soochow University, 215006, Suzhou, Jiangsu, China
- Yiming Qian: Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore
- Guoyao Deng: National Key Laboratory of Fundamental Science on Synthetic Vision and the College of Computer Science, Sichuan University, 610065, Chengdu, Sichuan, China
- Zhiqun Wu: Longchuan People's Hospital, 517300, Heyuan, Guangdong, China
- Junhong Chen: Puning People's Hospital, 515300, Jieyang, Guangdong, China
- Jianhong Lin: Haifeng PengPai Memory Hospital, 516400, Shanwei, Guangdong, China
- Mingzhi Zhang: Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, 515041, Shantou, Guangdong, China
- Weifang Zhu: School of Electronics and Information Engineering, Soochow University, 215006, Suzhou, Jiangsu, China
- Changqing Zhang: College of Intelligence and Computing, Tianjin University, 300350, Tianjin, China
- Daoqiang Zhang: College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, 211100, Nanjing, Jiangsu, China; Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing University of Aeronautics and Astronautics, 211106, Nanjing, Jiangsu, China
- Rick Siow Mong Goh: Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore
- Yong Liu: Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore
- Chi Pui Pang: Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, 515041, Shantou, Guangdong, China; Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, 999077, Hong Kong, China
- Xinjian Chen: School of Electronics and Information Engineering, Soochow University, 215006, Suzhou, Jiangsu, China; State Key Laboratory of Radiation Medicine and Protection, Soochow University, 215006, Suzhou, China
- Haoyu Chen: Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, 515041, Shantou, Guangdong, China
- Huazhu Fu: Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore, 138632, Republic of Singapore
6.
Gilotra K, Swarna S, Mani R, Basem J, Dashti R. Role of artificial intelligence and machine learning in the diagnosis of cerebrovascular disease. Front Hum Neurosci 2023; 17:1254417. [PMID: 37746051] [PMCID: PMC10516608] [DOI: 10.3389/fnhum.2023.1254417]
Abstract
Introduction: Cerebrovascular diseases cause significant morbidity and mortality in the general population. In patients with cerebrovascular disease, prompt clinical evaluation and radiographic interpretation are both essential for optimizing clinical management and for triaging patients for critical and potentially life-saving neurosurgical interventions. With recent advancements in artificial intelligence (AI) and machine learning (ML), many AI and ML algorithms have been developed to further optimize the diagnosis and subsequent management of cerebrovascular disease. Despite such advances, further studies are needed to substantively evaluate both the diagnostic accuracy and the feasibility of these techniques for application in clinical practice. This review analyzes the current use of AI and ML algorithms in the diagnosis of, and clinical decision making for, cerebrovascular disease, and discusses both the feasibility and future applications of such algorithms.
Methods: We review the use of AI and ML algorithms to assist clinicians in the diagnosis and management of ischemic stroke, hemorrhagic stroke, intracranial aneurysms, and arteriovenous malformations (AVMs). After identifying the most widely used algorithms, we provide a detailed analysis of their accuracy and effectiveness in practice.
Results: The incorporation of AI and ML algorithms for cerebrovascular patients has demonstrated improvements in time to detection of intracranial pathologies such as intracerebral hemorrhage (ICH) and infarcts. For ischemic and hemorrhagic strokes, commercial AI software platforms such as RapidAI and Viz.AI have been implemented into routine clinical practice at many stroke centers to expedite the detection of infarcts and ICH, respectively. Such algorithms and neural networks have also been analyzed for prognostication of these cerebrovascular pathologies, including predicting outcomes for ischemic stroke patients, hematoma expansion, risk of aneurysm rupture, bleeding of AVMs, and outcomes following interventions, such as risk of occlusion for various endovascular devices. Preliminary analyses have yielded promising sensitivities when AI and ML are used in concert with imaging modalities and a multidisciplinary team of health care providers.
Conclusion: The implementation of AI and ML algorithms to supplement clinical practice has conferred a high degree of accuracy, efficiency, and expedited detection in the clinical and radiographic evaluation and management of ischemic and hemorrhagic strokes, AVMs, and aneurysms. These algorithms have also been explored for prognostication of these conditions, with promising preliminary results. Further studies should evaluate the longitudinal implementation of such techniques into hospital networks and residency programs, and the extent to which they improve patient care and clinical outcomes in the long term.
Collapse
Affiliation(s)
- Reza Dashti: Dashti Lab, Department of Neurological Surgery, Stony Brook University Hospital, Stony Brook, NY, United States