1
Khatun Z, Jónsson H, Tsirilaki M, Maffulli N, Oliva F, Daval P, Tortorella F, Gargiulo P. Beyond pixel: Superpixel-based MRI segmentation through traditional machine learning and graph convolutional network. Comput Methods Programs Biomed 2024;256:108398. [PMID: 39236562] [DOI: 10.1016/j.cmpb.2024.108398]
Abstract
BACKGROUND AND OBJECTIVE Tendon segmentation is crucial for studying tendon-related pathologies such as tendinopathy and tendinosis, and it enables detailed analysis of specific tendon regions using automated or semi-automated methods. This study specifically targets segmentation of the Achilles tendon, the largest tendon in the human body. METHODS This study proposes a comprehensive end-to-end tendon segmentation module in which a preliminary superpixel-based coarse segmentation precedes the final segmentation task. The final segmentation results are obtained through two distinct approaches. In the first approach, the coarsely generated superpixels are classified as tendon or non-tendon using Random Forest (RF) and Support Vector Machine (SVM) classifiers, yielding the tendon segmentation. In the second approach, the arrangement of superpixels is converted to a graph instead of being treated as a conventional image grid, and a graph convolutional network (GCN) determines whether each superpixel (graph node) belongs to the tendon class. RESULTS All experiments are conducted on a custom-made ankle MRI dataset comprising 76 subjects, divided into two sets: one for training (Dataset 1, trained and evaluated using leave-one-group-out cross-validation) and the other held out as unseen test data (Dataset 2). Using the first approach, the final AUC (Area Under the ROC Curve) scores of the RF and SVM classifiers on the test data (Dataset 2) are 0.992 and 0.987, respectively, with sensitivities of 0.904 and 0.966. Using the second approach (GCN-based node classification), the AUC score on the test set is 0.933 with a sensitivity of 0.899. CONCLUSIONS Our proposed pipeline demonstrates the efficacy of superpixel generation as a coarse segmentation step preceding the final tendon segmentation.
Whether using RF- or SVM-based superpixel classification or GCN-based classification for tendon segmentation, our system consistently achieves strong AUC scores, particularly with the non-graph-based approach. Given the limited dataset, the graph-based method did not perform as well as the non-graph-based superpixel classifications; nevertheless, the results provide valuable insight into how well the models distinguish tendon from non-tendon and open up opportunities for further exploration and improvement.
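The two-stage idea in this abstract (coarse superpixels, then per-superpixel classification projected back to pixels) can be sketched in a few lines. This is an illustration only, not the authors' pipeline: a simple intensity threshold stands in for the trained RF/SVM classifier, and mean intensity stands in for their richer superpixel features.

```python
import numpy as np

def superpixel_features(image, sp_labels):
    """Mean intensity per superpixel (a minimal stand-in for the richer
    texture/shape features a real pipeline would extract)."""
    n_sp = sp_labels.max() + 1
    sums = np.bincount(sp_labels.ravel(), weights=image.ravel(), minlength=n_sp)
    counts = np.bincount(sp_labels.ravel(), minlength=n_sp)
    return sums / counts

def project_to_mask(sp_labels, sp_is_tendon):
    """Broadcast superpixel-level class decisions back to a pixel mask."""
    return sp_is_tendon[sp_labels]

# Toy 4x4 "MRI" slice with two superpixels: dark region (id 0) vs bright (id 1).
image = np.array([[0.1, 0.1, 0.9, 0.9],
                  [0.1, 0.1, 0.9, 0.9],
                  [0.1, 0.1, 0.9, 0.9],
                  [0.1, 0.1, 0.9, 0.9]])
sp_labels = (image > 0.5).astype(int)
feats = superpixel_features(image, sp_labels)
sp_is_tendon = feats > 0.5          # threshold stands in for a trained RF/SVM
mask = project_to_mask(sp_labels, sp_is_tendon)
```

The same `project_to_mask` step applies unchanged when the per-superpixel decision comes from a GCN node classifier instead.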
Affiliation(s)
- Zakia Khatun
- Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, Salerno, Italy; Institute of Biomedical and Neural Engineering, Department of Engineering, Reykjavik University, Reykjavik, Iceland.
- Halldór Jónsson
- Department of Orthopaedics, Landspitali University Hospital, Reykjavik, Iceland
- Mariella Tsirilaki
- Department of Radiology, Landspitali University Hospital, Reykjavik, Iceland
- Nicola Maffulli
- Department of Trauma and Orthopaedic Surgery, Faculty of Medicine and Psychology, University Hospital Sant'Andrea, University La Sapienza, Rome, Italy; School of Pharmacy and Bioengineering, Faculty of Medicine, Keele University, ST4 7QB Stoke on Trent, England; Queen Mary University of London, Barts and the London School of Medicine and Dentistry, Centre for Sports and Exercise Medicine, Mile End Hospital, London, England
- Francesco Oliva
- Department of Human Sciences and Promotion of the Quality of Life, San Raffaele Roma Open University, Rome, Italy
- Pauline Daval
- Biomedical Department, École Polytechnique Universitaire d'Aix-Marseille, Marseille, France
- Francesco Tortorella
- Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, Salerno, Italy
- Paolo Gargiulo
- Institute of Biomedical and Neural Engineering, Department of Engineering, Reykjavik University, Reykjavik, Iceland; Department of Science, Landspitali University Hospital, Reykjavik, Iceland
2
Wang X, Cui W, Wu H, Huo Y, Xu X. Hybrid-feature based spherical quasi-conformal registration for AD-induced hippocampal surface morphological changes. Comput Methods Programs Biomed 2024;256:108372. [PMID: 39178503] [DOI: 10.1016/j.cmpb.2024.108372]
Abstract
BACKGROUND AND OBJECTIVE Establishing accurate one-to-one morphological correspondence between different hippocampal surfaces is a solid foundation for analyzing AD-induced hippocampal morphological changes. However, owing to the large variations between hippocampal surfaces, existing registration methods either fail to match local and overall morphological features accurately or do not preserve bijectivity during parametric mapping. This study therefore proposes a hybrid-feature based spherical quasi-conformal registration (HSQR) method that effectively maintains the diffeomorphic property while meeting hybrid-feature matching constraints in the spherical parameter domain. METHODS The HSQR algorithm consists primarily of hippocampal surface hybrid feature extraction and spherical quasi-conformal registration. First, hybrid features for a comprehensive morphological description of the hippocampal surface were established, including essential anatomical features (landmarks) and mean curvature (intensity) features, to ensure accurate alignment of surface morphology. Second, spherical parameterization was applied to genus-0 closed surfaces such as the hippocampus, maximizing preservation of the original local surface morphology through its area-preserving property. Third, a novel spherical quasi-conformal registration algorithm that can handle large deformations was established. It transforms the 3D spherical parameter domain into a 2D planar parameter domain using iterative local stereographic projection to improve the efficiency of the registration algorithm. Subsequently, by controlling the Beltrami coefficient, the hybrid morphological features can be aligned while ensuring bijectivity before and after registration.
RESULTS Using a cohort from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database comprising 161 amyloid-β (Aβ) positive patients with Alzheimer's disease (AD), 234 Aβ-positive individuals with mild cognitive impairment (MCI), and 266 Aβ-negative cognitively unimpaired (CU) individuals, our experiments indicated that the HSQR-based whole bilateral hippocampal atrophy features showed stronger statistical power for group morphological differences, with q-values of 0.0453 (left hippocampus) and 0.0401 (right hippocampus) for CU vs. MCI, and 0.0282 (left) and 0.0421 (right) for AD vs. MCI. CONCLUSIONS Our registration algorithm may provide a solid foundation for accurately quantifying hippocampal surface morphological changes for the differential diagnosis and tracking of AD.
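The stereographic projection that underlies mapping a spherical parameter domain to the plane, as in the registration step described above, has a simple closed form. This is a minimal sketch of that standard map, not the authors' iterative local scheme:

```python
import numpy as np

def stereographic(p):
    """Project a point on the unit sphere (minus the north pole (0,0,1))
    onto the plane z = 0."""
    x, y, z = p
    return np.array([x / (1.0 - z), y / (1.0 - z)])

def stereographic_inv(q):
    """Inverse map: a point (u, v) in the plane back to the unit sphere."""
    u, v = q
    d = 1.0 + u**2 + v**2
    return np.array([2 * u / d, 2 * v / d, (u**2 + v**2 - 1.0) / d])
```

Because the map is conformal and bijective away from the pole, quasi-conformal energies and Beltrami coefficients computed in the plane transfer back to the sphere, which is what makes this reduction to 2D attractive for efficiency.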
Affiliation(s)
- Xiangying Wang
- First College of Clinical Medicine, Shandong University of Traditional Chinese Medicine, Jinan, China
- Wenqiang Cui
- Department of Neurology, Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, China
- Hongyun Wu
- Department of Neurology, Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, China
- Yongjun Huo
- Department of Radiology, Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, China
- Xiangqing Xu
- Department of Neurology, Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, China.
3
Li J, Yu Z, Du Z, Zhu L, Shen HT. A Comprehensive Survey on Source-Free Domain Adaptation. IEEE Trans Pattern Anal Mach Intell 2024;46:5743-5762. [PMID: 38416606] [DOI: 10.1109/tpami.2024.3370978]
Abstract
Over the past decade, domain adaptation has become a widely studied branch of transfer learning that aims to improve performance on target domains by leveraging knowledge from a source domain. Conventional domain adaptation methods often assume simultaneous access to both source and target domain data, which may not be feasible in real-world scenarios due to privacy and confidentiality concerns. As a result, research on Source-Free Domain Adaptation (SFDA), which adapts to the target domain using only the source-trained model and unlabeled target data, has drawn growing attention in recent years. Despite the rapid growth of SFDA work, there has been no timely and comprehensive survey of the field. To fill this gap, we provide a comprehensive survey of recent advances in SFDA and organize them into a unified categorization scheme based on the framework of transfer learning. Instead of presenting each approach independently, we modularize the components of each method to more clearly illustrate their relationships and mechanisms in light of each method's composite properties. Furthermore, we compare the results of more than 30 representative SFDA methods on three popular classification benchmarks, namely Office-31, Office-Home, and VisDA, to explore the effectiveness of various technical routes and the combination effects among them. Additionally, we briefly introduce the applications of SFDA and related fields. Drawing on our analysis of the challenges confronting SFDA, we offer insights into future research directions and potential settings.
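A core ingredient in many of the SFDA methods surveyed here is an information-maximization objective (popularized by SHOT) that makes target predictions individually confident yet globally diverse, using no source data at all. The following is a hedged numpy sketch of that objective, not any specific method's implementation:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def info_max_loss(logits, eps=1e-12):
    """Mean per-sample entropy (encourages confident predictions) minus the
    entropy of the mean prediction (encourages class diversity).
    Lower is better: confident AND balanced across classes."""
    p = softmax(logits)
    ent = -(p * np.log(p + eps)).sum(axis=1).mean()
    p_bar = p.mean(axis=0)
    div = -(p_bar * np.log(p_bar + eps)).sum()
    return ent - div
```

In a real adaptation loop this loss would be minimized over the target-domain batches to update the feature extractor while the source-trained classifier head is kept fixed.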
4
Liu X, Qu L, Xie Z, Zhao J, Shi Y, Song Z. Towards more precise automatic analysis: a systematic review of deep learning-based multi-organ segmentation. Biomed Eng Online 2024;23:52. [PMID: 38851691] [PMCID: PMC11162022] [DOI: 10.1186/s12938-024-01238-8]
Abstract
Accurate segmentation of multiple organs in the head, neck, chest, and abdomen from medical images is an essential step in computer-aided diagnosis, surgical navigation, and radiation therapy. In the past few years, with a data-driven feature extraction approach and end-to-end training, automatic deep learning-based multi-organ segmentation methods have far outperformed traditional methods and become a new research topic. This review systematically summarizes the latest research in this field. We searched Google Scholar for papers published from January 1, 2016 to December 31, 2023, using keywords "multi-organ segmentation" and "deep learning", resulting in 327 papers. We followed the PRISMA guidelines for paper selection, and 195 studies were deemed to be within the scope of this review. We summarized the two main aspects involved in multi-organ segmentation: datasets and methods. Regarding datasets, we provided an overview of existing public datasets and conducted an in-depth analysis. Concerning methods, we categorized existing approaches into three major classes: fully supervised, weakly supervised and semi-supervised, based on whether they require complete label information. We summarized the achievements of these methods in terms of segmentation accuracy. In the discussion and conclusion section, we outlined and summarized the current trends in multi-organ segmentation.
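Segmentation accuracy in this literature is most often reported as the per-organ Dice score. A minimal reference implementation for multi-organ label maps (class 0 is assumed to be background):

```python
import numpy as np

def dice_per_organ(pred, gt, n_classes):
    """Dice score for each foreground class of a multi-organ label map.
    An organ absent from both prediction and ground truth scores 1.0."""
    scores = []
    for c in range(1, n_classes):              # skip class 0 (background)
        p, g = (pred == c), (gt == c)
        denom = p.sum() + g.sum()
        scores.append(2.0 * (p & g).sum() / denom if denom else 1.0)
    return np.array(scores)
```

Reporting the score per organ rather than averaged over all voxels matters in this setting, because small organs contribute few voxels and would otherwise be drowned out by large ones.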
Affiliation(s)
- Xiaoyu Liu
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Linhao Qu
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Ziyue Xie
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Jiayue Zhao
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Yonghong Shi
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China.
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China.
- Zhijian Song
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, 138 Yixueyuan Road, Shanghai, 200032, People's Republic of China.
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China.
5
Wang D, Han C, Zhang Z, Zhai T, Lin H, Yang B, Cui Y, Lin Y, Zhao Z, Zhao L, Liang C, Zeng A, Pan D, Chen X, Shi Z, Liu Z. FedDUS: Lung tumor segmentation on CT images through federated semi-supervised with dynamic update strategy. Comput Methods Programs Biomed 2024;249:108141. [PMID: 38574423] [DOI: 10.1016/j.cmpb.2024.108141]
Abstract
BACKGROUND AND OBJECTIVE Lung tumor annotation is a key upstream task for further diagnosis and prognosis. Although deep learning techniques have promoted automation of lung tumor segmentation, challenges still impede its application in clinical practice, such as the lack of prior annotation for model training and restrictions on data sharing among centers. METHODS In this paper, we use data from six centers to design a novel federated semi-supervised learning (FSSL) framework with dynamic model aggregation that improves segmentation performance for lung tumors. Specifically, we propose a dynamic update algorithm for model parameter aggregation in FSSL that takes advantage of both the quality and the quantity of client data. Moreover, to increase the accessibility of data in the federated learning (FL) network, we incorporate the FAIR data principles, which previous federated methods have not addressed. RESULTS The experimental results show that the segmentation performance of our model at the six centers is 0.9348, 0.8436, 0.8328, 0.7776, 0.8870 and 0.8460, respectively, which is superior to traditional deep learning methods and recent federated semi-supervised learning methods. CONCLUSION The experimental results demonstrate that our method is superior to existing FSSL methods. In addition, our proposed dynamic update strategy effectively utilizes the quality and quantity information of client data and is efficient for lung tumor segmentation. The source code is released at https://github.com/GDPHMediaLab/FedDUS.
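The dynamic aggregation idea, weighting each client's model by both data quantity and data quality, can be illustrated with a weighted parameter average. This is a hypothetical stand-in for the paper's actual update strategy; the `quality` scores here are assumed inputs, not something the sketch computes:

```python
import numpy as np

def aggregate(client_params, n_samples, quality):
    """Server-side aggregation: a weighted average of client parameters
    where each weight combines sample count (quantity) with a per-client
    quality score, normalized to sum to one."""
    w = np.asarray(n_samples, dtype=float) * np.asarray(quality, dtype=float)
    w = w / w.sum()
    return sum(wi * p for wi, p in zip(w, client_params))
```

With equal quality scores this reduces to plain FedAvg (weights proportional to sample counts); unequal scores let the server down-weight clients whose local models are unreliable.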
Affiliation(s)
- Dan Wang
- School of Computers, Guangdong University of Technology, Guangzhou 510006, China
- Chu Han
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China; Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Zhen Zhang
- Zhejiang Cancer Hospital, Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, Zhejiang, 310022, China
- Tiantian Zhai
- Department of Radiation Oncology, Cancer Hospital of Shantou University Medical College, Shantou, China
- Huan Lin
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
- Baoyao Yang
- School of Computers, Guangdong University of Technology, Guangzhou 510006, China
- Yanfen Cui
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China; Department of Radiology, Shanxi Province Cancer Hospital/Shanxi Hospital Affiliated to Cancer Hospital, Chinese Academy of Medical Sciences/Cancer Hospital Affiliated to Shanxi Medical University, Taiyuan, 030013, China
- Yinbing Lin
- Department of Radiation Oncology, Cancer Hospital of Shantou University Medical College, Shantou, China
- Zhihe Zhao
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Lujun Zhao
- Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin's Clinical Research Center for Cancer, Tianjin, 300060, China
- Changhong Liang
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
- An Zeng
- School of Computers, Guangdong University of Technology, Guangzhou 510006, China.
- Dan Pan
- School of Electronics and Information, Guangdong Polytechnic Normal University, Guangzhou 510665, China.
- Xin Chen
- Department of Radiology, Guangzhou First People's Hospital, School of Medicine, South China University of Technology, Guangzhou, Guangdong, China.
- Zhenwei Shi
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China; Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China.
- Zaiyi Liu
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China.
6
Yang T, Hu H, Li X, Meng Q, Lu H, Huang Q. An efficient Fusion-Purification Network for Cervical pap-smear image classification. Comput Methods Programs Biomed 2024;251:108199. [PMID: 38728830] [DOI: 10.1016/j.cmpb.2024.108199]
Abstract
BACKGROUND AND OBJECTIVES In cervical cell diagnostics, autonomous screening technology constitutes the foundation of automated diagnostic systems. Numerous deep learning-based classification techniques have been successfully applied to cervical cell image analysis with favorable outcomes. Nevertheless, efficient discrimination of cervical cells remains challenging due to large intra-class and small inter-class variations. The key to this problem is capturing localized informative differences from cervical cell images and representing discriminative features efficiently; existing methods neglect global morphological information, resulting in inadequate feature representation. METHODS To address this limitation, we propose a novel cervical cell classification model that focuses on purified fusion information. Specifically, we first integrate detailed texture information and morphological structure features, a step we call cervical pathology information fusion. Second, to enhance the discrimination of cervical cell features and address the data redundancy and bias inherent after fusion, we design a cervical purification bottleneck module. The model strikes a balance between leveraging purified features and enabling high-efficiency discrimination. Furthermore, we present a new, more challenging cervical cell dataset: the Cervical Cytopathology Image Dataset (CCID). RESULTS Extensive experiments on two real-world datasets show that our proposed model outperforms state-of-the-art cervical cell classification models. CONCLUSIONS The results show that our method can effectively assist pathologists in accurately evaluating cervical smears.
Affiliation(s)
- Tianjin Yang
- College of Computer and Information, Hohai University, Nanjing, 211100, PR China.
- Hexuan Hu
- College of Computer and Information, Hohai University, Nanjing, 211100, PR China.
- Xing Li
- College of information Science and Technology & College of Artificial Intelligence, Nanjing Forestry University, Nanjing 210037, PR China.
- Qing Meng
- College of Computer and Information, Hohai University, Nanjing, 211100, PR China.
- Hao Lu
- College of Computer and Information, Hohai University, Nanjing, 211100, PR China.
- Qian Huang
- College of Computer and Information, Hohai University, Nanjing, 211100, PR China.
7
Fang Y, Yap PT, Lin W, Zhu H, Liu M. Source-free unsupervised domain adaptation: A survey. Neural Netw 2024;174:106230. [PMID: 38490115] [PMCID: PMC11015964] [DOI: 10.1016/j.neunet.2024.106230]
Abstract
Unsupervised domain adaptation (UDA) via deep learning has attracted considerable attention for tackling domain-shift problems caused by distribution discrepancies across domains. Existing UDA approaches depend heavily on the accessibility of source domain data, which is usually limited in practical scenarios due to privacy protection, data storage and transmission costs, and computation burden. To tackle this issue, many source-free unsupervised domain adaptation (SFUDA) methods have been proposed recently, which transfer knowledge from a pre-trained source model to the unlabeled target domain with the source data inaccessible. A comprehensive review of these works on SFUDA is of great significance. In this paper, we provide a timely and systematic literature review of existing SFUDA approaches from a technical perspective. Specifically, we categorize current SFUDA studies into two groups, i.e., white-box SFUDA and black-box SFUDA, and further divide them into finer subcategories based on the learning strategies they use. We also investigate the challenges of the methods in each subcategory, discuss the advantages and disadvantages of white-box and black-box SFUDA methods, review the commonly used benchmark datasets, and summarize popular techniques for improving the generalizability of models learned without source data. We finally discuss several promising future directions in this field.
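In the black-box setting described above, only the source model's output probabilities are available, so adaptation typically reduces to a distillation-style objective: train a target model against the source model's soft predictions. A minimal numpy sketch of that loss, not any specific surveyed method:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def kd_loss(student_logits, teacher_probs, eps=1e-12):
    """Cross-entropy of the target (student) model's predictions against the
    soft outputs of an inaccessible source (teacher) model -- the core
    objective in black-box SFUDA via knowledge distillation."""
    p = softmax(student_logits)
    return -(teacher_probs * np.log(p + eps)).sum(axis=1).mean()
```

Many black-box methods then refine the teacher's soft labels over training (e.g. by sharpening or filtering noisy pseudo-labels) rather than distilling them verbatim.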
Affiliation(s)
- Yuqi Fang
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States
- Weili Lin
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States
- Hongtu Zhu
- Department of Biostatistics and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States
- Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States.
8
Lin H, Zou J, Wang K, Feng Y, Xu C, Lyu J, Qin J. Dual-space high-frequency learning for transformer-based MRI super-resolution. Comput Methods Programs Biomed 2024;250:108165. [PMID: 38631131] [DOI: 10.1016/j.cmpb.2024.108165]
Abstract
BACKGROUND AND OBJECTIVE Magnetic resonance imaging (MRI) provides rich, detailed, high-contrast information about soft tissues, but MRI scanning is time-consuming. To accelerate MR imaging, a variety of Transformer-based single image super-resolution methods have been proposed in recent years, achieving promising results thanks to their superior capability of capturing long-range dependencies. Nevertheless, most existing works prioritize the design of transformer attention blocks to capture global information, while the local high-frequency details that are pivotal to faithful MRI restoration are neglected. METHODS In this work, we propose a high-frequency enhanced learning scheme to improve the awareness of high-frequency information in current Transformer-based MRI single image super-resolution methods. Specifically, we present two entirely plug-and-play modules designed to equip Transformer-based networks with the ability to recover high-frequency details from dual spaces: 1) in the feature space, we design a high-frequency block (Hi-Fe block), parallel to the Transformer-based attention layers, to extract rich high-frequency features; and 2) in the image intensity space, we tailor a high-frequency amplification module (HFA) to further refine the high-frequency details. By fully exploiting the merits of the two modules, our framework can recover abundant and diverse high-frequency information, rendering faithful super-resolved MRI results with fine details. RESULTS We integrated our modules with six Transformer-based models and conducted experiments across three datasets. The results indicate that our plug-and-play modules enhance the super-resolution performance of all foundational models to varying degrees, surpassing existing state-of-the-art single image super-resolution networks.
CONCLUSION Comprehensive comparisons of super-resolution images and high-frequency maps from various methods clearly demonstrate that our modules can restore high-frequency information, showing great potential for accelerated MRI reconstruction in clinical practice.
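The high-frequency map this abstract keeps referring to is, at its simplest, the residual left after removing low-pass content from an image. A toy numpy version, where a box blur stands in for whatever low-pass behavior a real Hi-Fe block would learn:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k mean filter with edge padding (a stand-in low-pass)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def high_frequency(img, k=3):
    """High-frequency map: the image minus its low-pass content, i.e. the
    edges and fine texture that super-resolution must restore."""
    return img - box_blur(img, k)
```

A flat region yields a zero map while edges light up, which is why comparing high-frequency maps (as in the conclusion above) is a sensitive way to judge how much fine detail a super-resolution method actually recovers.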
Affiliation(s)
- Haoneng Lin
- School of Nursing, The Hong Kong Polytechnic University, Hong Kong
- Jing Zou
- School of Nursing, The Hong Kong Polytechnic University, Hong Kong.
- Kang Wang
- School of Nursing, The Hong Kong Polytechnic University, Hong Kong
- Yidan Feng
- School of Nursing, The Hong Kong Polytechnic University, Hong Kong
- Cheng Xu
- School of Nursing, The Hong Kong Polytechnic University, Hong Kong
- Jun Lyu
- Brigham and Women's Hospital, Harvard Medical School, Boston, United States
- Jing Qin
- School of Nursing, The Hong Kong Polytechnic University, Hong Kong
9
Li S, Wang H, Meng Y, Zhang C, Song Z. Multi-organ segmentation: a progressive exploration of learning paradigms under scarce annotation. Phys Med Biol 2024;69:11TR01. [PMID: 38479023] [DOI: 10.1088/1361-6560/ad33b5]
Abstract
Precise delineation of multiple organs or abnormal regions in the human body from medical images plays an essential role in computer-aided diagnosis, surgical simulation, image-guided interventions, and especially radiotherapy treatment planning. It is therefore of great significance to explore automatic segmentation approaches, among which deep learning-based approaches have evolved rapidly and achieved remarkable progress in multi-organ segmentation. However, obtaining an appropriately sized and fine-grained annotated dataset of multiple organs is extremely hard and expensive. Such annotation scarcity limits the development of high-performance multi-organ segmentation models but has spurred many annotation-efficient learning paradigms. Among these, transfer learning leveraging external datasets, semi-supervised learning incorporating unannotated datasets, and partially-supervised learning integrating partially-labeled datasets have become the dominant ways to overcome this dilemma in multi-organ segmentation. We first review fully supervised methods, then present a comprehensive and systematic elaboration of the three aforementioned learning paradigms in the context of multi-organ segmentation from both technical and methodological perspectives, and finally summarize their challenges and future trends.
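Partially-supervised learning as described here must avoid penalizing predictions for organs that a given dataset never labeled. One common trick is a cross-entropy that counts only annotated classes; a simplified numpy sketch in which the `annotated` flags are an assumed per-dataset input:

```python
import numpy as np

def masked_ce(probs, labels, annotated, eps=1e-12):
    """Cross-entropy over organ classes, counting only samples whose true
    class is actually annotated in this partially-labeled dataset."""
    loss, n = 0.0, 0
    for i, y in enumerate(labels):
        if annotated[y]:                        # skip voxels whose organ
            loss -= np.log(probs[i, y] + eps)   # was never labeled here
            n += 1
    return loss / max(n, 1)
```

Real partially-supervised methods refine this further (e.g. merging unlabeled organs into a shared "background" or using marginal probabilities), but the masking principle is the same.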
Affiliation(s)
- Shiman Li
- Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, People's Republic of China
- Haoran Wang
- Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, People's Republic of China
- Yucong Meng
- Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, People's Republic of China
- Chenxi Zhang
- Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, People's Republic of China
- Zhijian Song
- Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai Key Lab of Medical Image Computing and Computer Assisted Intervention, Shanghai 200032, People's Republic of China
Collapse
|
10
|
Zuo Q, Li R, Shi B, Hong J, Zhu Y, Chen X, Wu Y, Guo J. U-shaped convolutional transformer GAN with multi-resolution consistency loss for restoring brain functional time-series and dementia diagnosis. Front Comput Neurosci 2024; 18:1387004. [PMID: 38694950 PMCID: PMC11061376 DOI: 10.3389/fncom.2024.1387004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2024] [Accepted: 04/02/2024] [Indexed: 05/04/2024] Open
Abstract
Introduction The blood oxygen level-dependent (BOLD) signal derived from functional neuroimaging is commonly used in brain network analysis and dementia diagnosis. Missing the BOLD signal may lead to bad performance and misinterpretation of findings when analyzing neurological disease. Few studies have focused on the restoration of brain functional time-series data. Methods In this paper, a novel U-shaped convolutional transformer GAN (UCT-GAN) model is proposed to restore the missing brain functional time-series data. The proposed model leverages the power of generative adversarial networks (GANs) while incorporating a U-shaped architecture to effectively capture hierarchical features in the restoration process. Besides, the multi-level temporal-correlated attention and the convolutional sampling in the transformer-based generator are devised to capture the global and local temporal features for the missing time series and associate their long-range relationship with the other brain regions. Furthermore, by introducing multi-resolution consistency loss, the proposed model can promote the learning of diverse temporal patterns and maintain consistency across different temporal resolutions, thus effectively restoring complex brain functional dynamics. Results We theoretically tested our model on the public Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, and our experiments demonstrate that the proposed model outperforms existing methods in terms of both quantitative metrics and qualitative assessments. The model's ability to preserve the underlying topological structure of the brain functional networks during restoration is a particularly notable achievement. Conclusion Overall, the proposed model offers a promising solution for restoring brain functional time-series and contributes to the advancement of neuroscience research by providing enhanced tools for disease analysis and interpretation.
Collapse
Affiliation(s)
- Qiankun Zuo
- Hubei Key Laboratory of Digital Finance Innovation, Hubei University of Economics, Wuhan, Hubei, China
- School of Information Engineering, Hubei University of Economics, Wuhan, Hubei, China
- Hubei Internet Finance Information Engineering Technology Research Center, Hubei University of Economics, Wuhan, Hubei, China
| | - Ruiheng Li
- Hubei Key Laboratory of Digital Finance Innovation, Hubei University of Economics, Wuhan, Hubei, China
- School of Information Engineering, Hubei University of Economics, Wuhan, Hubei, China
| | - Binghua Shi
- Hubei Key Laboratory of Digital Finance Innovation, Hubei University of Economics, Wuhan, Hubei, China
- School of Information Engineering, Hubei University of Economics, Wuhan, Hubei, China
| | - Jin Hong
- Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
| | - Yanfei Zhu
- School of Foreign Languages, Sun Yat-sen University, Guangzhou, China
| | - Xuhang Chen
- Faculty of Science and Technology, University of Macau, Taipa, Macao SAR, China
| | - Yixian Wu
- School of Mechanical Engineering, Beijing Institute of Petrochemical Technology, Beijing, China
| | - Jia Guo
- Hubei Key Laboratory of Digital Finance Innovation, Hubei University of Economics, Wuhan, Hubei, China
- School of Information Engineering, Hubei University of Economics, Wuhan, Hubei, China
- Hubei Internet Finance Information Engineering Technology Research Center, Hubei University of Economics, Wuhan, Hubei, China
| |
Collapse
|
11
|
Kumari S, Singh P. Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives. Comput Biol Med 2024; 170:107912. [PMID: 38219643 DOI: 10.1016/j.compbiomed.2023.107912] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2023] [Revised: 11/02/2023] [Accepted: 12/24/2023] [Indexed: 01/16/2024]
Abstract
Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily focus on supervised learning, assuming that the training and testing data are drawn from the same distribution. Unfortunately, this assumption may not always hold true in practice. To address these issues, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. In recent years, significant advancements have been made in UDA, resulting in a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the different tasks they perform. We also discuss the respective datasets used in the studies to assess the divergence between the different domains. Finally, we discuss emerging areas and provide insights and discussions on future research directions to conclude this survey.
Collapse
Affiliation(s)
- Suruchi Kumari
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India.
| | - Pravendra Singh
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India.
| |
Collapse
|
12
|
Hossain MSA, Gul S, Chowdhury MEH, Khan MS, Sumon MSI, Bhuiyan EH, Khandakar A, Hossain M, Sadique A, Al-Hashimi I, Ayari MA, Mahmud S, Alqahtani A. Deep Learning Framework for Liver Segmentation from T1-Weighted MRI Images. SENSORS (BASEL, SWITZERLAND) 2023; 23:8890. [PMID: 37960589 PMCID: PMC10650219 DOI: 10.3390/s23218890] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/23/2023] [Revised: 08/08/2023] [Accepted: 08/15/2023] [Indexed: 11/15/2023]
Abstract
The human liver exhibits variable characteristics and anatomical information, which is often ambiguous in radiological images. Machine learning can be of great assistance in automatically segmenting the liver in radiological images, which can be further processed for computer-aided diagnosis. Magnetic resonance imaging (MRI) is preferred by clinicians for liver pathology diagnosis over volumetric abdominal computerized tomography (CT) scans, due to their superior representation of soft tissues. The convenience of Hounsfield unit (HoU) based preprocessing in CT scans is not available in MRI, making automatic segmentation challenging for MR images. This study investigates multiple state-of-the-art segmentation networks for liver segmentation from volumetric MRI images. Here, T1-weighted (in-phase) scans are investigated using expert-labeled liver masks from a public dataset of 20 patients (647 MR slices) from the Combined Healthy Abdominal Organ Segmentation grant challenge (CHAOS). The reason for using T1-weighted images is that it demonstrates brighter fat content, thus providing enhanced images for the segmentation task. Twenty-four different state-of-the-art segmentation networks with varying depths of dense, residual, and inception encoder and decoder backbones were investigated for the task. A novel cascaded network is proposed to segment axial liver slices. The proposed framework outperforms existing approaches reported in the literature for the liver segmentation task (on the same test set) with a dice similarity coefficient (DSC) score and intersect over union (IoU) of 95.15% and 92.10%, respectively.
Collapse
Affiliation(s)
- Md. Sakib Abrar Hossain
- NSU Genome Research Institute (NGRI), North South University, Dhaka 1229, Bangladesh
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
| | - Sidra Gul
- Department of Computer Systems Engineering, University of Engineering and Technology Peshawar, Peshawar 25000, Pakistan
- Artificial Intelligence in Healthcare, IIPL, National Center of Artificial Intelligence, Peshawar 25000, Pakistan
| | | | | | | | - Enamul Haque Bhuiyan
- Center for Magnetic Resonance Research, University of Illinois Chicago, Chicago, IL 60607, USA
| | - Amith Khandakar
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
| | - Maqsud Hossain
- NSU Genome Research Institute (NGRI), North South University, Dhaka 1229, Bangladesh
| | - Abdus Sadique
- NSU Genome Research Institute (NGRI), North South University, Dhaka 1229, Bangladesh
| | | | | | - Sakib Mahmud
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
| | - Abdulrahman Alqahtani
- Department of Medical Equipment Technology, College of Applied, Medical Science, Majmaah University, Majmaah City 11952, Saudi Arabia
- Department of Biomedical Technology, College of Applied Medical Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
| |
Collapse
|
13
|
Dong J, Cheng G, Zhang Y, Peng C, Song Y, Tong R, Lin L, Chen YW. Tailored multi-organ segmentation with model adaptation and ensemble. Comput Biol Med 2023; 166:107467. [PMID: 37725849 DOI: 10.1016/j.compbiomed.2023.107467] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2023] [Revised: 08/10/2023] [Accepted: 09/04/2023] [Indexed: 09/21/2023]
Abstract
Multi-organ segmentation, which identifies and separates different organs in medical images, is a fundamental task in medical image analysis. Recently, the immense success of deep learning motivated its wide adoption in multi-organ segmentation tasks. However, due to expensive labor costs and expertise, the availability of multi-organ annotations is usually limited and hence poses a challenge in obtaining sufficient training data for deep learning-based methods. In this paper, we aim to address this issue by combining off-the-shelf single-organ segmentation models to develop a multi-organ segmentation model on the target dataset, which helps get rid of the dependence on annotated data for multi-organ segmentation. To this end, we propose a novel dual-stage method that consists of a Model Adaptation stage and a Model Ensemble stage. The first stage enhances the generalization of each off-the-shelf segmentation model on the target domain, while the second stage distills and integrates knowledge from multiple adapted single-organ segmentation models. Extensive experiments on four abdomen datasets demonstrate that our proposed method can effectively leverage off-the-shelf single-organ segmentation models to obtain a tailored model for multi-organ segmentation with high accuracy.
Collapse
Affiliation(s)
- Jiahua Dong
- College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
| | - Guohua Cheng
- College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
| | - Yue Zhang
- Center for Medical Imaging, Robotics, Analytic Computing & Learning (MIRACLE), Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, 215163, China; School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, 230026, China.
| | - Chengtao Peng
- Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, 230026, China
| | - Yu Song
- Graduate School of Information Science and Engineering, Ritsumeikan University, Shiga, 525-8577, Japan
| | - Ruofeng Tong
- College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
| | - Lanfen Lin
- College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
| | - Yen-Wei Chen
- Graduate School of Information Science and Engineering, Ritsumeikan University, Shiga, 525-8577, Japan
| |
Collapse
|
14
|
Viknesh CK, Kumar PN, Seetharaman R, Anitha D. Detection and Classification of Melanoma Skin Cancer Using Image Processing Technique. Diagnostics (Basel) 2023; 13:3313. [PMID: 37958209 PMCID: PMC10649387 DOI: 10.3390/diagnostics13213313] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2023] [Revised: 10/17/2023] [Accepted: 10/20/2023] [Indexed: 11/15/2023] Open
Abstract
Human skin cancer is the most common and potentially life-threatening form of cancer. Melanoma skin cancer, in particular, exhibits a high mortality rate. Early detection is crucial for effective treatment. Traditionally, melanoma is detected through painful and time-consuming biopsies. This research introduces a computer-aided detection technique for early melanoma diagnosis-sis. In this study, we propose two methods for detecting skin cancer and focus specifically on melanoma cancerous cells using image data. The first method employs convolutional neural networks, including AlexNet, LeNet, and VGG-16 models, and we integrate the model with the highest accuracy into web and mobile applications. We also investigate the relationship between model depth and performance with varying dataset sizes. The second method uses support vector machines with a default RBF kernel, using feature parameters to categorize images as benign, malignant, or normal after image processing. The SVM classifier achieved an 86.6% classification accuracy, while the CNN maintained a 91% accuracy rate after 100 compute epochs. The CNN model is deployed as a web and mobile application with the assistance of Django and Android Studio.
Collapse
Affiliation(s)
- Chandran Kaushik Viknesh
- Department of Electronics and Communication Engineering, College of Engineering Guindy Campus, Anna University, Chennai 600025, India; (P.N.K.); (R.S.)
| | - Palanisamy Nirmal Kumar
- Department of Electronics and Communication Engineering, College of Engineering Guindy Campus, Anna University, Chennai 600025, India; (P.N.K.); (R.S.)
| | - Ramasamy Seetharaman
- Department of Electronics and Communication Engineering, College of Engineering Guindy Campus, Anna University, Chennai 600025, India; (P.N.K.); (R.S.)
| | - Devasahayam Anitha
- Department of Science and Humanities, Karpagam Institute of Technology, Coimbatore 641105, India;
| |
Collapse
|
15
|
Wang R, Zhou Q, Zheng G. EDRL: Entropy-guided disentangled representation learning for unsupervised domain adaptation in semantic segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 240:107729. [PMID: 37531690 DOI: 10.1016/j.cmpb.2023.107729] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/09/2022] [Revised: 07/15/2023] [Accepted: 07/19/2023] [Indexed: 08/04/2023]
Abstract
BACKGROUND AND OBJECTIVE Deep learning-based approaches are excellent at learning from large amounts of data, but can be poor at generalizing the learned knowledge to testing datasets with domain shift, i.e., when there exists distribution discrepancy between the training dataset (source domain) and the testing dataset (target domain). In this paper, we investigate unsupervised domain adaptation (UDA) techniques to train a cross-domain segmentation method which is robust to domain shift, eliminating the requirement of any annotations on the target domain. METHODS To this end, we propose an Entropy-guided Disentangled Representation Learning, referred as EDRL, for UDA in semantic segmentation. Concretely, we synergistically integrate image alignment via disentangled representation learning with feature alignment via entropy-based adversarial learning into one network, which is trained end-to-end. We additionally introduce a dynamic feature selection mechanism via soft gating, which helps to further enhance the task-specific feature alignment. We validate the proposed method on two publicly available datasets: the CT-MR dataset and the multi-sequence cardiac MR (MS-CMR) dataset. RESULTS On both datasets, our method achieved better results than the state-of-the-art (SOTA) methods. Specifically, on the CT-MR dataset, our method achieved an average DSC of 84.8% when taking CT as the source domain and MR as the target domain, and an average DSC of 84.0% when taking MR as the source domain and CT as the target domain. CONCLUSIONS Results from comprehensive experiments demonstrate the efficacy of the proposed EDRL model for cross-domain medical image segmentation.
Collapse
Affiliation(s)
- Runze Wang
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800, Dongchuan Road, Shanghai, 200240, China
| | - Qin Zhou
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800, Dongchuan Road, Shanghai, 200240, China
| | - Guoyan Zheng
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, No. 800, Dongchuan Road, Shanghai, 200240, China.
| |
Collapse
|
16
|
Zhang Y, Hong J. Challenges of Deep Learning in Cancers. Technol Cancer Res Treat 2023; 22:15330338231173495. [PMID: 37113071 PMCID: PMC10150420 DOI: 10.1177/15330338231173495] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/29/2023] Open
Affiliation(s)
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, UK
| | - Jin Hong
- Brain Information and Human Factors Engineering Laboratory, Zhongshan Institute of Changchun University of Science and Technology, Zhongshan, China
| |
Collapse
|
17
|
Zuo Q, Lu L, Wang L, Zuo J, Ouyang T. Constructing brain functional network by Adversarial Temporal-Spatial Aligned Transformer for early AD analysis. Front Neurosci 2022; 16:1087176. [PMID: 36518529 PMCID: PMC9742604 DOI: 10.3389/fnins.2022.1087176] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Accepted: 11/10/2022] [Indexed: 09/19/2023] Open
Abstract
Introduction The brain functional network can describe the spontaneous activity of nerve cells and reveal the subtle abnormal changes associated with brain disease. It has been widely used for analyzing early Alzheimer's disease (AD) and exploring pathological mechanisms. However, the current methods of constructing functional connectivity networks from functional magnetic resonance imaging (fMRI) heavily depend on the software toolboxes, which may lead to errors in connection strength estimation and bad performance in disease analysis because of many subjective settings. Methods To solve this problem, in this paper, a novel Adversarial Temporal-Spatial Aligned Transformer (ATAT) model is proposed to automatically map 4D fMRI into functional connectivity network for early AD analysis. By incorporating the volume and location of anatomical brain regions, the region-guided feature learning network can roughly focus on local features for each brain region. Also, the spatial-temporal aligned transformer network is developed to adaptively adjust boundary features of adjacent regions and capture global functional connectivity patterns of distant regions. Furthermore, a multi-channel temporal discriminator is devised to distinguish the joint distributions of the multi-region time series from the generator and the real sample. Results Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) proved the effectiveness and superior performance of the proposed model in early AD prediction and progression analysis. Discussion To verify the reliability of the proposed model, the detected important ROIs are compared with clinical studies and show partial consistency. Furthermore, the most significant altered connectivity reflects the main characteristics associated with AD. 
Conclusion Generally, the proposed ATAT provides a new perspective in constructing functional connectivity networks and is able to evaluate the disease-related changing characteristics at different stages for neuroscience exploration and clinical disease analysis.
Collapse
Affiliation(s)
- Qiankun Zuo
- School of Information Engineering, Hubei University of Economics, Wuhan, China
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and the SIAT Branch, Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen, China
| | - Libin Lu
- School of Mathematics and Computer Science, Wuhan Polytechnic University, Wuhan, China
| | - Lin Wang
- CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and the SIAT Branch, Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen, China
- Guangdong-Hong Kong-Macau Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen, China
| | - Jiahui Zuo
- State Key Laboratory of Petroleum Resource and Prospecting, and Unconventional Petroleum Research Institute, China University of Petroleum, Beijing, China
| | - Tao Ouyang
- State Key Laboratory of Geomechanics and Geotechnical Engineering, Institute of Rock and Soil Mechanics, Chinese Academy of Sciences, Wuhan, China
| |
Collapse
|
18
|
Gao W, Xu C, Li G, Zhang Y, Bai N, Li M. Cervical Cell Image Classification-Based Knowledge Distillation. Biomimetics (Basel) 2022; 7:biomimetics7040195. [PMID: 36412723 PMCID: PMC9680356 DOI: 10.3390/biomimetics7040195] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2022] [Revised: 11/03/2022] [Accepted: 11/05/2022] [Indexed: 11/12/2022] Open
Abstract
Current deep-learning-based cervical cell classification methods suffer from parameter redundancy and poor model generalization performance, which creates challenges for the intelligent classification of cervical cytology smear images. In this paper, we establish a method for such classification that combines transfer learning and knowledge distillation. This new method not only transfers common features between different source domain data, but also realizes model-to-model knowledge transfer using the unnormalized probability output between models as knowledge. A multi-exit classification network is then introduced as the student network, where a global context module is embedded in each exit branch. A self-distillation method is then proposed to fuse contextual information; deep classifiers in the student network guide shallow classifiers to learn, and multiple classifier outputs are fused using an average integration strategy to form a classifier with strong generalization performance. The experimental results show that the developed method achieves good results using the SIPaKMeD dataset. The accuracy, sensitivity, specificity, and F-measure of the five classifications are 98.52%, 98.53%, 98.68%, 98.59%, respectively. The effectiveness of the method is further verified on a natural image dataset.
Collapse
Affiliation(s)
- Wenjian Gao
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
| | - Chuanyun Xu
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
- College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
- Correspondence: (C.X.); (G.L.)
| | - Gang Li
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
- Correspondence: (C.X.); (G.L.)
| | - Yang Zhang
- College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
| | - Nanlan Bai
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
| | - Mengwei Li
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 400054, China
| |
Collapse
|
19
|
Gomes R, Kamrowski C, Mohan PD, Senor C, Langlois J, Wildenberg J. Application of Deep Learning to IVC Filter Detection from CT Scans. Diagnostics (Basel) 2022; 12:diagnostics12102475. [PMID: 36292164 PMCID: PMC9600884 DOI: 10.3390/diagnostics12102475] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2022] [Revised: 10/05/2022] [Accepted: 10/10/2022] [Indexed: 11/16/2022] Open
Abstract
IVC filters (IVCF) perform an important function in select patients that have venous blood clots. However, they are usually intended to be temporary, and significant delay in removal can have negative health consequences for the patient. Currently, all Interventional Radiology (IR) practices are tasked with tracking patients in whom IVCF are placed. Due to their small size and location deep within the abdomen it is common for patients to forget that they have an IVCF. Therefore, there is a significant delay for a new healthcare provider to become aware of the presence of a filter. Patients may have an abdominopelvic CT scan for many reasons and, fortunately, IVCF are clearly visible on these scans. In this research a deep learning model capable of segmenting IVCF from CT scan slices along the axial plane is developed. The model achieved a Dice score of 0.82 for training over 372 CT scan slices. The segmentation model is then integrated with a prediction algorithm capable of flagging an entire CT scan as having IVCF. The prediction algorithm utilizing the segmentation model achieved a 92.22% accuracy at detecting IVCF in the scans.
Collapse
Affiliation(s)
- Rahul Gomes
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Correspondence: (R.G.); (J.W.)
| | - Connor Kamrowski
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
| | - Pavithra Devy Mohan
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
| | - Cameron Senor
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
| | - Jordan Langlois
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
| | - Joseph Wildenberg
- Interventional Radiology, Mayo Clinic Health System, Eau Claire, WI 54703, USA
- Correspondence: (R.G.); (J.W.)
| |
Collapse
|