1. Zhu E, Feng H, Chen L, Lai Y, Chai S. MP-Net: A Multi-Center Privacy-Preserving Network for Medical Image Segmentation. IEEE Transactions on Medical Imaging 2024; 43:2718-2729. [PMID: 38478456] [DOI: 10.1109/tmi.2024.3377248]
Abstract
In this paper, we present the Multi-Center Privacy-Preserving Network (MP-Net), a novel framework designed for secure medical image segmentation in multi-center collaborations. Our methodology offers a new approach to multi-center collaborative learning that reduces the volume of data transmission and enhances data privacy protection. Unlike federated learning, which requires transmitting model data between the central server and local servers in each round, our method requires only a single transfer of encrypted data. The proposed MP-Net is a three-layer model consisting of encryption, segmentation, and decryption networks. We encrypt the image data into ciphertext using an encryption network and introduce an improved U-Net for image ciphertext segmentation. Finally, the segmentation mask is obtained through a decryption network. This architecture enables ciphertext-based image segmentation through computable image encryption. We evaluate the effectiveness of our approach on three datasets, including two cardiac MRI datasets and a CTPA dataset. Our results demonstrate that MP-Net can securely utilize data from multiple centers to establish a more robust and information-rich segmentation model.
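As a toy illustration of the encrypt → segment → decrypt flow (not the paper's actual networks, which are all learned): if the "encryption" is a secret pixel permutation and the "segmentation" acts pointwise, then segmenting the ciphertext and decrypting the result recovers the same mask as segmenting the plaintext directly. A minimal sketch with made-up data:

```python
import random

def make_permutation(n, seed):
    """Secret key: a random permutation of pixel indices."""
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def encrypt(image, perm):
    """Scramble pixel positions so the ciphertext hides spatial structure."""
    return [image[p] for p in perm]

def segment(pixels, threshold=0.5):
    """Pointwise 'segmentation': label each pixel foreground (1) or background (0)."""
    return [1 if v > threshold else 0 for v in pixels]

def decrypt(mask, perm):
    """Invert the permutation to map the ciphertext mask back to image space."""
    out = [0] * len(mask)
    for i, p in enumerate(perm):
        out[p] = mask[i]
    return out

image = [0.1, 0.9, 0.8, 0.2, 0.7, 0.05]
perm = make_permutation(len(image), seed=42)

ciphertext = encrypt(image, perm)          # the single transfer of encrypted data
mask = decrypt(segment(ciphertext), perm)  # segment the ciphertext, then decrypt

assert mask == segment(image)  # same mask as segmenting the plaintext
```

A learned encryption network need not be a permutation; this sketch only mirrors the pipeline's structure and its single-transfer property.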
2. Du H, Wang J, Liu M, Wang Y, Meijering E. SwinPA-Net: Swin Transformer-Based Multiscale Feature Pyramid Aggregation Network for Medical Image Segmentation. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:5355-5366. [PMID: 36121961] [DOI: 10.1109/tnnls.2022.3204090]
Abstract
The precise segmentation of medical images is one of the key challenges in pathology research and clinical practice. However, many medical image segmentation tasks involve large differences between different types of lesions, as well as similar shapes and colors between lesions and surrounding tissues, which seriously hinders improvements in segmentation accuracy. In this article, a novel method called the Swin Pyramid Aggregation Network (SwinPA-Net) is proposed, which combines two designed modules with the Swin Transformer to learn more powerful and robust features. The two modules, the dense multiplicative connection (DMC) module and the local pyramid attention (LPA) module, aggregate the multiscale context information of medical images. The DMC module cascades multiscale semantic feature information through dense multiplicative feature fusion, which minimizes the interference of shallow background noise to improve feature expression and addresses the problem of excessive variation in lesion size and type. The LPA module guides the network to focus on the region of interest by merging global and local attention, which helps address the similarity between lesions and surrounding tissues. The proposed network is evaluated on two public benchmark datasets for polyp segmentation and skin lesion segmentation, as well as a private clinical dataset for laparoscopic image segmentation. Compared with existing state-of-the-art (SOTA) methods, SwinPA-Net achieves the best performance, outperforming the second-best method on mean Dice score by 1.68%, 0.8%, and 1.2% on the three tasks, respectively.
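For reference, the mean Dice score used above to compare segmentation methods is 2|P ∩ G| / (|P| + |G|) for a predicted mask P and ground-truth mask G; a minimal sketch, with hypothetical binary masks:

```python
def dice_score(pred, truth):
    """Dice coefficient of two binary masks given as equal-length 0/1 lists."""
    assert len(pred) == len(truth)
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks agree perfectly.
    return 2.0 * intersection / total if total else 1.0

pred  = [1, 1, 0, 1, 0, 0, 1, 0]  # hypothetical prediction
truth = [1, 1, 1, 1, 0, 0, 0, 0]  # hypothetical ground truth
print(round(dice_score(pred, truth), 4))  # 0.75
```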
3. Chen R, Liu M, Chen W, Wang Y, Meijering E. Deep learning in mesoscale brain image analysis: A review. Comput Biol Med 2023; 167:107617. [PMID: 37918261] [DOI: 10.1016/j.compbiomed.2023.107617]
Abstract
Mesoscale microscopy images of the brain contain a wealth of information that can help us understand its working mechanisms. However, processing and analyzing these data is challenging because of the large size of the images, their high noise levels, the complex morphology of the brain from the cellular to the regional and anatomical levels, the inhomogeneous distribution of fluorescent labels in cells and tissues, and imaging artifacts. Owing to their impressive ability to extract relevant information from images, deep learning algorithms are widely applied to microscopy images of the brain to address these challenges, and they perform well in a wide range of microscopy image processing and analysis tasks. This article reviews the applications of deep learning algorithms in brain mesoscale microscopy image processing and analysis, including image synthesis, image segmentation, object detection, and neuron reconstruction and analysis. We also discuss the difficulties of each task and possible directions for further research.
Affiliation(s)
- Runze Chen
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Min Liu
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China; Research Institute of Hunan University in Chongqing, Chongqing, 401135, China
- Weixun Chen
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Yaonan Wang
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney 2052, New South Wales, Australia
4. Meng L, Shang X, Gao F, Li D. Comparative study of imaging staging and postoperative pathological staging of esophageal cancer based on smart medical big data. Mathematical Biosciences and Engineering 2023; 20:10514-10529. [PMID: 37322946] [DOI: 10.3934/mbe.2023464]
Abstract
Esophageal cancer has become a malignant tumor disease with high mortality worldwide. Many cases are mild at onset but severe by the late stage, so the optimal treatment window is missed; fewer than 20% of patients with late-stage esophageal cancer survive for five years. The main treatment is surgery, assisted by radiotherapy and chemotherapy. Radical resection is the most effective treatment, but an imaging examination method for esophageal cancer with good clinical performance has yet to be developed. This study compared imaging staging of esophageal cancer with postoperative pathological staging based on smart medical big data. MRI can be used to evaluate the depth of esophageal cancer invasion and may replace CT and EUS for accurate diagnosis of esophageal cancer. The methods used included smart medical big data, medical document preprocessing, principal component analysis and comparison of MRI images, and esophageal cancer pathological staging experiments. Kappa consistency tests were conducted to compare the agreement between MRI staging and pathological staging, and between two observers. Sensitivity, specificity, and accuracy were determined to evaluate the diagnostic effectiveness of accurate staging with 3.0T MRI. Results showed that 3.0T MR high-resolution imaging could show the histological stratification of the normal esophageal wall. The sensitivity, specificity, and accuracy of high-resolution imaging for staging and diagnosis of isolated esophageal cancer specimens reached 80%. At present, preoperative imaging methods for esophageal cancer, including CT and EUS, have obvious limitations; therefore, non-invasive preoperative imaging examination of esophageal cancer should be further explored.
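The agreement and diagnostic metrics named in the abstract are standard: Cohen's kappa, (p_o - p_e) / (1 - p_e), for agreement between two sets of stage labels, plus sensitivity, specificity, and accuracy from a 2x2 confusion matrix. A minimal sketch; all staging labels and confusion-matrix counts below are made up for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical labels."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
    ca, cb = Counter(rater_a), Counter(rater_b)
    labels = set(ca) | set(cb)
    p_e = sum(ca[l] * cb[l] for l in labels) / (n * n)       # chance agreement
    return (p_o - p_e) / (1 - p_e)

def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

mri  = ["T1", "T2", "T2", "T3", "T1", "T3", "T2", "T1"]  # hypothetical MRI staging
path = ["T1", "T2", "T3", "T3", "T1", "T3", "T2", "T2"]  # hypothetical pathological staging
kappa = cohens_kappa(mri, path)

m = diagnostic_metrics(tp=8, fp=2, fn=2, tn=8)  # hypothetical counts; each metric is 0.8
```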
Affiliation(s)
- Linglei Meng
- Department of CT/MR, People's Hospital of Xing Tai, Xing Tai 054000, Hebei, China
- XinFang Shang
- Department of CT/MR, People's Hospital of Xing Tai, Xing Tai 054000, Hebei, China
- FengXiao Gao
- Department of CT/MR, People's Hospital of Xing Tai, Xing Tai 054000, Hebei, China
- DeMao Li
- Department of Chest Surgery, People's Hospital of Xing Tai, Xing Tai 054000, Hebei, China
5. Liu Y, Wang G, Ascoli GA, Zhou J, Liu L. Neuron tracing from light microscopy images: automation, deep learning and bench testing. Bioinformatics 2022; 38:5329-5339. [PMID: 36303315] [PMCID: PMC9750132] [DOI: 10.1093/bioinformatics/btac712]
Abstract
MOTIVATION: Large-scale neuronal morphologies are essential to neuronal typing, connectivity characterization and brain modeling. It is widely accepted that automation is critical to the production of neuronal morphology data. Although survey papers on neuron tracing from light microscopy data have appeared over the last decade, the field has developed rapidly, and an updated review focusing on new methods and notable applications is needed. RESULTS: This review outlines neuron tracing in various scenarios with the goal of helping the community understand and navigate tools and resources. We describe the status, examples and accessibility of automatic neuron tracing. We survey recent advances in the increasingly popular deep-learning-enhanced methods. We highlight the semi-automatic methods for single-neuron tracing of mammalian whole brains, as well as the resulting datasets, each containing thousands of full neuron morphologies. Finally, we exemplify the commonly used datasets and metrics for neuron tracing bench testing.
Affiliation(s)
- Yufeng Liu
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Gaoyu Wang
- School of Computer Science and Engineering, Southeast University, Nanjing, China
- Giorgio A Ascoli
- Center for Neural Informatics, Structures, & Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Jiangning Zhou
- Institute of Brain Science, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Lijuan Liu
- School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
6. Zhang X, Du H, Song G, Bao F, Zhang Y, Wu W, Liu P. X-ray coronary centerline extraction based on C-UNet and a multifactor reconnection algorithm. Computer Methods and Programs in Biomedicine 2022; 226:107114. [PMID: 36116399] [DOI: 10.1016/j.cmpb.2022.107114]
Abstract
BACKGROUND AND OBJECTIVE: Accurate extraction of the coronary artery centerline is crucial for coronary artery reconstruction, stenosis or lesion detection, and surgical navigation. In clinical practice, the complex background of angiography, low signal-to-noise ratio, and complex vascular structure make coronary artery centerline extraction challenging. In this study, a direct centerline extraction method is proposed that automatically and accurately extracts vascular centerlines from X-ray coronary angiography images by combining deep learning with conventional methods. METHODS: The proposed method comprises two parts: a preliminary centerline extraction network based on U-Net with a residual network, called C-UNet, and a multifactor centerline reconnection algorithm based on the geometric characteristics of blood vessels. RESULTS: The qualitative and quantitative results demonstrate the effectiveness of the presented method. Three widely used evaluation indices were adopted to evaluate its performance: precision, recall, and F1 score. The experimental results show that the method can accurately extract coronary artery centerlines. CONCLUSIONS: The proposed method accurately extracts centerlines from X-ray coronary angiography images and improves both the accuracy and continuity of centerline extraction.
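The three evaluation indices can be sketched as follows; the example masks are hypothetical, and in practice centerline matching typically allows a small spatial tolerance rather than exact pixel equality:

```python
def precision_recall_f1(pred, truth):
    """Precision, recall and F1 from two equal-length binary pixel lists
    (1 = centerline pixel, 0 = background)."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

pred  = [1, 1, 1, 0, 0, 1, 0, 0]  # hypothetical extracted centerline
truth = [1, 1, 0, 0, 1, 1, 0, 0]  # hypothetical ground-truth centerline
p, r, f1 = precision_recall_f1(pred, truth)
```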
Affiliation(s)
- Xinyue Zhang
- School of Mathematics, Shandong University, Jinan, Shandong 250100, China
- Hongwei Du
- School of Mathematics, Shandong University, Jinan, Shandong 250100, China
- Gang Song
- School of Mathematics, Shandong University, Jinan, Shandong 250100, China
- Fangxun Bao
- School of Mathematics, Shandong University, Jinan, Shandong 250100, China
- Yunfeng Zhang
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, Shandong 250014, China
- Wei Wu
- Department of Neurology, Qi-Lu Hospital of Shandong University, Jinan, Shandong 250012, China
- Peide Liu
- School of Management Science and Engineering, Shandong University of Finance and Economics, Jinan 250014, China