1
Jian Z, Song T, Zhang Z, Ai Z, Zhao H, Tang M, Liu K. An Improved Nested U-Net Network for Fluorescence In Situ Hybridization Cell Image Segmentation. Sensors (Basel) 2024; 24:928. [PMID: 38339644] [PMCID: PMC10857237] [DOI: 10.3390/s24030928]
Abstract
Fluorescence in situ hybridization (FISH) is a powerful cytogenetic method used to precisely detect and localize nucleic acid sequences. The technique has become an invaluable tool in medical diagnostics and has made significant contributions to biology and the life sciences. However, microscope FISH images contain large numbers of cells, and the nucleic acid signals within them are disorganized, so processing and analyzing the images manually is time-consuming and laborious: it easily tires the eyes and invites errors of judgment. In recent years, deep learning has made significant progress in medical imaging, notably through the successful introduction of attention mechanisms. An attention mechanism improves the understanding and interpretation of medical images by assigning different weights to different regions of the image, enabling the model to focus on the most informative features. To address these challenges in FISH image analysis, we combined medical imaging with deep learning to develop SEAM-Unet++, an automated cell-contour segmentation algorithm with an integrated attention mechanism. Its main advantage is improved accuracy of cell contours in FISH images; experiments demonstrate that the attention mechanism enables our method to segment mutually adherent cells more effectively.
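The channel-weighting idea the abstract describes can be sketched as a squeeze-and-excitation-style attention block. This is an illustrative NumPy sketch, not the authors' SEAM module; the shapes, reduction ratio, and random weights are assumptions for demonstration only.

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation-style channel attention.

    feature_map: (C, H, W) array; w1: (C//r, C) and w2: (C, C//r) are the
    bottleneck weights (r is the reduction ratio). Returns the reweighted
    feature map and the per-channel weights.
    """
    # Squeeze: global average pooling summarizes each channel as one scalar.
    squeezed = feature_map.mean(axis=(1, 2))            # (C,)
    # Excitation: a small bottleneck MLP, ReLU then sigmoid.
    hidden = np.maximum(w1 @ squeezed, 0.0)             # (C//r,)
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # (C,), each in (0, 1)
    # Scale: every channel is multiplied by its learned importance.
    return feature_map * weights[:, None, None], weights

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 4, 4))       # toy feature map, C=8, r=4
w1 = rng.standard_normal((2, 8)) * 0.1
w2 = rng.standard_normal((8, 2)) * 0.1
out, w = channel_attention(fmap, w1, w2)
```

In a U-Net++-style network such a block would sit after a convolutional stage, letting the decoder emphasize channels that respond to cell boundaries.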
Affiliation(s)
- Man Tang
- School of Electronic and Electrical Engineering, Wuhan Textile University, Wuhan 430200, China; (Z.J.); (T.S.); (Z.Z.); (Z.A.); (H.Z.)
- Kan Liu
- School of Electronic and Electrical Engineering, Wuhan Textile University, Wuhan 430200, China; (Z.J.); (T.S.); (Z.Z.); (Z.A.); (H.Z.)
2
Küçükçiloğlu Y, Şekeroğlu B, Adalı T, Şentürk N. Prediction of osteoporosis using MRI and CT scans with unimodal and multimodal deep-learning models. Diagn Interv Radiol 2024; 30:9-20. [PMID: 37309886] [PMCID: PMC10773174] [DOI: 10.4274/dir.2023.232116]
Abstract
PURPOSE Osteoporosis is the systemic degeneration of the human skeleton, with consequences ranging from reduced quality of life to mortality; predicting it therefore reduces risk and supports patients in taking precautions. Deep-learning models achieve highly accurate results across imaging modalities. The primary purpose of this research was to develop unimodal and multimodal deep-learning-based diagnostic models to predict bone mineral loss of the lumbar vertebrae using magnetic resonance (MR) and computed tomography (CT) imaging. METHODS Patients who received both lumbar dual-energy X-ray absorptiometry (DEXA) and MRI (n = 120) or CT (n = 100) examinations were included in this study. Unimodal and multimodal convolutional neural networks (CNNs) with dual blocks were proposed to predict osteoporosis from lumbar vertebrae MR and CT examinations in separate and combined datasets, with bone mineral density values obtained by DEXA used as reference data. The proposed models were compared with a plain CNN model and six benchmark pre-trained deep-learning models. RESULTS The proposed unimodal model obtained 96.54%, 98.84%, and 96.76% balanced accuracy for the MRI, CT, and combined datasets, respectively, while the multimodal model achieved 98.90% balanced accuracy in 5-fold cross-validation experiments. The models further obtained 95.68%-97.91% accuracy on a hold-out validation dataset, and comparative experiments demonstrated that the dual blocks provided more effective feature extraction for predicting osteoporosis. CONCLUSION Osteoporosis was accurately predicted by the proposed models using both MR and CT images, and the multimodal approach improved the prediction. With further prospective studies involving larger patient numbers, there may be an opportunity to implement these technologies in clinical practice.
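The multimodal idea in the abstract, extracting features per modality and then combining them for a single prediction, can be sketched as a toy late-fusion classifier. This is a minimal NumPy illustration with random, untrained kernels and weights, not the authors' dual-block CNN architecture.

```python
import numpy as np

def branch_features(image, kernel):
    """One modality branch: a single valid convolution + ReLU + global pooling."""
    h, w = image.shape
    kh, kw = kernel.shape
    conv = np.array([[np.sum(image[i:i + kh, j:j + kw] * kernel)
                      for j in range(w - kw + 1)]
                     for i in range(h - kh + 1)])
    return np.maximum(conv, 0.0).mean()     # one pooled feature per kernel

def multimodal_predict(mri, ct, mri_kernels, ct_kernels, weights, bias):
    """Concatenate per-modality features, then apply a linear classifier."""
    feats = np.array([branch_features(mri, k) for k in mri_kernels] +
                     [branch_features(ct, k) for k in ct_kernels])
    logit = feats @ weights + bias
    return 1.0 / (1.0 + np.exp(-logit))     # probability-like score in (0, 1)

rng = np.random.default_rng(1)
mri, ct = rng.random((8, 8)), rng.random((8, 8))   # toy image patches
mri_k = [rng.standard_normal((3, 3)) for _ in range(2)]
ct_k = [rng.standard_normal((3, 3)) for _ in range(2)]
p = multimodal_predict(mri, ct, mri_k, ct_k, rng.standard_normal(4), 0.0)
```

The design point the paper exploits is that each branch can specialize in its own modality's appearance before fusion, rather than forcing one network to normalize MRI and CT statistics jointly.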
Affiliation(s)
- Yasemin Küçükçiloğlu
- Near East University Faculty of Medicine, Department of Radiology, Nicosia, Cyprus
- Near East University, Center of Excellence, Tissue Engineering and Biomaterials Research Center, Nicosia, Cyprus
- Boran Şekeroğlu
- Near East University, Applied Artificial Intelligence Research Center, Nicosia, Cyprus
- Terin Adalı
- Near East University, Center of Excellence, Tissue Engineering and Biomaterials Research Center, Nicosia, Cyprus
- Near East University Faculty of Engineering, Department of Biomedical Engineering, Nicosia, Cyprus
- Sabancı University, Nanotechnology Research and Application Center, İstanbul, Turkey
- Niyazi Şentürk
- Near East University, Center of Excellence, Tissue Engineering and Biomaterials Research Center, Nicosia, Cyprus
- Near East University Faculty of Engineering, Department of Biomedical Engineering, Nicosia, Cyprus
3
Safari M, Fatemi A, Archambault L. MedFusionGAN: multimodal medical image fusion using an unsupervised deep generative adversarial network. BMC Med Imaging 2023; 23:203. [PMID: 38062431] [PMCID: PMC10704723] [DOI: 10.1186/s12880-023-01160-w]
Abstract
PURPOSE This study proposed an end-to-end unsupervised medical image fusion generative adversarial network, MedFusionGAN, to fuse computed tomography (CT) and high-resolution isotropic 3D T1-Gd magnetic resonance imaging (MRI) sequences into a single image combining CT bone structure with MRI soft-tissue contrast, in order to improve target delineation and reduce radiotherapy planning time. METHODS We used a publicly available multicenter medical dataset (GLIS-RT, 230 patients) from The Cancer Imaging Archive. To improve the model's generalization, we considered different imaging protocols and patients with various brain tumor types, including metastases. MedFusionGAN consists of one generator network and one discriminator network trained in an adversarial scenario; content, style, and L1 losses were used to train the generator to preserve the texture and structure information of the MRI and CT images. RESULTS MedFusionGAN successfully generated fused images with MRI soft-tissue and CT bone contrast. The results were compared quantitatively and qualitatively with seven traditional and eight deep learning (DL) state-of-the-art methods. Qualitatively, our method fused the source images at the highest spatial resolution without introducing image artifacts. We report nine quantitative metrics covering structural similarity, contrast, distortion level, and edge preservation in the fused images; our method outperformed both the traditional and DL methods on six of the nine metrics, and ranked second on three and two metrics against the traditional and DL methods, respectively. To compare soft-tissue contrast, intensity profiles along the tumor and tumor contours were evaluated for each fusion method; MedFusionGAN provided a more consistent intensity profile and better segmentation performance. CONCLUSIONS The proposed end-to-end unsupervised method successfully fused MRI and CT images. The fused image could improve the delineation of targets and organs at risk (OARs), an important aspect of radiotherapy treatment planning.
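The generator's training objective, content, style, and L1 terms measured against both source images, can be sketched as below. The feature extractor here is a fixed random projection standing in for a learned encoder, and the unit loss weights are assumptions; this is a structural sketch, not the MedFusionGAN implementation.

```python
import numpy as np

def gram(features):
    """Gram matrix of a (C, N) feature array; captures style/texture statistics."""
    return features @ features.T / features.shape[1]

def generator_loss(fused, mri, ct, feat, w_content=1.0, w_style=1.0, w_l1=1.0):
    """Weighted sum of content, style, and L1 terms against both sources.

    `feat` maps an image to a (C, N) feature array (here a fixed random
    projection standing in for a learned encoder).
    """
    f_fused, f_mri, f_ct = feat(fused), feat(mri), feat(ct)
    # Content: match deep features of both source images.
    content = np.mean((f_fused - f_mri) ** 2) + np.mean((f_fused - f_ct) ** 2)
    # Style: match second-order (Gram) feature statistics, i.e. texture.
    style = np.mean((gram(f_fused) - gram(f_mri)) ** 2) \
          + np.mean((gram(f_fused) - gram(f_ct)) ** 2)
    # L1: keep raw intensities close to both sources.
    l1 = np.mean(np.abs(fused - mri)) + np.mean(np.abs(fused - ct))
    return w_content * content + w_style * style + w_l1 * l1

rng = np.random.default_rng(2)
proj = rng.standard_normal((4, 16)) * 0.1
feat = lambda img: proj @ img.reshape(16, 4)   # toy 8x8 -> (4, 4) features
mri, ct = rng.random((8, 8)), rng.random((8, 8))
fused = 0.5 * (mri + ct)
loss = generator_loss(fused, mri, ct, feat)
```

Balancing the three terms is the design choice: L1 anchors intensities, content preserves structure, and the Gram-based style term is what pulls texture from both modalities into one image.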
Affiliation(s)
- Mojtaba Safari
- Département de Physique, de génie Physique et d'Optique, et Centre de Recherche sur le Cancer, Université Laval, Québec City, QC, Canada.
- Service de Physique Médicale et Radioprotection, Centre Intégré de Cancérologie, CHU de Québec - Université Laval et Centre de recherche du CHU de Québec, Québec City, QC, Canada.
- Ali Fatemi
- Department of Physics, Jackson State University, Jackson, MS, USA
- Department of Radiation Oncology, Gamma Knife Center, Merit Health Central, Jackson, MS, USA
- Louis Archambault
- Département de Physique, de génie Physique et d'Optique, et Centre de Recherche sur le Cancer, Université Laval, Québec City, QC, Canada
- Service de Physique Médicale et Radioprotection, Centre Intégré de Cancérologie, CHU de Québec - Université Laval et Centre de recherche du CHU de Québec, Québec City, QC, Canada
4
Haribabu M, Guruviah V. Enhanced multimodal medical image fusion based on Pythagorean fuzzy set: an innovative approach. Sci Rep 2023; 13:16726. [PMID: 37794125] [PMCID: PMC10550958] [DOI: 10.1038/s41598-023-43873-6]
Abstract
Medical image fusion combines multi-modality images into a single output image that carries more information and a better visual appearance, without vagueness or uncertainty, and is therefore well suited to diagnosis. This manuscript proposes a Pythagorean fuzzy set (PFS)-based medical image fusion method. In the first phase, a two-scale Gaussian filter decomposes the source images into base and detail layers. In the second phase, a spatial frequency (SF)-based fusion rule is applied to the detail layers to preserve edge-oriented detail, while the base-layer images are converted into Pythagorean fuzzy images (PFIs) using the optimum value obtained from Pythagorean fuzzy entropy (PFE). In the third phase, a blackness-and-whiteness-count fusion rule is applied to image blocks decomposed from the two PFIs. Finally, the enhanced fused image is obtained by defuzzifying and reconstructing the fused PFI blocks. The proposed method was evaluated on different diagnostic datasets and achieved better mean (M), standard deviation (SD), average gradient (AG), SF, modified spatial frequency (MSF), mutual information (MI), and fusion symmetry (FS) values than state-of-the-art methods. This advancement is important for healthcare and medical imaging, including enhanced diagnostics and treatment planning.
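The two-scale pipeline above (decomposition into base and detail layers, with an SF rule for the detail) can be sketched in NumPy. A box filter stands in for the Gaussian filter, the SF rule is applied to whole detail layers rather than blocks, and the base layers are simply averaged where the paper uses the Pythagorean fuzzy steps, so this is only a structural sketch under those simplifications.

```python
import numpy as np

def box_blur(img, radius=1):
    """Smoothing stand-in for the paper's two-scale Gaussian filter."""
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros(img.shape)
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (2 * radius + 1) ** 2

def spatial_frequency(block):
    """Row/column gradient energy: higher SF means more edge detail."""
    rf = np.mean(np.diff(block, axis=1) ** 2)
    cf = np.mean(np.diff(block, axis=0) ** 2)
    return np.sqrt(rf + cf)

def fuse(a, b):
    base_a, base_b = box_blur(a), box_blur(b)
    detail_a, detail_b = a - base_a, b - base_b
    # SF rule on the detail layers: keep whichever has more edge energy.
    if spatial_frequency(detail_a) >= spatial_frequency(detail_b):
        detail = detail_a
    else:
        detail = detail_b
    # Base layers averaged here; the paper instead fuses them in the
    # Pythagorean fuzzy domain (PFE-optimized fuzzification, count rules).
    return 0.5 * (base_a + base_b) + detail

a = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)   # checkerboard: edge-rich
b = np.full((8, 8), 0.5)                                 # flat: no detail
fused = fuse(a, b)
```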
Affiliation(s)
- Maruturi Haribabu
- School of Electronics Engineering, Vellore Institute of Technology, Chennai, India
- Velmathi Guruviah
- School of Electronics Engineering, Vellore Institute of Technology, Chennai, India.
5
Zhao Y, Guo Q, Zhang Y, Zheng J, Yang Y, Du X, Feng H, Zhang S. Application of Deep Learning for Prediction of Alzheimer's Disease in PET/MR Imaging. Bioengineering (Basel) 2023; 10:1120. [PMID: 37892850] [PMCID: PMC10604050] [DOI: 10.3390/bioengineering10101120]
Abstract
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that affects millions of people worldwide. Positron emission tomography/magnetic resonance (PET/MR) imaging is a promising technique that combines the advantages of PET and MR to provide both functional and structural information about the brain. Deep learning (DL) is a subfield of machine learning (ML) and artificial intelligence (AI) that develops algorithms and models inspired by the structure and function of the human brain's neural networks. DL has been applied to various aspects of PET/MR imaging in AD, such as image segmentation, image reconstruction, diagnosis and prediction, and visualization of pathological features. In this review, we introduce the basic concepts and types of DL algorithms, such as feedforward neural networks, convolutional neural networks, recurrent neural networks, and autoencoders. We then summarize the current applications and challenges of DL in PET/MR imaging in AD, and discuss future directions and opportunities for automated diagnosis, predictive modeling, and personalized medicine. We conclude that DL has great potential to improve the quality and efficiency of PET/MR imaging in AD and to provide new insights into the pathophysiology and treatment of this devastating disease.
Affiliation(s)
- Yan Zhao
- Department of Information Center, The First Affiliated Hospital, Dalian Medical University, Dalian 116011, China
- Qianrui Guo
- Department of Nuclear Medicine, Beijing Cancer Hospital, Beijing 100142, China
- Yukun Zhang
- Department of Radiology, The First Affiliated Hospital, Dalian Medical University, Dalian 116011, China
- Jia Zheng
- Department of Nuclear Medicine, The First Affiliated Hospital, Dalian Medical University, Dalian 116011, China
- Yang Yang
- Beijing United Imaging Research Institute of Intelligent Imaging, Beijing 100094, China
- Xuemei Du
- Department of Nuclear Medicine, The First Affiliated Hospital, Dalian Medical University, Dalian 116011, China
- Hongbo Feng
- Department of Nuclear Medicine, The First Affiliated Hospital, Dalian Medical University, Dalian 116011, China
- Shuo Zhang
- Department of Nuclear Medicine, The First Affiliated Hospital, Dalian Medical University, Dalian 116011, China
6
Lu X, Jiang M, Lin MH. Diagnostic Value of Convolutional Neural Network Algorithm and High-Sensitivity Cardiac Troponin I Detection Under Machine Learning in Myocardial Infarction. J Biomed Nanotechnol 2022. [DOI: 10.1166/jbn.2022.3474]
Abstract
Background: This study aimed to evaluate echocardiography based on an improved convolutional neural network (CNN) algorithm, combined with high-sensitivity cardiac troponin I (hs-cTnI) detection, for the diagnosis of acute myocardial infarction (AMI). Methods: Ninety AMI patients were recruited as the AMI group, and ninety healthy individuals undergoing concurrent physical examinations were chosen as the control (Ctrl) group. Improved CNN algorithm-based echocardiography combined with hs-cTnI detection was applied, and its diagnostic efficiency was evaluated. Results: The optimal dataset scale (ODS), optimal image scale (OIS), and average precision (AP) of the proposed algorithm were better than those of manual labeling, the Canny algorithm, and the structured edge (SE) algorithm (P < 0.05). The left ventricular ejection fraction (LVEF) of the AMI group was lower than that of the Ctrl group ((55.09 ± 2.78)% vs. (65.01 ± 3.19)%), the left ventricular end-diastolic dimension (LVEDD) was larger ((54.89 ± 6.56) mm vs. (45.98 ± 5.77) mm), and the cTnI level was higher ((2.90 ± 0.31) pg/L vs. (0.73 ± 0.42) pg/L) (P < 0.05). The diagnostic sensitivity (Sen, 91.89%), specificity (Spe, 81.25%), accuracy (Acc, 90.00%), and consistency (0.56) of echocardiography combined with hs-cTnI were superior to those of echocardiography or cTnI detection alone (P < 0.05).
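The reported Sen/Spe/Acc figures are standard confusion-matrix ratios. A minimal sketch, using hypothetical counts for a 90 + 90 cohort, since the underlying confusion matrix is not reported here:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sen = tp / (tp + fn)                    # detection rate among patients
    spe = tn / (tn + fp)                    # correct-rejection rate among controls
    acc = (tp + tn) / (tp + fn + tn + fp)   # overall fraction correct
    return sen, spe, acc

# Hypothetical counts (tp, fn, tn, fp), not the paper's actual data.
sen, spe, acc = diagnostic_metrics(tp=85, fn=5, tn=80, fp=10)
```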
7
Wang J, Zhu Q, Zhang S, Wen L, Wang L. Observation of Clinical Efficacy of Anisodamine and Chlorpromazine in the Treatment of Intractable Hiccup after Stroke. Biomed Res Int 2022; 2022:6563193. [PMID: 35915796] [PMCID: PMC9338746] [DOI: 10.1155/2022/6563193]
Abstract
Objective This study aimed to investigate the clinical efficacy of anisodamine combined with chlorpromazine for intractable hiccups after stroke. Methods 150 patients admitted to the Affiliated Hospital of Hebei University of Engineering from 2017 to 2021 were selected as study subjects, all of whom underwent computed tomography (CT) examination. During CT examination, an intelligent algorithm was used to segment the images: an unsupervised multilayer image threshold segmentation algorithm built on Kullback-Leibler (K-L) divergence and a modified particle swarm optimization (MPSO) algorithm. The patients were divided into three groups of 50. Patients in control group A took calcium, vitamin C, and vitamin B1 tablets orally; patients in control group B received an acupoint injection of anisodamine; and patients in observation group C received an acupoint injection of anisodamine combined with chlorpromazine. The therapeutic effect and patient satisfaction of the three groups were compared. Results Applying two-dimensional (2D) K-L divergence to multilayer segmentation helped obtain accurate images, and the MPSO algorithm reduced the computational complexity. The total efficiency of group C was 98%, versus 56% for group B and 22% for group A; the total efficiency and satisfaction rate of group C were significantly better than those of groups A and B (P < 0.05). Conclusion The combination of 2D K-L divergence and the MPSO algorithm improved the accuracy of multilayer image segmentation in CT imaging, and acupoint injection of anisodamine combined with chlorpromazine was more effective than anisodamine alone for intractable hiccups after stroke, with high safety and value for clinical promotion.
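The K-L-divergence thresholding step can be illustrated with its simplest relative: single-threshold minimum cross-entropy selection over a 1D histogram. The paper's method instead uses a 2D K-L criterion with multiple thresholds searched by MPSO, so this exhaustive 1D search is only a conceptual sketch.

```python
import numpy as np

def cross_entropy_threshold(image, levels=256):
    """Pick the threshold minimizing a cross-entropy (K-L-style) cost."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    g = np.arange(levels) + 0.5                 # bin centers; avoids log(0)
    best_t, best_cost = 1, np.inf
    for t in range(1, levels):
        lo, hi = hist[:t], hist[t:]
        if lo.sum() == 0 or hi.sum() == 0:      # one class empty: skip
            continue
        mu_lo = (g[:t] * lo).sum() / lo.sum()   # class mean below threshold
        mu_hi = (g[t:] * hi).sum() / hi.sum()   # class mean above threshold
        cost = -(g[:t] * lo).sum() * np.log(mu_lo) \
               - (g[t:] * hi).sum() * np.log(mu_hi)
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t

# A bimodal test "image": the chosen threshold should separate the two modes.
img = np.array([50] * 100 + [200] * 100)
t_hat = cross_entropy_threshold(img)
```

An exhaustive scan like this is quadratic-to-cubic in the number of thresholds for multilevel segmentation, which is exactly why the paper swaps it for a particle-swarm search.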
Affiliation(s)
- Jing Wang
- Department of Neurology, Affiliated Hospital of Hebei University of Engineering, Handan, 056002 Hebei, China
- Qinghua Zhu
- Department of Neurology, Affiliated Hospital of Hebei University of Engineering, Handan, 056002 Hebei, China
- Shuyan Zhang
- Department of Neurology, Affiliated Hospital of Hebei University of Engineering, Handan, 056002 Hebei, China
- Lisha Wen
- Department of Neurology, Affiliated Hospital of Hebei University of Engineering, Handan, 056002 Hebei, China
- Li Wang
- Department of Neurology, Affiliated Hospital of Hebei University of Engineering, Handan, 056002 Hebei, China
8
Fathi Y, Erfanian A. Decoding Bilateral Hindlimb Kinematics From Cat Spinal Signals Using Three-Dimensional Convolutional Neural Network. Front Neurosci 2022; 16:801818. [PMID: 35401098] [PMCID: PMC8990134] [DOI: 10.3389/fnins.2022.801818]
Abstract
To date, decoding limb kinematic information mostly relies on neural signals recorded from the peripheral nerve, dorsal root ganglia (DRG), ventral roots, spinal cord gray matter, and the sensorimotor cortex. In the current study, we demonstrated that the neural signals recorded from the lateral and dorsal columns within the spinal cord have the potential to decode hindlimb kinematics during locomotion. Experiments were conducted using intact cats. The cats were trained to walk on a moving belt in a hindlimb-only condition, while their forelimbs were kept on the front body of the treadmill. The bilateral hindlimb joint angles were decoded using local field potential signals recorded using a microelectrode array implanted in the dorsal and lateral columns of both the left and right sides of the cat spinal cord. The results show that contralateral hindlimb kinematics can be decoded as accurately as ipsilateral kinematics. Interestingly, hindlimb kinematics of both legs can be accurately decoded from the lateral columns within one side of the spinal cord during hindlimb-only locomotion. The results indicated that there was no significant difference between the decoding performances obtained using neural signals recorded from the dorsal and lateral columns. The results of the time-frequency analysis show that event-related synchronization (ERS) and event-related desynchronization (ERD) patterns in all frequency bands could reveal the dynamics of the neural signals during movement. The onset and offset of the movement can be clearly identified by the ERD/ERS patterns. The results of the mutual information (MI) analysis showed that the theta frequency band contained significantly more limb kinematics information than the other frequency bands. Moreover, the theta power increased with a higher locomotion speed.
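The frequency-band analysis described above (theta carrying the most kinematic information) rests on band-power features. Below is a minimal sketch of extracting theta- and gamma-band power from a synthetic LFP trace with a plain FFT periodogram; the sampling rate and the test signal are assumptions for illustration.

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Mean spectral power of `signal` within [low, high] Hz (FFT periodogram)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].mean()

fs = 1000                                       # Hz, assumed sampling rate
t = np.arange(0, 2, 1.0 / fs)                   # 2 s of synthetic "LFP"
lfp = np.sin(2 * np.pi * 6 * t) + 0.1 * np.sin(2 * np.pi * 40 * t)
theta = band_power(lfp, fs, 4, 8)               # dominant 6 Hz component
gamma = band_power(lfp, fs, 30, 50)             # weak 40 Hz component
```

Features like these, computed per channel and per sliding window, are the kind of input a kinematics decoder can regress joint angles from.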
Affiliation(s)
- Yaser Fathi
- Department of Biomedical Engineering, School of Electrical Engineering, Iran Neural Technology Research Centre, Iran University of Science and Technology, Tehran, Iran
- Abbas Erfanian
- Department of Biomedical Engineering, School of Electrical Engineering, Iran Neural Technology Research Centre, Iran University of Science and Technology, Tehran, Iran
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
- Correspondence: Abbas Erfanian