1.
Wu J, Pan Y, Ye Q, Zhou J, Gou F. Intelligent cell images segmentation system: based on SDN and moving transformer. Sci Rep 2024; 14:24834. PMID: 39438641; PMCID: PMC11496840; DOI: 10.1038/s41598-024-76577-6. Received: 07/24/2024; Accepted: 10/15/2024. Open Access.
Abstract
Diagnosing diseases relies heavily on cell pathology images, but the extensive data in each image makes manual identification of relevant cells labor-intensive, especially in regions with a scarcity of qualified healthcare professionals. This study aims to develop an intelligent system to enhance the diagnostic accuracy of cytopathology images by addressing image noise and segmentation issues, thereby improving the efficiency of medical professionals in disease diagnosis. We introduce an innovative system combining a self-supervised algorithm, SDN, for image denoising with data enhancement, and image segmentation using the UPerMVit model. The UPerMVit model's novel attention mechanisms and modular architecture provide higher accuracy and lower computational complexity than traditional methods. The proposed system effectively reduces image noise and accurately segments annotated images, highlighting cellular structures relevant to medical staff. This enhances diagnostic accuracy and aids in the accurate identification of pathological cells. Our intelligent system offers a reliable tool for medical professionals, improving diagnostic efficiency and accuracy in cytopathologic image analysis. It provides significant technical support in regions lacking adequate medical expertise.
Affiliation(s)
- Jia Wu
- School of Computer Science and Technology, Jiangxi University of Chinese Medicine, Nanchang, 330004, Jiangxi, China
- Research Center for Artificial Intelligence, Monash University, Clayton, Melbourne, VIC, 3800, Australia
- Yao Pan
- School of Computer Science and Technology, Jiangxi University of Chinese Medicine, Nanchang, 330004, Jiangxi, China
- Qing Ye
- School of Computer Science and Technology, Jiangxi University of Chinese Medicine, Nanchang, 330004, Jiangxi, China
- Jiangxi Provincial Key Laboratory of Chinese Medicine Artificial Intelligence, Nanchang, 330004, Jiangxi, China
- Jing Zhou
- Hunan University of Medicine General Hospital, Huaihua, China
- Fangfang Gou
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, 550025, China
2.
Lee CH, Pan CT, Lee MC, Wang CH, Chang CY, Shiue YL. RDAG U-Net: An Advanced AI Model for Efficient and Accurate CT Scan Analysis of SARS-CoV-2 Pneumonia Lesions. Diagnostics (Basel) 2024; 14:2099. PMID: 39335778; PMCID: PMC11431783; DOI: 10.3390/diagnostics14182099. Received: 08/05/2024; Revised: 09/07/2024; Accepted: 09/18/2024. Open Access.
Abstract
Background/Objective: This study aims to utilize advanced artificial intelligence (AI) image recognition technologies to establish a robust system for identifying features in lung computed tomography (CT) scans, thereby detecting respiratory infections such as SARS-CoV-2 pneumonia. Specifically, the research focuses on developing a new model called Residual-Dense-Attention Gates U-Net (RDAG U-Net) to improve accuracy and efficiency in identification. Methods: This study employed Attention U-Net, Attention Res U-Net, and the newly developed RDAG U-Net model. RDAG U-Net extends the U-Net architecture by incorporating ResBlock and DenseBlock modules in the encoder to retain training parameters and reduce computation time. The training dataset includes 3,520 CT scans from an open database, augmented to 10,560 samples through data enhancement techniques. The research also focused on optimizing convolutional architectures, image preprocessing, interpolation methods, data management, and extensive fine-tuning of training parameters and neural network modules. Results: The RDAG U-Net model achieved an outstanding accuracy of 93.29% in identifying pulmonary lesions, with a 45% reduction in computation time compared to other models. The study demonstrated that RDAG U-Net performed stably during training and exhibited good generalization capability, as evaluated through loss values, model-predicted lesion annotations, and validation-epoch curves. Furthermore, using ITK-SNAP to convert 2D predictions into 3D lung and lesion segmentation models, the results delineated lesion contours, enhancing interpretability. Conclusion: The RDAG U-Net model showed significant improvements in accuracy and efficiency in the analysis of CT images for SARS-CoV-2 pneumonia, achieving a 93.29% recognition accuracy and reducing computation time by 45% compared to other models.
These results indicate the potential of the RDAG U-Net model in clinical applications, as it can accelerate the detection of pulmonary lesions and effectively enhance diagnostic accuracy. Additionally, the 2D and 3D visualization results allow physicians to understand lesions' morphology and distribution better, strengthening decision support capabilities and providing valuable medical diagnosis and treatment planning tools.
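The abstract above reports tripling the training set (3,520 CT scans to 10,560 samples) via data enhancement. The paper does not specify which transforms were used, so the flip and rotation below are illustrative assumptions; a minimal sketch of threefold geometric augmentation:

```python
import numpy as np

def augment_threefold(scans):
    """Triple a list of 2D CT slices with simple geometric transforms.

    The specific augmentations (horizontal flip, 90-degree rotation) are
    illustrative assumptions; the abstract only states that 3,520 scans
    were expanded to 10,560 samples.
    """
    augmented = []
    for scan in scans:
        augmented.append(scan)             # original slice
        augmented.append(np.fliplr(scan))  # horizontal flip
        augmented.append(np.rot90(scan))   # 90-degree rotation
    return augmented

# A toy batch of 4 "scans" grows to 12 samples, mirroring 3,520 -> 10,560.
batch = [np.arange(16, dtype=float).reshape(4, 4) for _ in range(4)]
print(len(augment_threefold(batch)))  # 12
```

In practice such augmentation is applied on the fly inside a training data loader rather than materialized as a list, but the sample-count arithmetic (3,520 × 3 = 10,560) is the same.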
Affiliation(s)
- Chih-Hui Lee
- Institute of Biomedical Sciences, National Sun Yat-sen University, Kaohsiung 804, Taiwan
- Cheng-Tang Pan
- Department of Mechanical and Electro-Mechanical Engineering, National Sun Yat-sen University, Kaohsiung 804, Taiwan
- Institute of Advanced Semiconductor Packaging and Testing, College of Semiconductor and Advanced Technology Research, National Sun Yat-sen University, Kaohsiung 804, Taiwan
- Institute of Precision Medicine, National Sun Yat-sen University, Kaohsiung 804, Taiwan
- Taiwan Instrument Research Institute, National Applied Research Laboratories, Hsinchu City 300, Taiwan
- Ming-Chan Lee
- Department of Electrical Engineering, National Kaohsiung University of Science and Technology, Kaohsiung 807, Taiwan
- Chih-Hsuan Wang
- Nephrology and Metabolism Division, Department of Internal Medicine, Kaohsiung Armed Forces General Hospital, Kaohsiung 802, Taiwan
- Institute of Medical Science and Technology, National Sun Yat-sen University, Kaohsiung 804, Taiwan
- Chun-Yung Chang
- Nephrology and Metabolism Division, Department of Internal Medicine, Kaohsiung Armed Forces General Hospital, Kaohsiung 802, Taiwan
- Institute of Medical Science and Technology, National Sun Yat-sen University, Kaohsiung 804, Taiwan
- Yow-Ling Shiue
- Institute of Biomedical Sciences, National Sun Yat-sen University, Kaohsiung 804, Taiwan
- Institute of Precision Medicine, National Sun Yat-sen University, Kaohsiung 804, Taiwan
3.
Gou F, Liu J, Xiao C, Wu J. Research on Artificial-Intelligence-Assisted Medicine: A Survey on Medical Artificial Intelligence. Diagnostics (Basel) 2024; 14:1472. PMID: 39061610; PMCID: PMC11275417; DOI: 10.3390/diagnostics14141472. Received: 06/25/2024; Revised: 07/04/2024; Accepted: 07/05/2024. Open Access.
Abstract
With improving economic conditions and rising living standards, people are paying ever more attention to their health. They are beginning to place their hopes on machines, expecting artificial intelligence (AI) to provide a more humanized medical environment and personalized services, thus greatly expanding the supply of care and bridging the gap between resource supply and demand. With the development of IoT technology, the arrival of the 5G and 6G communication era, and in particular the enhancement of computing capabilities, the development and application of AI-assisted healthcare have been further promoted. Research on the application of artificial intelligence in medical assistance is continuously deepening and expanding. AI holds immense economic value and has many potential applications for medical institutions, patients, and healthcare professionals. It can enhance medical efficiency, reduce healthcare costs, improve the quality of healthcare services, and provide a more intelligent and humanized service experience for healthcare professionals and patients. This study elaborates on the history and timeline of AI development in the medical field, the types of AI technologies in healthcare informatics, the applications of AI in medicine, and the opportunities and challenges AI faces in medicine. The combination of healthcare and artificial intelligence has a profound impact on human life, improving health levels and quality of life and changing lifestyles.
Affiliation(s)
- Fangfang Gou
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Jun Liu
- The Second People's Hospital of Huaihua, Huaihua 418000, China
- Chunwen Xiao
- The Second People's Hospital of Huaihua, Huaihua 418000, China
- Jia Wu
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Research Center for Artificial Intelligence, Monash University, Melbourne, Clayton, VIC 3800, Australia
4.
Zhan X, Long H, Gou F, Wu J. A semantic fidelity interpretable-assisted decision model for lung nodule classification. Int J Comput Assist Radiol Surg 2023. PMID: 38141069; DOI: 10.1007/s11548-023-03043-5. Received: 09/12/2023; Accepted: 11/24/2023.
Abstract
PURPOSE Early diagnosis of lung nodules is important for the treatment of lung cancer patients. Existing capsule-network-based assisted diagnostic models for lung nodule classification have shown promising prospects in terms of interpretability; however, they lack the ability to extract features robustly in shallow layers, which in turn limits their performance. We therefore propose a semantic fidelity capsule encoding and interpretable (SFCEI)-assisted decision model for lung nodule multi-class classification. METHODS First, we propose a multilevel receptive field feature encoding block to capture multi-scale features of lung nodules of different sizes. Second, we embed the multilevel receptive field feature encoding blocks in a residual code-and-decode attention layer to extract fine-grained contextual features. Integrating the multi-scale and contextual features forms semantic fidelity lung nodule attribute capsule representations, which consequently enhances the performance of the model. RESULTS We conducted comprehensive experiments on the LIDC-IDRI dataset to validate the superiority of the model. Stratified fivefold cross-validation shows that the accuracy (94.17%) of our method exceeds that of existing advanced approaches in the multi-class classification of malignancy scores for lung nodules. CONCLUSION The experiments confirm that the proposed method effectively captures the multi-scale and contextual features of lung nodules. It strengthens the feature-extraction capability of shallow structures in capsule networks, which in turn improves classification performance on malignancy scores. The interpretable model can support physicians' confidence in clinical decision-making.
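The stratified fivefold cross-validation reported above splits the data so that each fold preserves the class proportions of the malignancy scores. A minimal pure-Python sketch of that splitting step (function name and toy labels are illustrative; real pipelines typically use sklearn.model_selection.StratifiedKFold):

```python
from collections import defaultdict

def stratified_kfold_indices(labels, k=5):
    """Split sample indices into k folds while preserving class proportions.

    Illustrative sketch of the stratified fivefold protocol the abstract
    reports; the paper's actual tooling is not specified.
    """
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for i, idx in enumerate(indices):
            folds[i % k].append(idx)  # deal each class round-robin across folds
    return folds

# 10 benign (0) and 5 malignant (1) nodules: each fold gets 2 benign, 1 malignant.
labels = [0] * 10 + [1] * 5
folds = stratified_kfold_indices(labels, k=5)
print([len(f) for f in folds])  # [3, 3, 3, 3, 3]
```

Each fold then serves once as the validation set while the remaining four train the model, and the reported accuracy is the average over the five runs.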
Affiliation(s)
- Xiangbing Zhan
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, 550025, China
- Huiyun Long
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, 550025, China
- Fangfang Gou
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, 550025, China
- Jia Wu
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, 550025, China
- Research Center for Artificial Intelligence, Monash University, Melbourne, Clayton, VIC, 3800, Australia
5.
He Z, Liu J, Gou F, Wu J. An Innovative Solution Based on TSCA-ViT for Osteosarcoma Diagnosis in Resource-Limited Settings. Biomedicines 2023; 11:2740. PMID: 37893113; PMCID: PMC10604772; DOI: 10.3390/biomedicines11102740. Received: 08/21/2023; Revised: 09/24/2023; Accepted: 10/08/2023. Open Access.
Abstract
Identifying and managing osteosarcoma pose significant challenges, especially in resource-constrained developing nations. Advanced diagnostic methods involve isolating the nucleus from cancer cells for comprehensive analysis. However, two main challenges persist: mitigating image noise during the capture and transmission of cellular sections, and providing an efficient, accurate, and cost-effective solution for cell nucleus segmentation. To tackle these issues, we introduce the Twin-Self and Cross-Attention Vision Transformer (TSCA-ViT). This pioneering AI-based system employs a directed filtering algorithm for noise reduction and features an innovative transformer architecture with a twin attention mechanism for effective segmentation. The model also incorporates cross-attention-enabled skip connections to augment spatial information. We evaluated our method on a dataset of 1000 osteosarcoma pathology slide images from the Second People's Hospital of Huaihua, achieving a remarkable average precision of 97.7%. This performance surpasses traditional methodologies. Furthermore, TSCA-ViT offers enhanced computational efficiency owing to its fewer parameters, which results in reduced time and equipment costs. These findings underscore the superior efficacy and efficiency of TSCA-ViT, offering a promising approach for addressing the ongoing challenges in osteosarcoma diagnosis and treatment, particularly in settings with limited resources.
Affiliation(s)
- Zengxiao He
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Jun Liu
- The Second People's Hospital of Huaihua, Huaihua 418000, China
- Fangfang Gou
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Jia Wu
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Research Center for Artificial Intelligence, Monash University, Melbourne, Clayton, VIC 3800, Australia
6.
Rich JM, Bhardwaj LN, Shah A, Gangal K, Rapaka MS, Oberai AA, Fields BKK, Matcuk GR, Duddalwar VA. Deep learning image segmentation approaches for malignant bone lesions: a systematic review and meta-analysis. Front Radiol 2023; 3:1241651. PMID: 37614529; PMCID: PMC10442705; DOI: 10.3389/fradi.2023.1241651. Received: 06/17/2023; Accepted: 07/28/2023.
Abstract
Introduction Image segmentation is an important process for quantifying characteristics of malignant bone lesions, but this task is challenging and laborious for radiologists. Deep learning has shown promise in automating image segmentation in radiology, including for malignant bone lesions. The purpose of this review is to investigate deep-learning-based image segmentation methods for malignant bone lesions on computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography/CT (PET/CT). Method The literature search for deep-learning-based image segmentation of malignant bone lesions on CT and MRI was conducted in the PubMed, Embase, Web of Science, and Scopus electronic databases following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 41 original articles published between February 2017 and March 2023 were included in the review. Results The majority of papers studied MRI, followed by CT, PET/CT, and PET/MRI. There was a relatively even distribution of papers studying primary vs. secondary malignancies, as well as of papers utilizing 3-dimensional vs. 2-dimensional data. Many papers utilize custom-built models as a modification or variation of U-Net. The most common evaluation metric was the Dice similarity coefficient (DSC). Most models achieved a DSC above 0.6, with medians for all imaging modalities between 0.85 and 0.9. Discussion Deep learning methods show promising ability to segment malignant osseous lesions on CT, MRI, and PET/CT. Strategies commonly applied to improve performance include data augmentation, utilization of large public datasets, preprocessing (including denoising and cropping), and U-Net architecture modification. Future directions include overcoming dataset and annotation homogeneity and generalizing for clinical applicability.
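The DSC used throughout the studies above compares a predicted segmentation mask A against a ground-truth mask B as DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch in numpy (function name and toy masks are illustrative, not from the review):

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |pred AND truth| / (|pred| + |truth|).
    eps guards against division by zero when both masks are empty."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Two 4x4 masks with 4 positive pixels each, agreeing on 3 of them:
pred = np.zeros((4, 4)); pred[0, 0:4] = 1
truth = np.zeros((4, 4)); truth[0, 1:4] = 1; truth[1, 0] = 1
print(round(dice_coefficient(pred, truth), 3))  # 0.75
```

A DSC of 1.0 means perfect overlap and 0.0 means none, so the reported modality medians of 0.85 to 0.9 indicate close agreement with radiologist annotations.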
Affiliation(s)
- Joseph M. Rich
- Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Lokesh N. Bhardwaj
- Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Aman Shah
- Department of Applied Biostatistics and Epidemiology, University of Southern California, Los Angeles, CA, United States
- Krish Gangal
- Bridge UnderGrad Science Summer Research Program, Irvington High School, Fremont, CA, United States
- Mohitha S. Rapaka
- Department of Biology, University of Texas at Austin, Austin, TX, United States
- Assad A. Oberai
- Department of Aerospace and Mechanical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States
- Brandon K. K. Fields
- Department of Radiology & Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- George R. Matcuk
- Department of Radiology, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Vinay A. Duddalwar
- Department of Radiology, Keck School of Medicine of the University of Southern California, Los Angeles, CA, United States
- Department of Radiology, USC Radiomics Laboratory, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States