1
Ma L, Yu S, Xu X, Moses Amadi S, Zhang J, Wang Z. Application of artificial intelligence in 3D printing physical organ models. Mater Today Bio 2023; 23:100792. [PMID: 37746667] [PMCID: PMC10511479] [DOI: 10.1016/j.mtbio.2023.100792]
Abstract
Artificial intelligence (AI) and 3D printing are poised to become technologies that profoundly impact humanity. 3D printing of patient-specific organ models is expected to replace animal carcasses, simulating the surgical environment for preoperative training and helping educate patients so that effective treatment plans can be proposed. Because of its manufacturing complexity, 3D printing is still used only on a small scale in clinical practice, and it faces problems such as the low resolution of acquired MRI/CT images, long production times, and insufficient realism. AI has been used effectively in 3D printing as a powerful problem-solving tool. This paper introduces 3D-printed organ models, focusing on how AI can be applied to their manufacture. Finally, the potential applications of AI to 3D-printed organ models are discussed. Given the synergy between AI and 3D printing, which will benefit organ model manufacturing and facilitate clinical preoperative training, the use of AI in 3D-printed organ model making is expected to become a reality.
Affiliation(s)
- Liang Ma
- College of Materials Science and Engineering, Zhejiang University of Technology, Hangzhou, 310000, China
- Zhejiang Provincial People’s Hospital, Hangzhou, Zhejiang, 310000, China
- Shijie Yu
- College of Materials Science and Engineering, Zhejiang University of Technology, Hangzhou, 310000, China
- Zhejiang Provincial People’s Hospital, Hangzhou, Zhejiang, 310000, China
- Xiaodong Xu
- College of Materials Science and Engineering, Zhejiang University of Technology, Hangzhou, 310000, China
- Zhejiang Provincial People’s Hospital, Hangzhou, Zhejiang, 310000, China
- Sidney Moses Amadi
- International Education College, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, 310000, China
- Jing Zhang
- College of Materials Science and Engineering, Zhejiang University of Technology, Hangzhou, 310000, China
- Zhifei Wang
- Zhejiang Provincial People’s Hospital, Hangzhou, Zhejiang, 310000, China
2
Cho K, Kim J, Kim KD, Park S, Kim J, Yun J, Ahn Y, Oh SY, Lee SM, Seo JB, Kim N. MuSiC-ViT: A multi-task Siamese convolutional vision transformer for differentiating change from no-change in follow-up chest radiographs. Med Image Anal 2023; 89:102894. [PMID: 37562256] [DOI: 10.1016/j.media.2023.102894]
Abstract
A major responsibility of radiologists in routine clinical practice is to read follow-up chest radiographs (CXRs) to identify changes in a patient's condition. Diagnosing meaningful changes in follow-up CXRs is challenging because radiologists must differentiate disease changes from natural or benign variations. Here, we propose a multi-task Siamese convolutional vision transformer (MuSiC-ViT) with an anatomy-matching module (AMM) that mimics the radiologist's cognitive process for differentiating change from no-change. MuSiC-ViT builds on the "CNNs meet vision transformers" design, which combines CNN and transformer architectures, and has three major components: a Siamese network architecture, an AMM, and multi-task learning. Because the input is a pair of CXRs, a Siamese network was adopted for the encoder. The AMM is an attention module that focuses on related regions in the CXR pairs. To mimic a radiologist's cognitive process, MuSiC-ViT was trained with multi-task learning on normal/abnormal classification, change/no-change classification, and anatomy matching. From the 406 K CXRs studied, 88 K change and 115 K no-change pairs were acquired for the training dataset. The internal validation dataset consisted of 1,620 pairs. To demonstrate the robustness of MuSiC-ViT, the results were verified on two additional validation datasets. MuSiC-ViT achieved accuracies and areas under the receiver operating characteristic curve of 0.728 and 0.797 on the internal validation dataset, 0.614 and 0.784 on the first external validation dataset, and 0.745 and 0.858 on a second, temporally separated validation dataset, respectively. All code is available at https://github.com/chokyungjin/MuSiC-ViT.
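To make the Siamese idea concrete: the same encoder weights process both radiographs of a pair, and a distance between the two embeddings drives the change/no-change decision. A minimal NumPy sketch, with a hypothetical linear encoder standing in for MuSiC-ViT's CNN/transformer backbone and made-up dimensions (this is not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared encoder weights: in a Siamese design the SAME parameters
# are applied to both images of the pair (sizes are hypothetical).
W = rng.standard_normal((64, 256)) * 0.1

def encode(x):
    """Shared encoder: one linear layer + ReLU, a stand-in for the backbone."""
    return np.maximum(W @ x, 0.0)

def change_score(x_baseline, x_followup):
    """Cosine distance between the two embeddings; higher = more change."""
    a, b = encode(x_baseline), encode(x_followup)
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    return 1.0 - cos

# Two flattened "radiographs" (random stand-ins)
x1 = rng.standard_normal(256)
x2 = rng.standard_normal(256)
score = change_score(x1, x2)
```

An identical pair scores near zero; a real system would threshold (or learn a classifier on) this distance, with the anatomy-matching and auxiliary tasks shaping the embedding during training.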
Affiliation(s)
- Kyungjin Cho
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, College of Medicine, University of Ulsan, Seoul, Republic of Korea
- Jeeyoung Kim
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, College of Medicine, University of Ulsan, Seoul, Republic of Korea
- Ki Duk Kim
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Seungju Park
- Department of Biomedical Engineering, College of Health Sciences, Korea University, Seoul, Republic of Korea
- Junsik Kim
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, College of Medicine, University of Ulsan, Seoul, Republic of Korea
- Jihye Yun
- Department of Radiology, Asan Medical Center/University of Ulsan College of Medicine, Seoul, Republic of Korea
- Yura Ahn
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Sang Young Oh
- Department of Radiology, Asan Medical Center/University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sang Min Lee
- Department of Radiology, University of Ulsan College of Medicine and Asan Medical Center, Seoul, Republic of Korea
- Joon Beom Seo
- Department of Radiology, Asan Medical Center/University of Ulsan College of Medicine, Seoul, Republic of Korea
- Namkug Kim
- Department of Convergence Medicine, Asan Medical Center/University of Ulsan College of Medicine, Seoul, Republic of Korea
3
Xie T, Wang Z, Li H, Wu P, Huang H, Zhang H, Alsaadi FE, Zeng N. Progressive attention integration-based multi-scale efficient network for medical imaging analysis with application to COVID-19 diagnosis. Comput Biol Med 2023; 159:106947. [PMID: 37099976] [PMCID: PMC10116157] [DOI: 10.1016/j.compbiomed.2023.106947]
Abstract
In this paper, a novel deep learning-based medical imaging analysis framework is developed to address the insufficient feature learning caused by imperfect imaging data. Named the multi-scale efficient network (MEN), the proposed method integrates different attention mechanisms to extract both detailed features and semantic information in a progressive learning manner. In particular, a fused-attention block is designed to extract fine-grained details from the input, where the squeeze-excitation (SE) attention mechanism makes the model focus on potential lesion areas. A multi-scale low information loss (MSLIL)-attention block, which adopts the efficient channel attention (ECA) mechanism, is proposed to compensate for potential global information loss and enhance the semantic correlations among features. The proposed MEN is comprehensively evaluated on two COVID-19 diagnostic tasks. Compared with other advanced deep learning models, the proposed method is competitive in accurate COVID-19 recognition, yielding best accuracies of 98.68% and 98.85% on the two tasks, respectively, and exhibits satisfactory generalization ability.
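The squeeze-excitation mechanism mentioned above can be sketched generically: pool each channel to a scalar ("squeeze"), pass the result through a small two-layer bottleneck ("excite"), and use the resulting gates to reweight the channels. A minimal NumPy illustration; the channel count, reduction ratio, and weights here are hypothetical, not the paper's configuration:

```python
import numpy as np

def squeeze_excitation(feat, w1, w2):
    """Squeeze-and-excitation over a (C, H, W) feature map.

    squeeze: global average pool per channel -> (C,)
    excite:  reduction FC + ReLU, then expansion FC + sigmoid -> channel gates
    scale:   reweight each channel of the input map by its gate.
    """
    z = feat.mean(axis=(1, 2))               # squeeze: (C,)
    s = np.maximum(w1 @ z, 0.0)              # reduction FC + ReLU: (C//r,)
    g = 1.0 / (1.0 + np.exp(-(w2 @ s)))      # expansion FC + sigmoid: (C,)
    return feat * g[:, None, None]           # scale channels

rng = np.random.default_rng(0)
C, r = 8, 2                                   # channels and reduction ratio (made up)
feat = rng.standard_normal((C, 6, 6))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
out = squeeze_excitation(feat, w1, w2)
```

Because the gates lie in (0, 1), the block can only attenuate channels, which is how it steers the network toward informative regions such as potential lesion areas.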
Affiliation(s)
- Tingyi Xie
- School of Opto-electronic and Communication Engineering, Xiamen University of Technology, Xiamen 361024, China
- Zidong Wang
- Department of Computer Science, Brunel University London, Uxbridge UB8 3PH, UK
- Han Li
- Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China
- Peishu Wu
- Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China
- Huixiang Huang
- School of Opto-electronic and Communication Engineering, Xiamen University of Technology, Xiamen 361024, China
- Hongyi Zhang
- School of Opto-electronic and Communication Engineering, Xiamen University of Technology, Xiamen 361024, China
- Fuad E Alsaadi
- Communication Systems and Networks Research Group, Department of Electrical and Computer Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah, Saudi Arabia
- Nianyin Zeng
- Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China
4
Sun H, Ren G, Teng X, Song L, Li K, Yang J, Hu X, Zhan Y, Wan SBN, Wong MFE, Chan KK, Tsang HCH, Xu L, Wu TC, Kong FMS, Wang YXJ, Qin J, Chan WCL, Ying M, Cai J. Artificial intelligence-assisted multistrategy image enhancement of chest X-rays for COVID-19 classification. Quant Imaging Med Surg 2023; 13:394-416. [PMID: 36620146] [PMCID: PMC9816729] [DOI: 10.21037/qims-22-610]
Abstract
Background: The coronavirus disease 2019 (COVID-19) pandemic led to a dramatic increase in the number of patients with pneumonia worldwide. In this study, we aimed to develop an AI-assisted multistrategy image enhancement technique for chest X-ray (CXR) images to improve the accuracy of COVID-19 classification.
Methods: Our classification strategy consisted of 3 parts. First, an improved U-Net model with a variational encoder segmented the lung region in CXR images processed by histogram equalization. Second, a residual net (ResNet) model with multidilated-rate convolution layers was used to suppress the bone signals in the 217 lung-only CXR images, yielding enhanced CXR images containing only soft-tissue information; 80% of the available data were allocated for training and validation, and the remaining 20% were used for testing. Third, a neural network model with a residual cascade was used for super-resolution reconstruction of the low-resolution bone-suppressed CXR images, with training and testing data of 1,200 and 100 CXR images, respectively. To evaluate the new strategy, improved visual geometry group (VGG)-16 and ResNet-18 models were used for the COVID-19 classification task on 2,767 CXR images. The accuracy obtained with the multistrategy enhanced CXR images was verified through comparative experiments against various other enhancement images. For quantitative verification, 8-fold cross-validation was performed on the bone suppression model, and the CXR images obtained by the improved method were used to train the 2 classification models.
Results: Compared with other methods, the CXR images obtained with the proposed model performed better on the metrics of peak signal-to-noise ratio and root mean square error. The super-resolution bone-suppressed CXR images were also anatomically close to the real CXR images. Compared with the initial CXR images, the classification accuracies on the internal and external testing data increased by 5.09% and 12.81%, respectively, for the VGG-16 model, and by 3.51% and 18.20%, respectively, for the ResNet-18 model. These results were better than those of the single-enhancement, double-enhancement, and no-enhancement CXR images.
Conclusions: The multistrategy enhanced CXR images help classify COVID-19 more accurately than the other existing methods.
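The histogram-equalization preprocessing in the first step is a standard technique: intensities are mapped through the normalized cumulative histogram so the output spreads over the full dynamic range. A minimal NumPy version (image size and intensity range here are hypothetical, not the study's data):

```python
import numpy as np

def histogram_equalize(img, levels=256):
    """Histogram equalization for a grayscale image of ints in [0, levels-1].

    Builds a lookup table from the normalized cumulative histogram and
    applies it, stretching a low-contrast image over the full range.
    """
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                      # count at first occupied bin
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min + 1e-12) * (levels - 1))
    lut = np.clip(lut, 0, levels - 1)
    return lut.astype(np.uint8)[img]

rng = np.random.default_rng(0)
# Hypothetical low-contrast "radiograph": values squeezed into [100, 140)
img = rng.integers(100, 140, size=(64, 64)).astype(np.int64)
eq = histogram_equalize(img)
```

After equalization the darkest occupied intensity maps to 0 and the brightest to 255, which is why this is a common first step before segmentation.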
Affiliation(s)
- Hongfei Sun
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China; School of Automation, Northwestern Polytechnical University, Xi’an, China
- Ge Ren
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Xinzhi Teng
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Liming Song
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Kang Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jianhua Yang
- School of Automation, Northwestern Polytechnical University, Xi’an, China
- Xiaofei Hu
- Department of Radiology, Southwest Hospital, Third Military Medical University (Army Medical University), Chongqing, China
- Yuefu Zhan
- Department of Radiology, Hainan Women and Children’s Medical Center, Hainan, China
- Shiu Bun Nelson Wan
- Department of Radiology, Pamela Youde Nethersole Eastern Hospital, Hong Kong, China
- Man Fung Esther Wong
- Department of Radiology, Pamela Youde Nethersole Eastern Hospital, Hong Kong, China
- King Kwong Chan
- Department of Radiology and Imaging, Queen Elizabeth Hospital, Hong Kong, China
- Lu Xu
- Department of Radiology and Imaging, Queen Elizabeth Hospital, Hong Kong, China
- Tak Chiu Wu
- Department of Medicine, Queen Elizabeth Hospital, Hong Kong, China
- Yi Xiang J. Wang
- Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong, China
- Jing Qin
- School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Wing Chi Lawrence Chan
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Michael Ying
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
5
COVID-19 Classification from Chest X-Ray Images: A Framework of Deep Explainable Artificial Intelligence. Comput Intell Neurosci 2022; 2022:4254631. [PMID: 35845911] [PMCID: PMC9284325] [DOI: 10.1155/2022/4254631]
Abstract
COVID-19 detection and classification using chest X-ray images is a current hot research topic in medical image analysis. To halt the spread of COVID-19, it is critical to identify the infection as soon as possible. Due to time constraints and the required radiologist expertise, manually diagnosing this infection from chest X-ray images is a difficult and time-consuming process. Artificial intelligence techniques have had a significant impact on medical image analysis and have introduced several techniques for COVID-19 diagnosis. Deep learning and explainable AI are particularly popular among AI techniques for COVID-19 detection and classification. In this work, we propose a deep learning and explainable AI technique for the diagnosis and classification of COVID-19 using chest X-ray images. Initially, a hybrid contrast enhancement technique is proposed and applied to the original images, which are later utilized for training two modified deep learning models. Deep transfer learning is used to train the pretrained modified models, which are then employed for feature extraction. Features of both deep models are fused using improved canonical correlation analysis and further optimized using a hybrid algorithm named Whale-Elephant Herding, which selects the best features for classification with an extreme learning machine (ELM). Moreover, the modified deep models are utilized for Grad-CAM visualization. Experiments were conducted on three publicly available datasets, achieving accuracies of 99.1%, 98.2%, and 96.7%, respectively. An ablation study showed that the proposed accuracy is better than that of the other methods.
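The extreme learning machine used for the final classification trains in closed form: hidden-layer weights are random and fixed, and only the output weights are solved by least squares, so there is no iterative backpropagation. A generic NumPy sketch on toy two-class data standing in for the fused deep features (hidden size and data are hypothetical):

```python
import numpy as np

def train_elm(X, y, n_hidden=64, seed=0):
    """Extreme learning machine: random fixed hidden layer, then a
    single closed-form least-squares solve for the output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # fixed random input weights
    b = rng.standard_normal(n_hidden)                 # fixed random biases
    H = np.tanh(X @ W + b)                            # random hidden features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # output weights, one solve
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy separable data: two clusters standing in for the fused feature vectors
rng = np.random.default_rng(1)
X = np.vstack([rng.standard_normal((50, 4)) - 2.0,
               rng.standard_normal((50, 4)) + 2.0])
y = np.array([-1.0] * 50 + [1.0] * 50)
W, b, beta = train_elm(X, y)
acc = np.mean(np.sign(predict_elm(X, W, b, beta)) == np.sign(y))
```

The single linear solve is what makes ELMs attractive as the final stage after an expensive deep feature extractor.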
6
WMR-DepthwiseNet: A Wavelet Multi-Resolution Depthwise Separable Convolutional Neural Network for COVID-19 Diagnosis. Diagnostics (Basel) 2022; 12:765. [PMID: 35328318] [PMCID: PMC8947526] [DOI: 10.3390/diagnostics12030765]
Abstract
Timely discovery of COVID-19 could aid in formulating a suitable treatment plan for disease mitigation and containment decisions. The widely used COVID-19 test requires a standardized procedure and has low sensitivity; computed tomography and chest X-ray are other methods utilized by numerous studies for detecting COVID-19. In this article, we propose a depthwise separable convolution network with a wavelet multiresolution analysis module (WMR-DepthwiseNet) that robustly learns details both spatial-wise and channel-wise for COVID-19 identification from a limited radiograph dataset, which is critical given the rapid growth of COVID-19. The model uses an effective strategy to prevent the loss of spatial details, a prevalent issue in traditional convolutional neural networks, and its depthwise separable connectivity framework ensures reusability of feature maps by directly connecting each layer to all subsequent layers, allowing feature representations to be extracted from small datasets. We evaluated the proposed model on public-domain datasets of confirmed COVID-19 cases and other pneumonia illnesses. The proposed method achieves 98.63% accuracy, 98.46% sensitivity, 97.99% specificity, and 98.69% precision on the chest X-ray dataset, and 96.83% accuracy, 97.78% sensitivity, 96.22% specificity, and 97.02% precision on the computed tomography dataset. According to our experiments, the model achieves state-of-the-art accuracy with only a few training cases available, which is useful for COVID-19 screening. This paradigm is expected to contribute significantly to the battle against COVID-19 and other life-threatening diseases.
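A depthwise separable convolution, the building block named in the title, factorizes a standard convolution into a per-channel spatial filter followed by a 1x1 pointwise channel mix, sharply cutting the parameter count. A generic NumPy sketch using naive loops for clarity (channel counts and sizes are hypothetical, not the paper's architecture):

```python
import numpy as np

def depthwise_separable_params(c_in, c_out, k):
    """Parameter counts: standard conv vs. depthwise separable conv.

    standard:  k*k*c_in*c_out
    separable: k*k*c_in (depthwise) + c_in*c_out (1x1 pointwise)
    """
    return k * k * c_in * c_out, k * k * c_in + c_in * c_out

def depthwise_separable_conv(x, dw, pw):
    """Valid-mode depthwise separable conv on a (C, H, W) input.
    dw: (C, k, k), one spatial filter per channel; pw: (C_out, C) pointwise mix."""
    c, h, w = x.shape
    k = dw.shape[1]
    out = np.zeros((c, h - k + 1, w - k + 1))
    for ch in range(c):                         # depthwise: filter each channel alone
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                out[ch, i, j] = np.sum(x[ch, i:i+k, j:j+k] * dw[ch])
    return np.einsum('oc,chw->ohw', pw, out)    # pointwise: 1x1 channel mixing

std, sep = depthwise_separable_params(c_in=32, c_out=64, k=3)   # 18432 vs. 2336

rng = np.random.default_rng(0)
y = depthwise_separable_conv(rng.standard_normal((3, 5, 5)),
                             rng.standard_normal((3, 3, 3)),
                             rng.standard_normal((4, 3)))
```

The roughly 8x parameter reduction in this example is why depthwise separable designs suit the small-dataset regime the abstract targets.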