1
Kim MJ, Kim SH, Kim SM, Nam JH, Hwang YB, Lim YJ. The Advent of Domain Adaptation into Artificial Intelligence for Gastrointestinal Endoscopy and Medical Imaging. Diagnostics (Basel) 2023; 13:3023. PMID: 37835766; PMCID: PMC10572560; DOI: 10.3390/diagnostics13193023.
Abstract
Artificial intelligence (AI) is a subfield of computer science that aims to implement computer systems that perform tasks generally requiring human learning, reasoning, and perceptual abilities. AI is widely used in the medical field. The interpretation of medical images requires considerable effort, time, and skill. AI-aided interpretation, such as automated abnormal-lesion detection and image classification, is a promising area of AI. However, when images with different characteristics are acquired, depending on the manufacturer and imaging environment, a so-called domain shift problem occurs, in which the developed AI generalizes poorly. Domain adaptation is used to address this problem: it converts an image so that it becomes suitable for another domain, and it has shown promise in reducing the differences in appearance among images collected from different devices. Domain adaptation is therefore expected to improve the reading accuracy of AI for heterogeneous image distributions in gastrointestinal (GI) endoscopy and medical image analysis. In this paper, we review the history and basic characteristics of domain shift and domain adaptation, and we address their use in GI endoscopy and the broader medical field through published examples, perspectives, and future directions.
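As a concrete illustration of the appearance-level domain shift this review discusses (not a method from the paper itself), histogram matching is perhaps the simplest image-conversion technique: it remaps one scanner's intensity distribution onto another's. The function name and toy values below are illustrative only; this is a minimal numpy sketch, not the authors' pipeline.

```python
import numpy as np

def histogram_match(source, reference):
    """Remap source-image intensities so their distribution matches the reference.

    A crude, appearance-level form of domain adaptation: it reduces brightness
    and contrast differences between devices, but cannot fix structural shift.
    """
    _vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size      # quantile of each source value
    ref_cdf = np.cumsum(ref_counts) / reference.size   # quantile of each reference value
    matched = np.interp(src_cdf, ref_cdf, ref_vals)    # map quantiles across domains
    return matched[src_idx].reshape(source.shape)

# Toy example: a "dark scanner" image adapted toward a "bright scanner" reference.
rng = np.random.default_rng(0)
dark = rng.normal(50.0, 10.0, (64, 64))
bright = rng.normal(150.0, 10.0, (64, 64))
adapted = histogram_match(dark, bright)
```

After matching, the adapted image's intensity statistics track the reference domain, which is the basic goal any learned domain-adaptation method pursues with far more capacity.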
Affiliation(s)
- Min Ji Kim
- Division of Gastroenterology, Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea
- Sang Hoon Kim
- Division of Gastroenterology, Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea
- Suk Min Kim
- Department of Intelligent Systems and Robotics, College of Electrical & Computer Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- Ji Hyung Nam
- Division of Gastroenterology, Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea
- Young Bae Hwang
- Department of Intelligent Systems and Robotics, College of Electrical & Computer Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- Yun Jeong Lim
- Division of Gastroenterology, Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang 10326, Republic of Korea
2
Xu Z, Dai Y, Liu F, Chen W, Liu Y, Shi L, Liu S, Zhou Y. Swin MAE: Masked autoencoders for small datasets. Comput Biol Med 2023; 161:107037. PMID: 37230020; DOI: 10.1016/j.compbiomed.2023.107037.
Abstract
The development of deep learning models in medical image analysis is limited mainly by the lack of large, well-annotated datasets. Unsupervised learning does not require labels and is therefore well suited to medical image analysis problems; however, most unsupervised learning methods still need large datasets. To make unsupervised learning applicable to small datasets, we propose Swin MAE, a masked autoencoder with a Swin Transformer backbone. Even on a dataset of only a few thousand medical images, Swin MAE can learn useful semantic features purely from the images, without any pre-trained models, and in transfer learning on downstream tasks it equals or even slightly outperforms a supervised Swin Transformer trained on ImageNet. Compared with MAE, Swin MAE improved downstream-task performance roughly two-fold on BTCV and five-fold on our parotid dataset. The code is publicly available at https://github.com/Zian-Xu/Swin-MAE.
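The core idea behind any masked autoencoder, including Swin MAE, is the masking step: partition the image into patches and hide a large random fraction, so the model must reconstruct the missing content. The sketch below (illustrative only; function name, patch size, and 75% ratio are common MAE defaults, not taken from the paper) shows that step in plain numpy.

```python
import numpy as np

def random_patch_mask(image, patch, mask_ratio, seed=0):
    """Split a 2-D image into non-overlapping patches and zero out a random subset.

    Mimics the masking stage of a masked autoencoder (MAE): an encoder would see
    only the visible patches, and a decoder is trained to reconstruct the rest.
    """
    h, w = image.shape
    assert h % patch == 0 and w % patch == 0
    n_patches = (h // patch) * (w // patch)
    n_masked = int(n_patches * mask_ratio)

    rng = np.random.default_rng(seed)
    mask = np.zeros(n_patches, dtype=bool)
    mask[rng.choice(n_patches, n_masked, replace=False)] = True

    masked = image.copy()
    for idx in np.flatnonzero(mask):               # zero each masked patch
        r, c = divmod(idx, w // patch)
        masked[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0.0
    return masked, mask

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
masked_img, mask = random_patch_mask(img, patch=16, mask_ratio=0.75)
```

The reconstruction loss would then be computed only on the masked patches; Swin MAE's contribution is making this pretraining work with a hierarchical Swin backbone on datasets of only a few thousand images.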
Affiliation(s)
- Zi'an Xu
- Northeastern University, Shenyang, China
- Yin Dai
- Northeastern University, Shenyang, China
- Fayu Liu
- China Medical University, Shenyang, China
- Yue Liu
- Northeastern University, Shenyang, China
- Lifu Shi
- Liaoning Jiayin Medical Technology Co., China
- Sheng Liu
- China Medical University, Shenyang, China
3
Hasan MM, Islam MU, Sadeq MJ, Fung WK, Uddin J. Review on the Evaluation and Development of Artificial Intelligence for COVID-19 Containment. Sensors (Basel) 2023; 23:527. PMID: 36617124; PMCID: PMC9824505; DOI: 10.3390/s23010527.
Abstract
Artificial intelligence has significantly enhanced the research paradigm and spectrum, with demonstrated promise of continued real-world applicability. Artificial intelligence, the driving force of the current technological revolution, has been used on many frontiers, including education, security, gaming, finance, robotics, autonomous systems, entertainment, and, most importantly, the healthcare sector. With the rise of the COVID-19 pandemic, several prediction and detection methods using artificial intelligence have been employed to understand, forecast, handle, and curtail the ensuing threats. In this study, the most recent related publications, methodologies, and medical reports were investigated with the purpose of studying artificial intelligence's role in the pandemic. This study presents a comprehensive review of artificial intelligence with specific attention to the machine learning, deep learning, image processing, object detection, image segmentation, and few-shot learning studies utilized in tasks related to COVID-19. In particular, approaches based on genetic analysis, medical image analysis, clinical data analysis, sound analysis, biomedical data classification, socio-demographic data analysis, anomaly detection, health monitoring, personal protective equipment (PPE) observation, social control, and COVID-19 patients' mortality risk were examined as means of forecasting the threatening factors of COVID-19. This study demonstrates that artificial-intelligence-based algorithms integrated into Internet of Things wearable devices were effective and efficient in COVID-19 detection and produced actionable forecasting insights through wide usage. The results indicate that artificial intelligence is a promising arena of research that can be applied to disease prognosis, disease forecasting, drug discovery, and the development of the healthcare sector on a global scale. Artificial intelligence played a significant role in the fight against COVID-19, and the knowledge provided here could be highly beneficial for practitioners and researchers in the healthcare domain seeking to implement artificial-intelligence-based systems to curb the next pandemic or healthcare disaster.
Affiliation(s)
- Md. Mahadi Hasan
- Department of Computer Science and Engineering, Asian University of Bangladesh, Ashulia 1349, Bangladesh
- Muhammad Usama Islam
- School of Computing and Informatics, University of Louisiana at Lafayette, Lafayette, LA 70504, USA
- Muhammad Jafar Sadeq
- Department of Computer Science and Engineering, Asian University of Bangladesh, Ashulia 1349, Bangladesh
- Wai-Keung Fung
- Department of Applied Computing and Engineering, Cardiff School of Technologies, Cardiff Metropolitan University, Cardiff CF5 2YB, UK
- Jasim Uddin
- Department of Applied Computing and Engineering, Cardiff School of Technologies, Cardiff Metropolitan University, Cardiff CF5 2YB, UK
4
Han K, Wang J, Zou Y, Zhang Y, Zhou L, Yin Y. Association between emphysema and other pulmonary computed tomography patterns in COVID-19 pneumonia. J Med Virol 2023; 95:e28293. PMID: 36358023; PMCID: PMC9828029; DOI: 10.1002/jmv.28293.
Abstract
We evaluated the chest computed tomography (CT) findings of patients with coronavirus disease 2019 (COVID-19) on admission to hospital and then correlated the extent of CT pulmonary infiltrates with findings of emphysema, using emphysema as a marker of pneumonia grade. We applied open-source software (3D Slicer) to model the lungs and lesions of 66 retrospectively included patients with COVID-19, divided into two groups: (A) 12 patients with less than 10% emphysema by low-attenuation area below -950 Hounsfield units (%LAA-950), and (B) 54 patients with %LAA-950 of 10% or more. Imaging findings were assessed retrospectively by two authors, and pulmonary infiltrate and emphysema volumes were measured on CT with 3D Slicer. Differences between the groups in pulmonary infiltrate, emphysema, collapsed-lung, and affected-lung measurements were assessed with the Kruskal-Wallis and Wilcoxon tests, respectively, with statistical significance set at p < 0.05. The left lung was the most frequently involved region in COVID-19 (group A: affected left lung 20.00 vs. affected right lung 18.50; group B: 13.00 vs. 11.50), and left-lung collapse was also more severe than right-lung collapse (group A: 4.95 vs. 4.65; group B: 3.65 vs. 3.15). The two groups differed significantly in the percentage of CT involvement in each lung region (p < 0.05), except for the inflated affected total lung (p = 0.152). In group A, the median percentage of collapsed lung was 20.00 (14.00-30.00) on the left, 18.50 (13.00-30.25) on the right, and 19.00 (13.00-30.00) in total; in group B, it was 13.00 (10.00-14.75), 11.50 (10.00-15.00), and 12.50 (10.00-15.00), respectively.
The percentage of affected left lung is an independent predictor of emphysema in COVID-19 patients. Particular attention should therefore be paid to the left lung, as it is more affected. Patients with lower levels of emphysema may have more collapsed segments, and more collapsed segments may lead to more severe clinical features.
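The grouping above hinges on %LAA-950, the fraction of lung voxels whose attenuation falls below -950 HU. The computation itself is a simple threshold over the lung mask; the sketch below is an illustrative numpy version (function name and toy values are ours, not from the paper, which used 3D Slicer).

```python
import numpy as np

def percent_laa_950(hu_volume, lung_mask):
    """Percentage of lung voxels below -950 HU (%LAA-950), a standard CT
    emphysema index: higher values indicate more low-attenuation lung."""
    lung_voxels = hu_volume[lung_mask]
    return 100.0 * np.count_nonzero(lung_voxels < -950) / lung_voxels.size

# Toy volume: mostly normal lung (~-700 HU) with an emphysematous region (~-970 HU).
vol = np.full((10, 10, 10), -700.0)
vol[:2] = -970.0                        # 200 of 1000 voxels below -950 HU
mask = np.ones_like(vol, dtype=bool)    # pretend the whole volume is lung
laa = percent_laa_950(vol, mask)        # → 20.0
```

A patient with `laa < 10` would fall into group A of the study's stratification, and one with `laa >= 10` into group B.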
Affiliation(s)
- Ke Han
- Department of Cardiothoracic Vascular Surgery, Renmin Hospital, Hubei University of Medicine, Shiyan, Hubei, P. R. China
- Jing Wang
- Department of Dermatology, Renmin Hospital, Hubei University of Medicine, Shiyan, Hubei, P. R. China
- Yulin Zou
- Department of Dermatology, Renmin Hospital, Hubei University of Medicine, Shiyan, Hubei, P. R. China; Department of Dermatology, Jinzhou Medical University Graduate Training Base, Renmin Hospital, Hubei University of Medicine, Shiyan, Hubei, P. R. China
- Yuxin Zhang
- Department of Dermatology, Renmin Hospital, Hubei University of Medicine, Shiyan, Hubei, P. R. China
- Lin Zhou
- Department of Medical Imaging Center, Renmin Hospital, Hubei University of Medicine, Shiyan, Hubei, P. R. China
- Yiping Yin
- Department of Pulmonary & Critical Care Medicine, Renmin Hospital, Hubei University of Medicine, Shiyan, Hubei, P. R. China
5
Sherwani MK, Marzullo A, De Momi E, Calimeri F. Lesion segmentation in lung CT scans using unsupervised adversarial learning. Med Biol Eng Comput 2022; 60:3203-3215. PMID: 36125656; PMCID: PMC9486778; DOI: 10.1007/s11517-022-02651-8.
Abstract
Lesion segmentation in medical images is difficult yet crucial for proper diagnosis and treatment. Identifying lesions in medical images is costly and time-consuming and requires highly specialized knowledge. For this reason, supervised and semi-supervised learning techniques have been developed. Nevertheless, the lack of annotated data, which is common in medical imaging, remains an issue; in this context, interesting approaches can use unsupervised learning to accurately distinguish between healthy tissues and lesions, training the network without annotations. In this work, an unsupervised learning technique is proposed to automatically segment coronavirus disease 2019 (COVID-19) lesions on 2D axial CT lung slices. The proposed approach uses image translation to generate a healthy lung image from an infected one, without the need for lesion annotations. Attention masks are used to further improve the quality of the segmentation. Experiments showed that the proposed approach can segment the lesions, outperforming a range of unsupervised lesion-detection approaches. On the test dataset, the average Dice score, sensitivity, specificity, structure measure, enhanced-alignment measure, and mean absolute error are 0.695, 0.694, 0.961, 0.791, 0.875, and 0.082, respectively. The achieved results are promising compared with the state of the art and could constitute a valuable tool for future developments.
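The metrics this abstract reports are all derived from the overlap between the predicted and ground-truth binary masks. The following is a minimal illustrative sketch (our own function and toy masks, not the paper's evaluation code) of Dice, sensitivity, specificity, and mean absolute error for binary segmentation.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Dice, sensitivity, specificity, and mean absolute error for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.count_nonzero(pred & truth)     # lesion pixels correctly found
    fp = np.count_nonzero(pred & ~truth)    # healthy pixels flagged as lesion
    fn = np.count_nonzero(~pred & truth)    # lesion pixels missed
    tn = np.count_nonzero(~pred & ~truth)   # healthy pixels correctly ignored
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "mae": (fp + fn) / pred.size,       # fraction of mismatched pixels
    }

truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True                      # 16-pixel ground-truth lesion
pred = np.zeros((8, 8), dtype=bool)
pred[2:6, 2:4] = True                       # prediction covers half the lesion
m = segmentation_metrics(pred, truth)
```

With the toy masks above, sensitivity is 0.5 (half the lesion found) while specificity is 1.0 (no false positives), illustrating why the abstract reports both alongside Dice.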
Affiliation(s)
- Moiz Khan Sherwani
- Department of Mathematics and Computer Science, University of Calabria, Rende, Italy
- Aldo Marzullo
- Department of Mathematics and Computer Science, University of Calabria, Rende, Italy
- Elena De Momi
- Department of Electronics, Information and Bioengineering (DEIB), Politecnico di Milano, Milan, Italy
- Francesco Calimeri
- Department of Mathematics and Computer Science, University of Calabria, Rende, Italy
6
Liu S, Cai T, Tang X, Zhang Y, Wang C. COVID-19 diagnosis via chest X-ray image classification based on multiscale class residual attention. Comput Biol Med 2022; 149:106065. PMID: 36081225; PMCID: PMC9433340; DOI: 10.1016/j.compbiomed.2022.106065.
Abstract
To detect COVID-19 effectively, a multiscale class residual attention (MCRA) network for chest X-ray (CXR) image classification is proposed. First, to overcome the data shortage and improve the robustness of the network, pixel-level image mixing of local regions was introduced to achieve data augmentation and reduce noise. Second, a multi-scale fusion strategy was adopted to extract global contextual information at different scales and enhance semantic representation. Finally, class residual attention was employed to generate spatial attention for each class, which avoids inter-class interference and enhances related features to further improve COVID-19 detection. Experimental results show that the network achieves superior diagnostic performance on the COVIDx dataset; its accuracy, PPV, sensitivity, specificity, and F1-score are 97.71%, 96.76%, 96.56%, 98.96%, and 96.64%, respectively. Moreover, heat maps endow the deep model with some interpretability.
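The five figures quoted in this abstract (accuracy, PPV, sensitivity, specificity, F1) all follow from the confusion-matrix counts of the classifier. As a hedged illustration (our own function and made-up counts, not the paper's evaluation code), here is how they relate:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, PPV (precision), sensitivity (recall), specificity, and F1
    from binary confusion-matrix counts, treating COVID-positive as positive."""
    ppv = tp / (tp + fp)
    sens = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": ppv,
        "sensitivity": sens,
        "specificity": tn / (tn + fp),
        "f1": 2 * ppv * sens / (ppv + sens),   # harmonic mean of PPV and sensitivity
    }

# Hypothetical counts for a 200-image test set.
m = classification_metrics(tp=90, fp=5, tn=95, fn=10)
```

Note that F1 is the harmonic mean of PPV and sensitivity, which is why the paper's F1 (96.64%) lies between its PPV (96.76%) and sensitivity (96.56%).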
Affiliation(s)
- Shangwang Liu
- College of Computer and Information Engineering, Henan Normal University, Xinxiang 453007, China; Engineering Lab of Intelligence Business & Internet of Things, Henan Province, China
- Tongbo Cai
- College of Computer and Information Engineering, Henan Normal University, Xinxiang 453007, China; Engineering Lab of Intelligence Business & Internet of Things, Henan Province, China
- Xiufang Tang
- College of Computer and Information Engineering, Henan Normal University, Xinxiang 453007, China; Engineering Lab of Intelligence Business & Internet of Things, Henan Province, China
- Yangyang Zhang
- College of Computer and Information Engineering, Henan Normal University, Xinxiang 453007, China; Engineering Lab of Intelligence Business & Internet of Things, Henan Province, China
- Changgeng Wang
- College of Computer and Information Engineering, Henan Normal University, Xinxiang 453007, China; Engineering Lab of Intelligence Business & Internet of Things, Henan Province, China
7
Hassan H, Ren Z, Zhou C, Khan MA, Pan Y, Zhao J, Huang B. Supervised and weakly supervised deep learning models for COVID-19 CT diagnosis: A systematic review. Comput Methods Programs Biomed 2022; 218:106731. PMID: 35286874; PMCID: PMC8897838; DOI: 10.1016/j.cmpb.2022.106731.
Abstract
Artificial intelligence (AI) and computer vision (CV) methods have become reliable for extracting features from radiological images, aiding COVID-19 diagnosis ahead of pathogenic tests and saving critical time for disease management and control. This review article therefore collates numerous deep-learning-based COVID-19 computed tomography (CT) imaging diagnosis studies, providing a baseline for future research. Compared with previous review articles on the topic, this study organizes the collected literature very differently, in a multi-level arrangement. For this purpose, 71 relevant studies were found using a variety of trustworthy databases and search engines, including Google Scholar, IEEE Xplore, Web of Science, PubMed, Science Direct, and Scopus. We classify the selected literature into multi-level machine learning groups, such as supervised and weakly supervised learning. Our review reveals that weak supervision has been adopted more extensively for COVID-19 CT diagnosis than supervised learning. Weakly supervised (conventional transfer learning) techniques can be utilized effectively in real-time clinical practice by reusing sophisticated learned features rather than over-parameterizing standard models. Few-shot and self-supervised learning are recent trends for addressing data scarcity and model efficacy. Deep learning models are mainly utilized for disease management and control, so this review helps readers comprehend the relevant perspectives of deep learning approaches for ongoing COVID-19 CT diagnosis research.
Affiliation(s)
- Haseeb Hassan
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen 518118, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen, China; College of Applied Sciences, Shenzhen University, Shenzhen 518060, China
- Zhaoyu Ren
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen 518118, China
- Chengmin Zhou
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen 518118, China
- Muazzam A Khan
- Department of Computer Sciences, Quaid-i-Azam University, Islamabad, Pakistan
- Yi Pan
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China
- Jian Zhao
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen 518118, China
- Bingding Huang
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen 518118, China