1
Muksimova S, Umirzakova S, Kang S, Cho YI. CerviLearnNet: Advancing cervical cancer diagnosis with reinforcement learning-enhanced convolutional networks. Heliyon 2024; 10:e29913. [PMID: 38694035] [PMCID: PMC11061669] [DOI: 10.1016/j.heliyon.2024.e29913]
Abstract
Cervical cancer is one of the most dangerous diseases a woman can face, with many negative consequences; regular screening and treatment of precancerous lesions play a vital role in the fight against it. It is becoming increasingly common in medical practice to predict the early stages of serious illnesses, such as heart attacks, kidney failure, and cancer, using machine learning-based techniques. To address the obstacles of automated cervical cell classification, we propose auxiliary modules and a special residual block that record contextual interactions between object classes and support the object reference strategy. Going beyond the latest state-of-the-art classification methods, we create a new architecture, the Reinforcement Learning Cancer Network ("RL-CancerNet"), which diagnoses cervical cancer with high accuracy. We trained and tested our method on two well-known publicly available datasets, SipaKMeD and Herlev, to assess it and enable comparison with earlier methods; the cervical cancer images in these datasets were labeled manually. Our study shows that, compared with previous approaches to classifying cervical cancer from early cellular changes, the proposed approach yields more reliable and stable results across datasets of vastly different sizes, indicating that it should be effective on other datasets as well.
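The core building block named above, a residual block, can be illustrated with a minimal sketch. This is a generic skip connection in plain Python, not the paper's special block with auxiliary modules; the layer shapes and weights are illustrative assumptions.

```python
# Minimal sketch of a residual ("skip") connection: y = x + F(x),
# where F is a small sub-network. All names and shapes are illustrative.

def relu(x):
    return [max(0.0, v) for v in x]

def linear(x, w, b):
    # One dense layer: y_i = sum_j w[i][j] * x[j] + b[i]
    return [sum(wi[j] * x[j] for j in range(len(x))) + bi
            for wi, bi in zip(w, b)]

def residual_block(x, w1, b1, w2, b2):
    # F(x): two dense layers with a ReLU between, then add the input back.
    h = relu(linear(x, w1, b1))
    f = linear(h, w2, b2)
    return [xi + fi for xi, fi in zip(x, f)]

# With zero residual weights, F(x) = 0 and the block passes x through,
# which is what makes very deep residual networks easy to optimize.
x = [1.0, -2.0, 3.0]
zeros_w = [[0.0] * 3 for _ in range(3)]
zeros_b = [0.0] * 3
print(residual_block(x, zeros_w, zeros_b, zeros_w, zeros_b))  # [1.0, -2.0, 3.0]
```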
Affiliation(s)
- Shakhnoza Muksimova
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 461-701, Gyeonggi-do, South Korea
- Sabina Umirzakova
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 461-701, Gyeonggi-do, South Korea
- Seokwhan Kang
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 461-701, Gyeonggi-do, South Korea
- Young Im Cho
- Department of Computer Engineering, Gachon University, Sujeong-gu, Seongnam-si 461-701, Gyeonggi-do, South Korea
2
Christou CD, Tsoulfas G. Challenges involved in the application of artificial intelligence in gastroenterology: The race is on! World J Gastroenterol 2023; 29:6168-6178. [PMID: 38186861] [PMCID: PMC10768398] [DOI: 10.3748/wjg.v29.i48.6168]
Abstract
Gastroenterology is a particularly data-rich field, generating vast repositories of data that are fruitful ground for artificial intelligence (AI) and machine learning (ML) applications. In this opinion review, we first elaborate on the current status of AI/ML-based software in gastroenterology. To date, AI/ML-based models have been developed for the following applications: models integrated into the clinical setting that monitor real-time patient data to flag patients at high risk of developing a gastrointestinal disease; models employing non-invasive parameters that provide accurate diagnoses, aiming to replace, minimize, or refine the indications for endoscopy; models utilizing genomic data to diagnose various gastrointestinal diseases; computer-aided diagnosis systems facilitating the interpretation of endoscopy images; models that facilitate treatment allocation and predict the response to treatment; and, finally, prognostic models predicting complications, recurrence following treatment, and overall survival. We then elaborate on several challenges that may hinder the widespread application of AI in healthcare and gastroenterology, specifically concerns regarding accuracy, cost-effectiveness, cybersecurity, interpretability, oversight, and liability. While AI is unlikely to replace physicians, it will transform the skillset demanded of future physicians; thus, physicians are expected to engage with AI to avoid becoming obsolete.
Affiliation(s)
- Chrysanthos D Christou
- Department of Transplantation Surgery, Hippokration General Hospital, School of Medicine, Aristotle University of Thessaloniki, Thessaloniki 54622, Greece
- Center for Research and Innovation in Solid Organ Transplantation, School of Medicine, Aristotle University of Thessaloniki, Thessaloniki 54622, Greece
- Georgios Tsoulfas
- Department of Transplantation Surgery, Hippokration General Hospital, School of Medicine, Aristotle University of Thessaloniki, Thessaloniki 54622, Greece
- Center for Research and Innovation in Solid Organ Transplantation, School of Medicine, Aristotle University of Thessaloniki, Thessaloniki 54622, Greece
3
Simović A, Lutovac-Banduka M, Lekić S, Kuleto V. Smart Visualization of Medical Images as a Tool in the Function of Education in Neuroradiology. Diagnostics (Basel) 2022; 12:3208. [PMID: 36553215] [PMCID: PMC9777748] [DOI: 10.3390/diagnostics12123208]
Abstract
The smart visualization of medical images (SVMI) model is based on multi-detector computed tomography (MDCT) data sets and can provide a clearer view of changes in the brain, such as tumors (expansive changes), bleeding, and ischemia, on native imaging (i.e., a non-contrast MDCT scan). The new SVMI method provides a more precise representation of the brain image by hiding pixels that carry no information and by rescaling and coloring the range of pixels essential for detecting and visualizing the disease. In addition, SVMI can be used to avoid additional patient exposure to ionizing radiation and to contrast media, the administration of which can cause allergic reactions. Results of the SVMI model were compared with the final diagnosis of the disease after additional diagnostics and confirmation by neuroradiologists, highly trained physicians with many years of experience. The application of the presented SVMI model can optimize the engagement of material, medical, and human resources and has the potential for general application in medical training, education, and clinical research.
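The pixel-hiding and rescaling step described above can be sketched as a simple intensity window. The window bounds and the 8-bit output range below are assumptions for illustration; the published model's exact mapping and coloring scheme are not reproduced.

```python
# Sketch of intensity windowing: values outside a chosen window are hidden
# (set to 0), and the in-window range is stretched linearly to 0..255 for
# display. Window bounds and the 8-bit target are illustrative assumptions.

def window_rescale(pixels, lo, hi):
    """Map values in [lo, hi] linearly to 0..255; hide everything else."""
    span = hi - lo
    out = []
    for p in pixels:
        if p < lo or p > hi:
            out.append(0)  # non-informative pixel hidden
        else:
            out.append(round((p - lo) / span * 255))
    return out

# Example: a brain-tissue-like window of 0..80 (illustrative units).
print(window_rescale([-100, 0, 40, 80, 300], 0, 80))  # [0, 0, 128, 255, 0]
```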
Affiliation(s)
- Aleksandar Simović
- Department of Information Technology, Information Technology School ITS, 11000 Belgrade, Serbia
- Maja Lutovac-Banduka
- Department of RT-RK Institute, RT-RK for Computer Based Systems, 21000 Novi Sad, Serbia
- Snežana Lekić
- Department of Emergency Neuroradiology, University Clinical Centre of Serbia UKCS, 11000 Belgrade, Serbia
- Valentin Kuleto
- Department of Information Technology, Information Technology School ITS, 11000 Belgrade, Serbia
4
Research on lung nodule recognition algorithm based on deep feature fusion and MKL-SVM-IPSO. Sci Rep 2022; 12:17403. [PMID: 36257988] [PMCID: PMC9579155] [DOI: 10.1038/s41598-022-22442-3]
Abstract
A lung computer-aided diagnosis (CAD) system can provide auxiliary third-party opinions for doctors and improve the accuracy of lung nodule recognition. The selection and fusion of nodule features and the advancement of recognition algorithms are crucial to improving lung CAD systems. Based on the HDL model, this paper focuses on three key algorithms of a lung CAD system: feature extraction, feature fusion, and nodule recognition. First, CBAM is embedded into VGG16 and VGG19 to construct the feature extraction models AE-VGG16 and AE-VGG19, so that the network pays more attention to key feature information in the nodule description. Then, feature dimensionality reduction based on PCA and feature fusion based on CCA are sequentially performed on the extracted deep features to obtain low-dimensional fusion features. Finally, the fusion features are input into the proposed MKL-SVM-IPSO model, which uses an improved particle swarm optimization algorithm to speed up training and obtain the globally optimal parameter set. The public dataset LUNA16 was selected for the experiment. The results show that the lung nodule recognition accuracy of the proposed lung CAD system reaches 99.56%, with sensitivity and F1-score of 99.3% and 0.9965, respectively, reducing the possibility of false and missed detection of nodules.
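The particle swarm optimization underlying the MKL-SVM-IPSO parameter search can be sketched in its textbook form. This is plain PSO on a toy one-dimensional objective, not the paper's improved (IPSO) variant; all hyperparameters below are assumptions.

```python
import random

# Textbook particle swarm optimization: each particle is pulled toward its
# own best position (pbest) and the swarm's best (gbest). In a CAD pipeline
# the "position" would encode SVM kernel parameters; here it is a scalar.

def pso(objective, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    pos = [rng.uniform(-10.0, 10.0) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                       # each particle's best-seen position
    gbest = min(pbest, key=objective)    # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # Velocity update: inertia + cognitive pull + social pull.
            vel[i] = (w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest, key=objective)
    return gbest

# Toy objective with its minimum at x = 3; PSO should land close to it.
best = pso(lambda x: (x - 3.0) ** 2)
print(best)
```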
5
Du W, Rao N, Yong J, Wang Y, Hu D, Gan T, Zhu L, Zeng B. Improving the Classification Performance of Esophageal Disease on Small Dataset by Semi-supervised Efficient Contrastive Learning. J Med Syst 2021; 46:4. [PMID: 34807297] [DOI: 10.1007/s10916-021-01782-z]
Abstract
The classification of esophageal disease based on gastroscopic images is important in clinical treatment and is also helpful in providing patients with follow-up treatment plans and preventing lesion deterioration. In recent years, deep learning has achieved many satisfactory results in gastroscopic image classification tasks. However, most such methods need a training set consisting of large numbers of images labeled by experienced experts. To reduce the image annotation burden and improve classification on small labeled gastroscopic image datasets, this study proposed a novel semi-supervised efficient contrastive learning (SSECL) classification method for esophageal disease. First, an efficient contrastive pair generation (ECPG) module was proposed to generate efficient contrastive pairs (ECPs), taking advantage of the highly similar features of images from the same lesion. Then, an unsupervised visual feature representation containing the general features of esophageal gastroscopic images is learned by unsupervised efficient contrastive learning (UECL). Finally, the feature representation is transferred to the downstream esophageal disease classification task. The experimental results demonstrate that the classification accuracy of SSECL is 92.57%, better than that of other state-of-the-art semi-supervised methods and higher than that of a transfer learning (TL)-based classification method by 2.28%. Thus, SSECL addresses the challenging problem of improving classification results on small gastroscopic image datasets by fully utilizing unlabeled gastroscopic images and the high-similarity information among images from the same lesion. It also brings new insights into medical image classification tasks.
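The contrastive idea behind SSECL (a positive pair drawn from the same lesion should score higher similarity than unrelated negatives) can be sketched with an InfoNCE-style loss. The vectors, temperature, and pairing below are illustrative stand-ins, not the paper's ECPG module.

```python
import math

# InfoNCE-style contrastive loss sketch: the loss is the negative log
# probability of picking the positive among {positive} + negatives under a
# temperature-scaled softmax over cosine similarities.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """-log( exp(sim(a,p)/t) / sum over positive and negatives of exp(sim/t) )."""
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]
    m = max(logits)                                  # stabilize the softmax
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))

anchor   = [1.0, 0.0]
positive = [0.9, 0.1]   # near-duplicate view of the same lesion: low loss
negative = [0.0, 1.0]   # unrelated image
loss_good = info_nce(anchor, positive, [negative])
loss_bad  = info_nce(anchor, negative, [positive])
print(loss_good < loss_bad)  # True
```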
Affiliation(s)
- Wenju Du
- Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Nini Rao
- Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Jiahao Yong
- Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Yingchun Wang
- Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Dingcan Hu
- Center for Informational Biology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Tao Gan
- Digestive Endoscopic Center of West China Hospital, Sichuan University, Chengdu, 610017, China
- Linlin Zhu
- Digestive Endoscopic Center of West China Hospital, Sichuan University, Chengdu, 610017, China
- Bing Zeng
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 610054, China