1. Alazwari S, Alsamri J, Asiri MM, Maashi M, Asklany SA, Mahmud A. Computer-aided diagnosis for lung cancer using waterwheel plant algorithm with deep learning. Sci Rep 2024;14:20647. PMID: 39232180; PMCID: PMC11375088; DOI: 10.1038/s41598-024-71551-8.
Abstract
Lung cancer (LC) is a life-threatening disease worldwide, but earlier diagnosis and treatment can save lives. Early detection of malignant cells in the lungs, the organs responsible for oxygenating the body and expelling carbon dioxide, is therefore critical. Although computed tomography (CT) is the best imaging approach in the healthcare sector, it is challenging for physicians to identify and interpret tumours on CT scans. Artificial intelligence (AI)-based LC diagnosis on CT can help radiologists diagnose earlier, enhance performance, and decrease false negatives. Deep learning (DL) for detecting lymph node involvement on histopathological slides has become popular owing to its great significance for patient diagnosis and treatment. This study introduces a computer-aided diagnosis for LC utilizing the Waterwheel Plant Algorithm with DL (CADLC-WWPADL). The primary aim of the CADLC-WWPADL approach is to classify and identify the presence of LC on CT scans. The method uses a lightweight MobileNet model for feature extraction and employs WWPA for hyperparameter tuning. Furthermore, a symmetrical autoencoder (SAE) model is utilized for classification. An experimental evaluation demonstrates the detection performance of the CADLC-WWPADL technique. An extensive comparative study reports that it outperforms other models, with a maximum accuracy of 99.05% on the benchmark CT image dataset.
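The hyperparameter-tuning role played here by WWPA can be illustrated with a generic search loop. As a hedged stand-in, the sketch below uses plain random search over a hypothetical objective, not the authors' WWPA metaheuristic or their MobileNet/SAE model:

```python
import random

# Hypothetical smooth surrogate for validation accuracy, peaked near
# lr=0.001 and dropout=0.3 (an assumption for illustration only).
def objective(lr, dropout):
    return 1.0 - abs(lr - 0.001) * 100 - abs(dropout - 0.3)

# Generic search loop of the kind a metaheuristic like WWPA drives:
# sample candidate hyperparameters, score them, keep the best.
def random_search(n_iters=200, seed=0):
    rng = random.Random(seed)
    best_score, best_params = float("-inf"), None
    for _ in range(n_iters):
        lr = rng.uniform(1e-4, 1e-2)       # learning-rate range (assumed)
        dropout = rng.uniform(0.0, 0.5)    # dropout range (assumed)
        score = objective(lr, dropout)
        if score > best_score:
            best_score, best_params = score, (lr, dropout)
    return best_score, best_params

best_score, (lr, dropout) = random_search()
```

A population-based metaheuristic would replace the independent sampling with guided updates of a candidate population, but the evaluate-and-keep-best skeleton is the same.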
Affiliation(s)
- Sana Alazwari
- Department of Information Technology, College of Computers and Information Technology, Taif University, P.O. Box 11099, 21944, Taif, Saudi Arabia
- Jamal Alsamri
- Department of Biomedical Engineering, College of Engineering, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Mashael M Asiri
- Department of Computer Science, Applied College at Mahayil, King Khalid University, Abha, Saudi Arabia
- Mashael Maashi
- Department of Software Engineering, College of Computer and Information Sciences, King Saud University, PO Box 103786, 11543, Riyadh, Saudi Arabia
- Somia A Asklany
- Department of Computer Science and Information Technology, Faculty of Sciences and Arts, Northern Border University, Turaif, 91431, Arar, Saudi Arabia
- Ahmed Mahmud
- Research Center, Future University in Egypt, New Cairo, 11835, Egypt
2. Kumaran SY, Jeya JJ, Mahesh TR, Khan SB, Alzahrani S, Alojail M. Explainable lung cancer classification with ensemble transfer learning of VGG16, ResNet50 and InceptionV3 using Grad-CAM. BMC Med Imaging 2024;24:176. PMID: 39030496; PMCID: PMC11264852; DOI: 10.1186/s12880-024-01345-x.
Abstract
Medical imaging stands as a critical component in diagnosing various diseases, where traditional methods often rely on manual interpretation and conventional machine learning techniques. These approaches, while effective, come with inherent limitations such as subjectivity in interpretation and constraints in handling complex image features. This research paper proposes an integrated deep learning approach utilizing pre-trained models (VGG16, ResNet50, and InceptionV3) combined within a unified framework to improve diagnostic accuracy in medical imaging. The method focuses on lung cancer detection using images resized and converted to a uniform format to optimize performance and ensure consistency across datasets. The proposed model leverages the strengths of each pre-trained network, achieving a high degree of feature extraction and robustness by freezing the early convolutional layers and fine-tuning the deeper layers. Additionally, techniques such as SMOTE and Gaussian blur are applied to address class imbalance and enhance model training on underrepresented classes. The model's performance was validated on the IQ-OTH/NCCD lung cancer dataset, collected at the Iraq Oncology Teaching Hospital/National Center for Cancer Diseases over three months in fall 2019. The proposed model achieved an accuracy of 98.18%, with notably high precision and recall across all classes. This improvement highlights the potential of integrated deep learning systems in medical diagnostics, providing a more accurate, reliable, and efficient means of disease detection.
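One common way to fuse predictions from several fine-tuned backbones such as VGG16, ResNet50, and InceptionV3 is soft voting, i.e. averaging per-model class probabilities. A minimal NumPy sketch, assuming soft voting and made-up probability vectors (the abstract does not state the paper's exact fusion rule):

```python
import numpy as np

def soft_vote(prob_list):
    """Average per-model softmax outputs (each shape: n_samples x n_classes)."""
    return np.mean(np.stack(prob_list, axis=0), axis=0)

# Hypothetical per-model class probabilities for one CT image.
vgg = np.array([[0.7, 0.2, 0.1]])
res = np.array([[0.6, 0.3, 0.1]])
inc = np.array([[0.5, 0.4, 0.1]])

fused = soft_vote([vgg, res, inc])        # [[0.6, 0.3, 0.1]]
pred = int(np.argmax(fused, axis=1)[0])   # class 0 wins here
```

Soft voting tends to be more robust than hard (majority) voting when the models are confident to different degrees, since it preserves each network's probability mass.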
Affiliation(s)
- Yogesh Kumaran S
- Department of Computer Science & Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bengaluru, 562112, India
- J Jospin Jeya
- Department of Computer Science and Engineering, SRM Institute of Science and Technology, Ramapuram, Chennai, India
- Mahesh T R
- Department of Computer Science & Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bengaluru, 562112, India
- Surbhi Bhatia Khan
- School of Science, Engineering and Environment, University of Salford, Manchester, UK
- Saeed Alzahrani
- Management Information System Department, College of Business Administration, King Saud University, Riyadh, Saudi Arabia
- Mohammed Alojail
- Management Information System Department, College of Business Administration, King Saud University, Riyadh, Saudi Arabia
3. Gharaibeh NY, De Fazio R, Al-Naami B, Al-Hinnawi AR, Visconti P. Automated Lung Cancer Diagnosis Applying Butterworth Filtering, Bi-Level Feature Extraction, and Sparse Convolutional Neural Network to LUNA16 CT Images. J Imaging 2024;10:168. PMID: 39057739; PMCID: PMC11277772; DOI: 10.3390/jimaging10070168.
Abstract
Accurate prognosis and diagnosis are crucial for selecting and planning lung cancer treatments. As a result of the rapid development of medical imaging technology, the use of computed tomography (CT) scans in pathology is becoming standard practice. Computer-assisted diagnosis, which relies on precise and effective analysis of pathology images, involves an intricate interplay of requirements and obstacles. In recent years, pathology image analysis tasks such as tumor region identification, prognosis prediction, tumor microenvironment characterization, and metastasis detection have witnessed the considerable potential of artificial intelligence, especially deep learning techniques. In this context, an artificial intelligence (AI)-based methodology for lung cancer diagnosis is proposed. As a first processing step, Butterworth smoothing was applied to the input images from the LUNA16 lung cancer dataset to remove noise without significantly degrading image quality. Bi-level feature selection then used the Chaotic Crow Search Algorithm and Random Forest (CCSA-RF) approach to select features such as diameter, margin, spiculation, lobulation, subtlety, and malignancy, after which feature extraction was performed with the Multi-space Image Reconstruction (MIR) method and the Grey Level Co-occurrence Matrix (GLCM). Finally, Lung Tumor Severity Classification (LTSC) was implemented using a Sparse Convolutional Neural Network (SCNN) with a Probabilistic Neural Network (PNN). The developed method detects benign, normal, and malignant lung cancer images using the PNN algorithm, which reduces complexity and efficiently provides classification results. Performance parameters, namely accuracy, precision, F-score, sensitivity, and specificity, were determined to evaluate the effectiveness of the implemented hybrid method and to compare it with other solutions in the literature.
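The Butterworth smoothing step above can be illustrated with the standard frequency-domain Butterworth low-pass filter; the cutoff and order below are assumptions for the sketch, not the paper's settings:

```python
import numpy as np

def butterworth_lowpass(img, cutoff=20.0, order=2):
    """Frequency-domain Butterworth low-pass: H = 1 / (1 + (D/D0)^(2n))."""
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None] * rows   # vertical frequency index
    v = np.fft.fftfreq(cols)[None, :] * cols   # horizontal frequency index
    d = np.sqrt(u**2 + v**2)                   # distance from the DC term
    h = 1.0 / (1.0 + (d / cutoff) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * h))

# Toy stand-in for a CT slice: flat background plus Gaussian noise.
rng = np.random.default_rng(0)
noisy = 100 + rng.normal(0, 10, (64, 64))
smooth = butterworth_lowpass(noisy)            # noise variance drops; mean kept
```

The Butterworth transfer function rolls off gradually with frequency, which is why it smooths noise without the ringing artifacts of an ideal (hard-cutoff) low-pass filter.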
Affiliation(s)
- Nasr Y. Gharaibeh
- Department of Electrical Engineering, Al-Balqa Applied University, Salt 21163, Jordan
- Roberto De Fazio
- Department of Innovation Engineering, University of Salento, 73100 Lecce, Italy
- Bassam Al-Naami
- Department of Biomedical Engineering, Faculty of Engineering, The Hashemite University, Zarqa 13133, Jordan
- Abdel-Razzak Al-Hinnawi
- Department of Medical Imaging, Faculty of Allied Medical Sciences, Isra University, Amman 11622, Jordan
- Paolo Visconti
- Department of Innovation Engineering, University of Salento, 73100 Lecce, Italy
4. Bhatia I, Aarti, Ansarullah SI, Amin F, Alabrah A. An Advanced Lung Carcinoma Prediction and Risk Screening Model Using Transfer Learning. Diagnostics (Basel) 2024;14:1378. PMID: 39001268; PMCID: PMC11241604; DOI: 10.3390/diagnostics14131378.
Abstract
Lung cancer, also known as lung carcinoma, has a high death rate, but an early diagnosis can substantially reduce this risk. Current prediction models face challenges such as low accuracy, excessive noise, and low contrast. To resolve these problems, an advanced lung carcinoma prediction and risk screening model using transfer learning is proposed. The model first preprocesses lung computed tomography images for noise removal, contrast stretching, convex-hull lung region extraction, and edge enhancement. The next phase segments the preprocessed images using the modified Bates distribution coati optimization (B-RGS) algorithm to extract key features. The PResNet classifier then categorizes the cancer as normal or abnormal, and for abnormal cases, further risk screening determines whether the risk is low or high. Experimental results show that the proposed model performs competitively with other state-of-the-art models, achieving accuracy, precision, and recall of 98.21%, 98.71%, and 97.46%, respectively. These results validate the efficiency and effectiveness of the suggested methodology for early lung carcinoma prediction and risk assessment.
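The contrast-stretching preprocessing step mentioned above can be sketched as a generic min-max stretch; the output range and toy patch below are assumptions, since the abstract does not give the paper's parameters:

```python
import numpy as np

def contrast_stretch(img, out_min=0.0, out_max=255.0):
    """Linearly rescale pixel intensities to [out_min, out_max]."""
    lo, hi = img.min(), img.max()
    if hi == lo:                       # flat image: nothing to stretch
        return np.full_like(img, out_min, dtype=float)
    return (img - lo) / (hi - lo) * (out_max - out_min) + out_min

# Toy low-contrast patch standing in for a CT region of interest.
ct = np.array([[40.0, 60.0], [80.0, 120.0]])
stretched = contrast_stretch(ct)       # now spans the full 0-255 range
```

Stretching widens the usable dynamic range before segmentation, which is exactly why it helps with the "low contrast" problem the abstract cites.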
Affiliation(s)
- Isha Bhatia
- Department of Computer Science and Engineering, Lovely Professional University, Phagwara 144001, India
- Aarti
- Department of Computer Science and Engineering, Lovely Professional University, Phagwara 144001, India
- Syed Immamul Ansarullah
- Department of IMBA (Integrated Master of Business Administration), North Campus Delina, The University of Kashmir, Srinagar 190001, India
- Farhan Amin
- School of Computer Science and Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
- Amerah Alabrah
- Department of Information Systems, College of Computer and Information Science, King Saud University, Riyadh 11543, Saudi Arabia
5. Pathan RK, Shorna IJ, Hossain MS, Khandaker MU, Almohammed HI, Hamd ZY. The efficacy of machine learning models in lung cancer risk prediction with explainability. PLoS One 2024;19:e0305035. PMID: 38870229; PMCID: PMC11175504; DOI: 10.1371/journal.pone.0305035.
Abstract
Among the many types of cancer, lung cancer remains to date one of the deadliest worldwide. Many researchers, scientists, doctors, and people from other fields continuously contribute to its early prediction and diagnosis. One significant problem in prediction is the black-box nature of machine learning models: although detection rates are comparatively satisfactory, it is often unclear how a model reached a particular decision, causing trust issues among patients and healthcare workers. This work applies multiple machine learning models to a numerical dataset of lung cancer-relevant parameters and compares their performance and accuracy. After comparison, each model is explained using different methods. The main contribution of this research is to give logical explanations of why a model reached a particular decision in order to build trust. The research is also compared with a previous study that used a similar dataset and took expert opinions on its proposed model; with hyperparameter tuning, our approach achieved better results than that model and the specialist opinion, with an improved accuracy of almost 100% across all four models.
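The model-explanation idea can be illustrated with permutation importance, one simple model-agnostic technique of the kind used in such studies; the tiny rule-based "model" and data below are hypothetical, not from the paper's dataset or its actual explanation methods:

```python
import random

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, n_repeats=30, seed=0):
    """Mean accuracy drop when one feature column is randomly shuffled."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    drops = []
    for _ in range(n_repeats):
        col = [x[feature] for x in X]
        rng.shuffle(col)                       # break the feature/label link
        Xp = [x[:feature] + [c] + x[feature + 1:] for x, c in zip(X, col)]
        drops.append(base - accuracy(model, Xp, y))
    return sum(drops) / n_repeats

# Toy model: predicts 1 iff feature 0 exceeds a threshold; feature 1 is noise.
model = lambda x: int(x[0] > 0.5)
X = [[0.9, 0.1], [0.8, 0.9], [0.2, 0.8], [0.1, 0.2]]
y = [1, 1, 0, 0]

imp0 = permutation_importance(model, X, y, feature=0)  # informative feature
imp1 = permutation_importance(model, X, y, feature=1)  # ignored feature
```

A feature whose shuffling does not hurt accuracy contributes nothing to the decision, which is the kind of per-feature evidence that helps build the trust the abstract argues for.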
Affiliation(s)
- Refat Khan Pathan
- Department of Computing and Information Systems, School of Engineering and Technology, Sunway University, Selangor, Malaysia
- Md. Sayem Hossain
- School of Computing Science, Faculty of Innovation and Technology, Taylor’s University Lakeside Campus, Selangor, Malaysia
- Mayeen Uddin Khandaker
- Applied Physics and Radiation Technologies Group, CCDCU, School of Engineering and Technology, Sunway University, Selangor, Malaysia
- Faculty of Graduate Studies, Daffodil International University, Daffodil Smart City, Savar, Dhaka, Bangladesh
- Huda I. Almohammed
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Zuhal Y. Hamd
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
6. Guo L, Liu L, Zhao Z, Xia X. An improved RIME optimization algorithm for lung cancer image segmentation. Comput Biol Med 2024;174:108219. PMID: 38581997; DOI: 10.1016/j.compbiomed.2024.108219.
Abstract
Lung cancer is a prevalent form of cancer worldwide, necessitating early and accurate diagnosis for successful treatment. Within medical image processing, segmentation plays a vital role in diagnosis. This study applies swarm intelligence algorithms to segment lung cancer pathological images at three levels. The original RIME algorithm is extended with the whale prey-search mechanism and a random mutation strategy, resulting in an improved version named WDRIME that aims to accelerate convergence and avoid local optima (LO). Additionally, the study introduces a multilevel image segmentation method for lung cancer based on the improved algorithm. WDRIME's performance is showcased by comparison with state-of-the-art algorithms on the IEEE CEC2014 benchmark. To design a framework for lung cancer image segmentation, the paper combines the WDRIME algorithm with the multilevel segmentation method and evaluates the segmentation results with metrics such as PSNR, SSIM, and FSIM. Overall, the analysis confirms that the proposed algorithm surpasses the others in convergence speed and accuracy. The model constitutes a high-quality segmentation method and offers practical support for in-depth exploration of lung cancer pathological images.
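Of the segmentation-quality metrics the study reports, PSNR is straightforward to sketch; the arrays below are toy stand-ins for a reference slice and its segmented reconstruction, not the paper's data:

```python
import math
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-shape images."""
    mse = float(np.mean((reference.astype(float) - test.astype(float)) ** 2))
    if mse == 0:
        return math.inf                 # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

ref = np.zeros((8, 8))
seg = np.full((8, 8), 10.0)             # uniform error of 10 grey levels
value = psnr(ref, seg)                  # 10*log10(255^2 / 100) ≈ 28.13 dB
```

Higher PSNR means the thresholded (segmented) image stays closer to the original, which is why multilevel-thresholding papers use it alongside structural metrics like SSIM and FSIM.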
Affiliation(s)
- Lei Guo
- Intensive Care Unit, The Second Affiliated Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, 325088, China
- Lei Liu
- College of Computer Science, Sichuan University, Chengdu, Sichuan, 610065, China
- Zhiguang Zhao
- Department of Pathology, The Second Affiliated Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, 325088, China
- Xiaodong Xia
- Department of Respiratory Medicine, The Second Affiliated Hospital of Wenzhou Medical University, Wenzhou, Zhejiang, 325088, China
7. Pérez-Cano FD, Parra-Cabrera G, Vilchis-Torres I, Reyes-Lagos JJ, Jiménez-Delgado JJ. Exploring Fracture Patterns: Assessing Representation Methods for Bone Fracture Simulation. J Pers Med 2024;14:376. PMID: 38673003; PMCID: PMC11051195; DOI: 10.3390/jpm14040376.
Abstract
Fracture pattern acquisition and representation in human bones play a crucial role in medical simulation, diagnostics, and treatment planning. This article presents a comprehensive review of methodologies employed in acquiring and representing bone fracture patterns. Several techniques, including segmentation algorithms, curvature analysis, and deep learning-based approaches, are reviewed to determine their effectiveness in accurately identifying fracture zones. Additionally, diverse methods for representing fracture patterns are evaluated. The challenges inherent in detecting accurate fracture zones from medical images, the complexities arising from multifragmentary fractures, and the need to automate fracture reduction processes are elucidated. A detailed analysis of the suitability of each representation method for specific medical applications, such as simulation systems, surgical interventions, and educational purposes, is provided. The study explores insights from a broad spectrum of research articles, encompassing diverse methodologies and perspectives. This review identifies potential directions for future research and contributes to advancements in comprehending the acquisition and representation of fracture patterns in human bone.
Affiliation(s)
- Gema Parra-Cabrera
- Department of Computer Science, University of Jaén, 23071 Jaén, Spain
- Ivett Vilchis-Torres
- Centro de Investigación Multidisciplinaria en Educación, Universidad Autónoma del Estado de México, Toluca 50110, Mexico
8. Lindroth H, Nalaie K, Raghu R, Ayala IN, Busch C, Bhattacharyya A, Moreno Franco P, Diedrich DA, Pickering BW, Herasevich V. Applied Artificial Intelligence in Healthcare: A Review of Computer Vision Technology Application in Hospital Settings. J Imaging 2024;10:81. PMID: 38667979; PMCID: PMC11050909; DOI: 10.3390/jimaging10040081.
Abstract
Computer vision (CV), a type of artificial intelligence (AI) that uses digital videos or sequences of images to recognize content, has been used extensively across industries in recent years. In the healthcare industry, however, its applications are limited by factors such as privacy, safety, and ethical concerns. Despite this, CV has the potential to improve patient monitoring and system efficiency while reducing workload. In contrast to previous reviews, we focus on the end-user applications of CV. First, we briefly review and categorize CV applications in other industries (job enhancement, surveillance and monitoring, automation, and augmented reality). We then review developments of CV in hospital, outpatient, and community settings, highlighting recent advances in monitoring delirium, pain and sedation, patient deterioration, mechanical ventilation, mobility, patient safety, surgical applications, quantification of workload in the hospital, and monitoring for patient events outside the hospital. To identify opportunities for future applications, we also completed journey mapping at different system levels. Lastly, we discuss the privacy, safety, and ethical considerations associated with CV and outline processes in algorithm development and testing that limit CV expansion in healthcare. This comprehensive review highlights CV applications and ideas for their expanded use in healthcare.
Affiliation(s)
- Heidi Lindroth
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
- Center for Aging Research, Regenstrief Institute, School of Medicine, Indiana University, Indianapolis, IN 46202, USA
- Center for Health Innovation and Implementation Science, School of Medicine, Indiana University, Indianapolis, IN 46202, USA
- Keivan Nalaie
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Roshini Raghu
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
- Ivan N. Ayala
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
- Charles Busch
- Division of Nursing Research, Department of Nursing, Mayo Clinic, Rochester, MN 55905, USA
- College of Engineering, University of Wisconsin-Madison, Madison, WI 53705, USA
- Pablo Moreno Franco
- Department of Transplantation Medicine, Mayo Clinic, Jacksonville, FL 32224, USA
- Daniel A. Diedrich
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Brian W. Pickering
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Vitaly Herasevich
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55905, USA
9. Chato L, Regentova E. Survey of Transfer Learning Approaches in the Machine Learning of Digital Health Sensing Data. J Pers Med 2023;13:1703. PMID: 38138930; PMCID: PMC10744730; DOI: 10.3390/jpm13121703.
Abstract
Machine learning applied to digital health sensing data has led to numerous research achievements aimed at improving digital health technology. However, using machine learning in digital health poses challenges related to data availability, such as incomplete, unstructured, and fragmented data, as well as issues of data privacy, security, and format standardization, and a risk of bias and discrimination in the resulting models. Developing an accurate prediction model from scratch can therefore be an expensive and complicated task that often requires extensive experiments and complex computations. Transfer learning has emerged as a feasible solution to these issues by transferring knowledge from a previously trained task to develop high-performance prediction models for a new task. This survey provides a comprehensive study of the effectiveness of transfer learning for digital health applications to enhance the accuracy and efficiency of diagnoses and prognoses and to improve healthcare services. The first part presents and discusses the most common digital health sensing technologies as valuable data resources for machine learning applications, including transfer learning. The second part discusses the meaning of transfer learning, clarifying the categories and types of knowledge transfer, and explains transfer learning methods and strategies and their role in addressing the challenges of developing accurate machine learning models on digital health sensing data. These methods include feature extraction, fine-tuning, domain adaptation, multitask learning, federated learning, and few-/single-/zero-shot learning. The survey highlights the key features of each transfer learning method and strategy and discusses the limitations and challenges of using transfer learning for digital health applications. Overall, it aims to inspire researchers to gain knowledge of transfer learning approaches and their applications in digital health, enhance the current approaches, develop new strategies to overcome their limitations, and apply them to a variety of digital health technologies.
Affiliation(s)
- Lina Chato
- Department of Electrical and Computer Engineering, University of Nevada, Las Vegas, NV 89154, USA
10. Yanina IY, Genin VD, Genina EA, Mudrak DA, Navolokin NA, Bucharskaya AB, Kistenev YV, Tuchin VV. Multimodal Diagnostics of Changes in Rat Lungs after Vaping. Diagnostics (Basel) 2023;13:3340. PMID: 37958237; PMCID: PMC10650729; DOI: 10.3390/diagnostics13213340.
Abstract
(1) Background: The use of electronic cigarettes has become widespread in recent years. E-cigarette use leads to milder pathological conditions than traditional cigarette smoking; nevertheless, e-liquid vaping can cause morphological changes in lung tissue that affect and impair gas exchange. This work studied the changes in morphological and optical properties of lung tissue under the action of an e-liquid aerosol. To do this, we implemented a "passive smoking" model, creating a specified aerosol concentration of the glycerol/propylene glycol mixture in the chamber with the animal. (2) Methods: In ex vivo studies, the lungs of Wistar rats were placed in the e-liquid for 1 h. For in vivo studies, Wistar rats were exposed to the e-liquid vapor in an aerosol administration chamber. Lung tissue samples were then examined ex vivo using optical coherence tomography (OCT) and spectrometry with an integrating sphere, and absorption and reduced scattering coefficients were estimated for the control and experimental groups. Histological sections were made according to the standard protocol, followed by hematoxylin and eosin staining. (3) Results: Exposure to e-liquid ex vivo and to aerosol in vivo was found to result in optical clearing of lung tissue. Histological examination of the lung samples showed areas of emphysematous expansion of the alveoli, thickening of the alveolar septa, and plasma permeation, less pronounced in vivo than after ex vivo e-liquid exposure. E-liquid aerosol application increases resolution and improves imaging of lung tissues using OCT. Spectral studies showed significant differences between the control and ex vivo groups in the spectral range of water absorption, which can be associated with dehydration of lung tissue owing to the hyperosmotic properties of glycerol and propylene glycol, the main components of e-liquids. (4) Conclusions: A decrease in the air volume in lung tissue and denser packing of its structure under e-liquid vaping yield better contrast in OCT images compared with intact lung tissue.
Affiliation(s)
- Irina Yu. Yanina
- Institution of Physics, Saratov State University, 410012 Saratov, Russia
- Laboratory of Laser Molecular Imaging and Machine Learning, Tomsk State University, 634050 Tomsk, Russia
- Vadim D. Genin
- Institution of Physics, Saratov State University, 410012 Saratov, Russia
- Laboratory of Laser Molecular Imaging and Machine Learning, Tomsk State University, 634050 Tomsk, Russia
- Science Medical Center, Saratov State University, 410012 Saratov, Russia
- Elina A. Genina
- Institution of Physics, Saratov State University, 410012 Saratov, Russia
- Laboratory of Laser Molecular Imaging and Machine Learning, Tomsk State University, 634050 Tomsk, Russia
- Science Medical Center, Saratov State University, 410012 Saratov, Russia
- Dmitry A. Mudrak
- Department of Pathological Anatomy, Saratov State Medical University, 410012 Saratov, Russia
- Nikita A. Navolokin
- Department of Pathological Anatomy, Saratov State Medical University, 410012 Saratov, Russia
- Experimental Department, Center for Collective Use of Experimental Oncology, Saratov State Medical University, 410012 Saratov, Russia
- State Healthcare Institution, Saratov City Clinical Hospital No. 1 Named after Yu.Ya. Gordeev, 410017 Saratov, Russia
- Alla B. Bucharskaya
- Laboratory of Laser Molecular Imaging and Machine Learning, Tomsk State University, 634050 Tomsk, Russia
- Science Medical Center, Saratov State University, 410012 Saratov, Russia
- Department of Pathological Anatomy, Saratov State Medical University, 410012 Saratov, Russia
- Yury V. Kistenev
- Laboratory of Laser Molecular Imaging and Machine Learning, Tomsk State University, 634050 Tomsk, Russia
- Valery V. Tuchin
- Institution of Physics, Saratov State University, 410012 Saratov, Russia
- Laboratory of Laser Molecular Imaging and Machine Learning, Tomsk State University, 634050 Tomsk, Russia
- Science Medical Center, Saratov State University, 410012 Saratov, Russia
- Institute of Precision Mechanics and Control, FRC “Saratov Scientific Centre of the Russian Academy of Sciences”, 410028 Saratov, Russia
11. Sui G, Zhang Z, Liu S, Chen S, Liu X. Pulmonary nodules segmentation based on domain adaptation. Phys Med Biol 2023;68:155015. PMID: 37406634; DOI: 10.1088/1361-6560/ace498.
Abstract
With the development of deep learning, methods based on transfer learning have advanced medical image segmentation. However, the domain shift and complex background information of medical images limit further improvement of segmentation accuracy. Domain adaptation can compensate for sample shortage by learning important information from a similar source dataset. Therefore, a segmentation method based on adversarial domain adaptation with background masks (ADAB) is proposed in this paper. First, two ADAB networks are built for the source and target data segmentation, respectively. Next, to extract the foreground features that feed the discriminators, background masks are generated with a region-growing algorithm. Then, so that the target network's parameters can be updated without being affected by the conflict between the discriminator's distinguishing of domain differences and the adversarial reduction of domain shift, a gradient reversal layer is embedded in the ADAB model for the target data. Finally, an enhanced boundary loss is derived to make the target network sensitive to the edges of the regions to be segmented. The performance of the proposed method is evaluated on the segmentation of pulmonary nodules in computed tomography images. Experimental results show that the proposed approach is promising for medical image processing.
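The gradient reversal layer mentioned above can be sketched conceptually: the forward pass is the identity, while the backward pass flips (and scales) the gradient flowing back to the feature extractor. This minimal NumPy sketch shows only that behavior, not the ADAB networks or training loop:

```python
import numpy as np

class GradientReversal:
    """Forward: identity. Backward: multiply incoming gradient by -lambda."""

    def __init__(self, lam=1.0):
        self.lam = lam                  # reversal strength, often annealed

    def forward(self, x):
        return x                        # discriminator sees features unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # reversed gradient for the extractor

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
out = grl.forward(x)
grad = grl.backward(np.array([0.2, 0.2, 0.2]))  # [-0.1, -0.1, -0.1]
```

Because the gradient is negated, the feature extractor is pushed to *maximize* the discriminator's loss, driving it toward domain-invariant features while the discriminator itself still trains normally.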
Affiliation(s)
- Guozheng Sui
- College of Automation and Electronic Engineering, Qingdao University of Science and Technology, People's Republic of China
- Zaixian Zhang
- Radiology Department, The Affiliated Hospital of Qingdao University, People's Republic of China
- Shunli Liu
- Radiology Department, The Affiliated Hospital of Qingdao University, People's Republic of China
- Shuang Chen
- College of Automation and Electronic Engineering, Qingdao University of Science and Technology, People's Republic of China
- Xuefeng Liu
- College of Automation and Electronic Engineering, Qingdao University of Science and Technology, People's Republic of China