1. Cadrin-Chênevert A. Navigating Clinical Variability: Transfer Learning's Impact on Imaging Model Performance. Radiol Artif Intell 2024;6:e240263. [PMID: 38900033 DOI: 10.1148/ryai.240263]
Affiliation(s)
- Alexandre Cadrin-Chênevert
- From the CISSS Lanaudière-Medical Imaging, 200 Louis-Vadeboncoeur, Saint-Charles-Borromee, QC, Canada J6E 6J2
2. Wen J, An Y, Shao L, Yin L, Peng Z, Liu Y, Tian J, Du Y. Dual-channel end-to-end network with prior knowledge embedding for improving spatial resolution of magnetic particle imaging. Comput Biol Med 2024;178:108783. [PMID: 38909446 DOI: 10.1016/j.compbiomed.2024.108783]
Abstract
Magnetic particle imaging (MPI) is an emerging non-invasive medical tomography technology based on magnetic particles, with excellent imaging depth penetration and high sensitivity and contrast. Spatial resolution and signal-to-noise ratio (SNR) are key performance metrics for evaluating MPI, and both are directly influenced by the gradient of the selection field (SF). Increasing the SF gradient can improve the spatial resolution of MPI but leads to a decrease in SNR. Deep learning (DL) methods may enable the recovery of high-resolution images from low-resolution ones, improving MPI resolution under low-gradient conditions. However, existing DL methods overlook the physical processes that contribute to the blurring of MPI images, resulting in low interpretability and hindering breakthroughs in resolution. To address this issue, we propose a dual-channel end-to-end network with prior knowledge embedding for MPI (DENPK-MPI) to effectively establish a latent mapping between low-gradient and high-gradient images, thus improving MPI resolution without compromising SNR. By seamlessly integrating the MPI point spread function (PSF) with the DL paradigm, DENPK-MPI achieves a significant improvement in spatial resolution. Simulation, phantom, and in vivo MPI experiments collectively confirm that our method can improve the resolution of low-gradient MPI images without sacrificing SNR, reducing the full width at half maximum by 14.8%-23.8%, with image reconstruction accuracy 18.2%-27.3% higher than other DL methods. In conclusion, we propose a DL method that incorporates MPI prior knowledge, improving the spatial resolution of MPI without compromising SNR and offering improved potential for biomedical application.
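The resolution-SNR trade-off described in this abstract stems from the point spread function: the lower the selection-field gradient, the broader the PSF that blurs the reconstructed image. A minimal numpy sketch of that blurring (illustrative only; the Gaussian kernel shape and width are assumptions for this sketch, whereas DENPK-MPI embeds the actual measured MPI PSF in its network):

```python
import numpy as np

def gaussian_psf(size=9, fwhm=3.0):
    # 1-D Gaussian point-spread function; here the FWHM stands in for the
    # blur at a given selection-field gradient (wider FWHM = lower gradient).
    sigma = fwhm / 2.3548  # FWHM -> sigma for a Gaussian
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

signal = np.zeros(32)
signal[16] = 1.0  # ideal point source
blurred = np.convolve(signal, gaussian_psf(), mode="same")
print(blurred.max() < 1.0)  # True: the point source is spread out and dimmed
```

The network's job is essentially to invert this convolution while staying consistent with the physics encoded in the PSF.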
Affiliation(s)
- Jiaxuan Wen
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, China; School of Artificial Intelligence, The University of Chinese Academy of Sciences, Beijing, China
- Yu An
- School of Engineering Medicine, Beihang University, Beijing, China; The Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China
- Lizhi Shao
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, China; School of Artificial Intelligence, The University of Chinese Academy of Sciences, Beijing, China
- Lin Yin
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, China; School of Artificial Intelligence, The University of Chinese Academy of Sciences, Beijing, China
- Zhengyao Peng
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, China; School of Artificial Intelligence, The University of Chinese Academy of Sciences, Beijing, China
- Yanjun Liu
- School of Engineering Medicine, Beihang University, Beijing, China; The Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China
- Jie Tian
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, China; School of Engineering Medicine, Beihang University, Beijing, China; The Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China
- Yang Du
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, China; School of Artificial Intelligence, The University of Chinese Academy of Sciences, Beijing, China.
3. Zou C, Ji H, Cui J, Qian B, Chen YC, Zhang Q, He S, Sui Y, Bai Y, Zhong Y, Zhang X, Ni T, Che Z. Preliminary study on AI-assisted diagnosis of bone remodeling in chronic maxillary sinusitis. BMC Med Imaging 2024;24:140. [PMID: 38858631 PMCID: PMC11165780 DOI: 10.1186/s12880-024-01316-2]
Abstract
OBJECTIVE To construct a deep learning convolutional neural network (CNN) model and a machine learning support vector machine (SVM) model of bone remodeling in chronic maxillary sinusitis (CMS) based on CT image data, in order to improve the accuracy of imaging diagnosis. METHODS Maxillary sinus CT data of 1000 samples from 500 patients treated in our hospital from January 2018 to December 2021 were collected. The first part was the establishment and testing of a chronic maxillary sinusitis detection model using 461 images. The second part was the establishment and testing of a detection model for chronic maxillary sinusitis with bone remodeling using 802 images. The sensitivity, specificity, accuracy, and area under the curve (AUC) value of each test set were recorded. RESULTS For the CMS test set of 93 samples, the sensitivity, specificity, and accuracy were 0.9796, 0.8636, and 0.9247, respectively, with an AUC of 0.94. For the test set of 161 samples of CMS with bone remodeling, the sensitivity, specificity, and accuracy were 0.7353, 0.9685, and 0.9193, respectively, with an AUC of 0.89. CONCLUSION It is feasible to use artificial intelligence methods such as deep learning and machine learning to automatically identify CMS and bone remodeling in MSCT images of the paranasal sinuses, which helps standardize imaging diagnosis and meets the needs of clinical application.
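For reference, the reported test-set metrics follow from a standard confusion-matrix computation. The sketch below uses hypothetical counts (tp=48, fn=1, tn=38, fp=6) that are a reconstruction consistent with the 93-sample CMS test-set figures quoted in the abstract, not counts stated by the authors:

```python
def sens_spec_acc(tp, fn, tn, fp):
    # Standard test-set metrics from confusion-matrix counts.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical counts consistent with the reported 93-sample test set.
sens, spec, acc = sens_spec_acc(tp=48, fn=1, tn=38, fp=6)
print(round(sens, 4), round(spec, 4), round(acc, 4))  # 0.9796 0.8636 0.9247
```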
Affiliation(s)
- Caiyun Zou
- Department of Radiology, Nanjing Tongren Hospital, School of Medicine, Southeast University, No. 2007, Ji Yin Avenue, Jiang Ning District, Nanjing, 211102, PR China
- Hongbo Ji
- Department of Radiology, Nanjing Tongren Hospital, School of Medicine, Southeast University, No. 2007, Ji Yin Avenue, Jiang Ning District, Nanjing, 211102, PR China
- Jie Cui
- Department of Radiology, Nanjing Tongren Hospital, School of Medicine, Southeast University, No. 2007, Ji Yin Avenue, Jiang Ning District, Nanjing, 211102, PR China
- Bo Qian
- Department of Radiology, Nanjing Tongren Hospital, School of Medicine, Southeast University, No. 2007, Ji Yin Avenue, Jiang Ning District, Nanjing, 211102, PR China
- Yu-Chen Chen
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, Nanjing, PR China
- Qingxiang Zhang
- Department of Otolaryngology Head and Neck Surgery, Nanjing Tongren Hospital, School of Medicine, Southeast University, Nanjing, PR China
- Shuangba He
- Department of Otolaryngology Head and Neck Surgery, Nanjing Tongren Hospital, School of Medicine, Southeast University, Nanjing, PR China
- Yang Sui
- School of Statistics and Management, Shanghai University of Finance and Economics, Shanghai, PR China
- Yang Bai
- School of Statistics and Management, Shanghai University of Finance and Economics, Shanghai, PR China
- Yeming Zhong
- Department of Radiology, Nanjing Tongren Hospital, School of Medicine, Southeast University, No. 2007, Ji Yin Avenue, Jiang Ning District, Nanjing, 211102, PR China
- Xu Zhang
- Department of Radiology, Nanjing Tongren Hospital, School of Medicine, Southeast University, No. 2007, Ji Yin Avenue, Jiang Ning District, Nanjing, 211102, PR China
- Ting Ni
- Department of Radiology, Nanjing Tongren Hospital, School of Medicine, Southeast University, No. 2007, Ji Yin Avenue, Jiang Ning District, Nanjing, 211102, PR China
- Zigang Che
- Department of Radiology, Nanjing Tongren Hospital, School of Medicine, Southeast University, No. 2007, Ji Yin Avenue, Jiang Ning District, Nanjing, 211102, PR China.
4. Fang K, Zheng X, Lin X, Dai Z. A comprehensive approach for osteoporosis detection through chest CT analysis and bone turnover markers: harnessing radiomics and deep learning techniques. Front Endocrinol (Lausanne) 2024;15:1296047. [PMID: 38894742 PMCID: PMC11183288 DOI: 10.3389/fendo.2024.1296047]
Abstract
Purpose The main objective of this study is to assess the possibility of using radiomics, deep learning, and transfer learning methods for the analysis of chest CT scans. An additional aim is to combine these techniques with bone turnover markers to identify and screen for osteoporosis in patients. Method A total of 488 patients who had undergone chest CT and bone turnover marker testing, and had known bone mineral density, were included in this study. ITK-SNAP software was used to delineate regions of interest, while radiomics features were extracted using Python. Multiple 2D and 3D deep learning models were trained to identify these regions of interest. The effectiveness of these techniques in screening for osteoporosis in patients was compared. Result Clinical models based on gender, age, and β-cross achieved an accuracy of 0.698 and an AUC of 0.665. Radiomics models, which utilized 14 selected radiomics features, achieved a maximum accuracy of 0.750 and an AUC of 0.739. The test group yielded promising results: the 2D Deep Learning model achieved an accuracy of 0.812 and an AUC of 0.855, while the 3D Deep Learning model performed even better with an accuracy of 0.854 and an AUC of 0.906. Similarly, the 2D Transfer Learning model achieved an accuracy of 0.854 and an AUC of 0.880, whereas the 3D Transfer Learning model exhibited an accuracy of 0.740 and an AUC of 0.737. Overall, the application of 3D deep learning and 2D transfer learning techniques on chest CT scans showed excellent screening performance in the context of osteoporosis. Conclusion Bone turnover markers may not be necessary for osteoporosis screening, as 3D deep learning and 2D transfer learning techniques utilizing chest CT scans proved to be equally effective alternatives.
Affiliation(s)
- Kaibin Fang
- Department of Orthopaedic Surgery, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
- Xiaoling Zheng
- Aviation College, Liming Vocational University, Quanzhou, China
- Xiaocong Lin
- Department of Orthopaedic Surgery, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
- Zhangsheng Dai
- Department of Orthopaedic Surgery, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
5. Armato SG, Katz SI, Frauenfelder T, Jayasekera G, Catino A, Blyth KG, Theodoro T, Rousset P, Nackaerts K, Opitz I. Imaging in pleural mesothelioma: a review of the 16th International Conference of the International Mesothelioma Interest Group. Lung Cancer 2024;193:107832. [PMID: 38875938 DOI: 10.1016/j.lungcan.2024.107832]
Abstract
Imaging continues to gain a greater role in the assessment and clinical management of patients with mesothelioma. This communication summarizes the oral presentations from the imaging session at the 2023 International Conference of the International Mesothelioma Interest Group (iMig), which was held in Lille, France from June 26 to 28, 2023. Topics at this session included an overview of best practices for clinical imaging of mesothelioma as reported by an iMig consensus panel, emerging imaging techniques for surgical planning, radiologic assessment of malignant pleural effusion, a radiomics-based transfer learning model to predict patient response to treatment, automated assessment of early contrast enhancement, and tumor thickness for response assessment in peritoneal mesothelioma.
Affiliation(s)
- Samuel G Armato
- Department of Radiology, The University of Chicago, Chicago, IL, USA.
- Sharyn I Katz
- Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Thomas Frauenfelder
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Zurich, Switzerland
- Geeshath Jayasekera
- Glasgow Pleural Disease Unit, Queen Elizabeth University Hospital, Glasgow, UK and School of Cancer Sciences, University of Glasgow, UK
- Annamaria Catino
- Medical Thoracic Oncology Unit, IRCCS Istituto Tumori "Giovanni Paolo II," Bari, Italy
- Kevin G Blyth
- Cancer Research UK Scotland Centre, Glasgow, UK and Glasgow Pleural Disease Unit, Queen Elizabeth University Hospital, Glasgow, UK and School of Cancer Sciences, University of Glasgow, UK
- Taylla Theodoro
- Institute of Computing, University of Campinas, Campinas, Brazil and Cancer Research UK Scotland Centre, Glasgow, UK
- Pascal Rousset
- Department of Radiology, Lyon Sud University Hospital, Hospices Civils de Lyon, Lyon 1 University, Pierre-Bénite, France
- Kristiaan Nackaerts
- Department of Pulmonology/Respiratory Oncology, KU Leuven, University Hospitals Leuven, Leuven, Belgium
- Isabelle Opitz
- Department of Thoracic Surgery, University Hospital Zurich, Zurich, Switzerland
6. Saha A, Ganie SM, Dutta Pramanik PK, Yadav RK, Mallik S, Zhao Z. Correction: VER-Net: a hybrid transfer learning model for lung cancer detection using CT scan images. BMC Med Imaging 2024;24:128. [PMID: 38822231 PMCID: PMC11140995 DOI: 10.1186/s12880-024-01315-3]
Affiliation(s)
- Anindita Saha
- Department of Computing Science and Engineering, IFTM University, Moradabad, Uttar Pradesh, India
- Shahid Mohammad Ganie
- AI Research Centre, Department of Analytics, School of Business, Woxsen University, Hyderabad, Telangana, 502345, India
- Pijush Kanti Dutta Pramanik
- School of Computer Applications and Technology, Galgotias University, Greater Noida, Uttar Pradesh, 203201, India.
- Rakesh Kumar Yadav
- Department of Computer Science & Engineering, MSOET, Maharishi University of Information Technology, Lucknow, Uttar Pradesh, India
- Saurav Mallik
- Department of Environmental Health, Harvard T. H. Chan School of Public Health, Boston, MA, USA
- Zhongming Zhao
- Center for Precision Health, McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, 77030, USA.
7. Saha A, Ganie SM, Pramanik PKD, Yadav RK, Mallik S, Zhao Z. VER-Net: a hybrid transfer learning model for lung cancer detection using CT scan images. BMC Med Imaging 2024;24:120. [PMID: 38789925 PMCID: PMC11127393 DOI: 10.1186/s12880-024-01238-z]
Abstract
BACKGROUND Lung cancer is the second most common cancer worldwide, with over two million new cases per year. Early identification would allow healthcare practitioners to handle it more effectively. The advancement of computer-aided detection systems has significantly impacted clinical analysis and decision-making for human disease. Machine learning and deep learning techniques are successfully being applied to this end. Owing to several advantages, transfer learning has become popular for disease detection based on image data. METHODS In this work, we build a novel transfer learning model (VER-Net) by stacking three different transfer learning models to detect lung cancer using lung CT scan images. The model is trained to map the CT scan images to four lung cancer classes. Various measures, such as image preprocessing, data augmentation, and hyperparameter tuning, are taken to improve the efficacy of VER-Net. All the models are trained and evaluated on a multiclass chest CT image dataset. RESULTS The experimental results confirm that VER-Net outperformed the eight other transfer learning models it was compared with. VER-Net scored 91%, 92%, 91%, and 91.3% for accuracy, precision, recall, and F1-score, respectively. Compared to the state of the art, VER-Net has better accuracy. CONCLUSION VER-Net is not only effective for lung cancer detection but may also be useful for other diseases for which CT scan images are available.
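The stacking described in the methods can be sketched as concatenating the pooled feature vectors of several pre-trained backbones before a shared classifier head. The sketch below mocks the backbones with seeded random projections; the actual VER-Net backbones, feature dimensions, and fusion details are not given in this abstract, so every name and size here is an assumption:

```python
import numpy as np

def extract_features(image, seed):
    # Stand-in for one pre-trained backbone's pooled feature vector; a real
    # pipeline would run the image through a frozen ImageNet backbone and
    # global-average-pool the output.
    rng = np.random.default_rng(seed)
    return rng.standard_normal(128)

def stacked_features(image):
    # VER-Net-style stacking: run three transfer-learning backbones on the
    # same CT slice and concatenate their features ahead of the classifier
    # head that maps to the four lung cancer classes.
    return np.concatenate([extract_features(image, s) for s in (0, 1, 2)])

feats = stacked_features(np.zeros((224, 224, 3)))
print(feats.shape)  # (384,): 3 backbones x 128 features each
```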
Affiliation(s)
- Anindita Saha
- Department of Computing Science and Engineering, IFTM University, Moradabad, Uttar Pradesh, India
- Shahid Mohammad Ganie
- AI Research Centre, Department of Analytics, School of Business, Woxsen University, Hyderabad, Telangana, 502345, India
- Pijush Kanti Dutta Pramanik
- School of Computer Applications and Technology, Galgotias University, Greater Noida, Uttar Pradesh, 203201, India.
- Rakesh Kumar Yadav
- Department of Computer Science & Engineering, MSOET, Maharishi University of Information Technology, Lucknow, Uttar Pradesh, India
- Saurav Mallik
- Department of Environmental Health, Harvard T. H. Chan School of Public Health, Boston, MA, USA
- Zhongming Zhao
- Center for Precision Health, McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, 77030, USA.
8. Sheng H, Ma L, Samson JF, Liu D. BarlowTwins-CXR: enhancing chest X-ray abnormality localization in heterogeneous data with cross-domain self-supervised learning. BMC Med Inform Decis Mak 2024;24:126. [PMID: 38755563 PMCID: PMC11097466 DOI: 10.1186/s12911-024-02529-9]
Abstract
BACKGROUND Chest X-ray-based abnormality localization, essential in diagnosing various diseases, faces significant clinical challenges due to complex interpretations and the growing workload of radiologists. While recent advances in deep learning offer promising solutions, domain inconsistency in cross-domain transfer learning remains a critical issue that hampers the efficiency and accuracy of diagnostic processes. This study aims to address the domain inconsistency problem and improve the automatic abnormality localization performance of heterogeneous chest X-ray image analysis by developing a self-supervised learning strategy called "BarlowTwins-CXR". METHODS We utilized two publicly available datasets: the NIH Chest X-ray Dataset and VinDr-CXR. The BarlowTwins-CXR approach was conducted in a two-stage training process. Initially, self-supervised pre-training was performed using an adjusted Barlow Twins algorithm on the NIH dataset with a ResNet50 backbone pre-trained on ImageNet. This was followed by supervised fine-tuning on the VinDr-CXR dataset using Faster R-CNN with a Feature Pyramid Network (FPN). The study employed mean Average Precision (mAP) at an Intersection over Union (IoU) of 50% and Area Under the Curve (AUC) for performance evaluation. RESULTS Our experiments showed a significant improvement in model performance with BarlowTwins-CXR. The approach achieved a 3% increase in mAP50 accuracy compared to traditional ImageNet pre-trained models. In addition, the Ablation CAM method revealed enhanced precision in localizing chest abnormalities. The study involved 112,120 images from the NIH dataset and 18,000 images from the VinDr-CXR dataset, indicating robust training and testing samples.
CONCLUSION BarlowTwins-CXR significantly enhances the efficiency and accuracy of chest X-ray image-based abnormality localization, outperforming traditional transfer learning methods and effectively overcoming domain inconsistency in cross-domain scenarios. Our experimental results demonstrate the potential of using self-supervised learning to improve the generalizability of models in medical settings with limited amounts of heterogeneous data. This approach can be instrumental in aiding radiologists, particularly in high-workload environments, offering a promising direction for future AI-driven healthcare solutions.
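For context, the Barlow Twins objective used in the pre-training stage pushes the cross-correlation matrix of two augmented views of a batch toward the identity: the diagonal term enforces invariance, the off-diagonal term reduces redundancy. A minimal numpy sketch of the loss (the λ weight of 5e-3 follows the original Barlow Twins paper, not necessarily the adjusted variant this study uses):

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    # Cross-correlation matrix between batch-normalised embeddings of two
    # augmented views of the same batch of images.
    n = z_a.shape[0]
    z_a = (z_a - z_a.mean(0)) / z_a.std(0)
    z_b = (z_b - z_b.mean(0)) / z_b.std(0)
    c = z_a.T @ z_b / n
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()            # invariance term
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # redundancy term
    return on_diag + lam * off_diag

rng = np.random.default_rng(0)
z = rng.standard_normal((64, 32))  # batch of 64 embeddings, 32 dims
# Identical views are maximally correlated, so their loss is far lower than
# that of two unrelated embedding batches.
print(barlow_twins_loss(z, z) < barlow_twins_loss(z, rng.standard_normal((64, 32))))  # True
```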
Affiliation(s)
- Haoyue Sheng
- Département d'informatique et de recherche opérationnelle, Université de Montréal, 2920 chemin de la Tour, Montréal, H3T 1J4, QC, Canada.
- Mila - Quebec AI Institute, 6666 Rue Saint-Urbain, Montréal, H2S 3H1, QC, Canada.
- Direction des ressources informationnelles, CIUSSS du Centre-Sud-de-l'Île-de-Montréal, 400 Blvd. De Maisonneuve Ouest, Montréal, H3A 1L4, QC, Canada.
- Linrui Ma
- Département d'informatique et de recherche opérationnelle, Université de Montréal, 2920 chemin de la Tour, Montréal, H3T 1J4, QC, Canada
- Mila - Quebec AI Institute, 6666 Rue Saint-Urbain, Montréal, H2S 3H1, QC, Canada
- Jean-François Samson
- Direction des ressources informationnelles, CIUSSS du Centre-Sud-de-l'Île-de-Montréal, 400 Blvd. De Maisonneuve Ouest, Montréal, H3A 1L4, QC, Canada
- Dianbo Liu
- Mila - Quebec AI Institute, 6666 Rue Saint-Urbain, Montréal, H2S 3H1, QC, Canada
- School of Medicine and College of Design and Engineering, National University of Singapore, 21 Lower Kent Ridge Rd, Singapore, 119077, SG, Singapore
9. Magboo VPC, Magboo MSA. SPECT-MPI for Coronary Artery Disease: A Deep Learning Approach. Acta Medica Philippina 2024;58:67-75. [PMID: 38812768 PMCID: PMC11132284 DOI: 10.47895/amp.vi0.7582]
Abstract
Background Worldwide, coronary artery disease (CAD) is a leading cause of mortality and morbidity and remains a top health priority in many countries. A non-invasive imaging modality for the diagnosis of CAD, single photon emission computed tomography-myocardial perfusion imaging (SPECT-MPI), is usually requested by cardiologists as it displays radiotracer distribution in the heart reflecting myocardial perfusion. SPECT-MPI is interpreted visually by a nuclear medicine physician, depends largely on the physician's clinical experience, and shows significant inter-observer variability. Objective The aim of the study is to apply a deep learning approach to the classification of SPECT-MPI for perfusion abnormalities using convolutional neural networks (CNN). Methods A publicly available anonymized SPECT-MPI dataset from a machine learning repository (https://www.kaggle.com/selcankaplan/spect-mpi) was used in this study, involving 192 patients who underwent stress-test-rest Tc99m MPI. An exploratory approach to CNN hyperparameter selection was used to search for the optimum neural network model, with particular focus on various dropouts (0.2, 0.5, 0.7), batch sizes (8, 16, 32, 64), and numbers of dense nodes (32, 64, 128, 256). The base CNN model was also compared with pre-trained CNNs commonly used for medical images, such as VGG16, InceptionV3, DenseNet121, and ResNet50. All simulation experiments were performed in Kaggle using TensorFlow 2.6.0, Keras 2.6.0, and Python 3.7.10. Results The best-performing base CNN model, with 0.7 dropout, batch size 8, and 32 dense nodes, generated the highest normalized Matthews correlation coefficient at 0.909 and obtained 93.75% accuracy, 96.00% sensitivity, 96.00% precision, and 96.00% F1-score. It also obtained higher classification performance than the pre-trained architectures.
Conclusions The results suggest that deep learning approaches using CNN models can be deployed by nuclear medicine physicians in clinical practice to further augment their decision-making in the interpretation of SPECT-MPI tests. These CNN models can serve as a dependable and valid second opinion, a decision-support tool, and teaching material for less-experienced physicians, particularly those still in training. This highlights the clinical utility of deep learning approaches through CNN models in the practice of nuclear cardiology.
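The exploratory hyperparameter selection described above amounts to an exhaustive sweep over the stated grid of 3 × 4 × 4 = 48 combinations. The scoring function below is a toy stand-in for training and validating one CNN per configuration; it is rigged only so that the sweep machinery is runnable, and it happens to pick the same configuration the study reports as best:

```python
from itertools import product

# Hyperparameter grid reported in the study (48 combinations).
dropouts = (0.2, 0.5, 0.7)
batch_sizes = (8, 16, 32, 64)
dense_nodes = (32, 64, 128, 256)

def evaluate(dropout, batch, nodes):
    # Toy stand-in for "train the CNN and return its validation score";
    # the real search would fit a model per combination and compare the
    # normalized Matthews correlation coefficient.
    return dropout - 0.01 * batch - 0.001 * nodes

best = max(product(dropouts, batch_sizes, dense_nodes),
           key=lambda cfg: evaluate(*cfg))
print(best)  # (0.7, 8, 32) under the toy score
```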
Affiliation(s)
- Vincent Peter C Magboo
- Department of Physical Sciences and Mathematics, College of Arts and Sciences, University of the Philippines Manila
- Ma Sheila A Magboo
- Department of Physical Sciences and Mathematics, College of Arts and Sciences, University of the Philippines Manila
10. Chiu YJ. Automated medication verification system (AMVS): System based on edge detection and CNN classification drug on embedded systems. Heliyon 2024;10:e30486. [PMID: 38742071 PMCID: PMC11089321 DOI: 10.1016/j.heliyon.2024.e30486]
Abstract
A novel automated medication verification system (AMVS) aims to address the limitations of manual medication verification among healthcare professionals with high workloads, thereby reducing medication errors in hospitals. The manual medication verification process is time-consuming and prone to errors, especially in healthcare settings with high workloads. The proposed strategy is to streamline and automate this process, enhancing efficiency and reducing medication errors. The system employs deep learning models to swiftly and accurately classify multiple medications within a single image without requiring manual labeling during model construction. It comprises edge detection and classification stages to verify medication types. Unlike previous studies conducted in open spaces, our study takes place in a closed space to minimize the impact of optical changes on image capture. During the experimental process, the system identifies each drug within the image individually by edge detection and uses a classification model to determine each drug type. Our research has successfully developed a fully automated drug recognition system, achieving an accuracy of over 95% in identifying drug types and conducting segmentation analyses. Specifically, the system demonstrates an accuracy of approximately 96% for drug sets containing fewer than ten types and 93% for those with ten types. This verification system builds an image classification model quickly. It holds promising potential for assisting nursing staff during medication verification, thereby reducing the likelihood of medication errors and alleviating the burden on nursing staff.
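The edge-detection stage that isolates each pill before classification can be illustrated with a plain Sobel gradient magnitude. This is a generic sketch; the abstract does not specify which edge detector the system actually uses:

```python
import numpy as np

def sobel_edges(img):
    # Minimal Sobel edge magnitude over a 2-D grayscale image -- a generic
    # stand-in for the stage that outlines each pill before classification.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            out[i, j] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return out

img = np.zeros((10, 10))
img[:, 5:] = 1.0  # vertical step edge, like a pill boundary on a dark tray
edges = sobel_edges(img)
print(edges.max() > 0 and edges[:, 0].max() == 0)  # True: response only at the step
```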
Affiliation(s)
- Yen-Jung Chiu
- Department of Biomedical Engineering, Ming Chuan University, Taoyuan, 333, Taiwan
11. Lee MR, Kao MH, Hsieh YC, Sun M, Tang KT, Wang JY, Ho CC, Shih JY, Yu CJ. Cross-site validation of lung cancer diagnosis by electronic nose with deep learning: a multicenter prospective study. Respir Res 2024;25:203. [PMID: 38730430 PMCID: PMC11084132 DOI: 10.1186/s12931-024-02840-z]
Abstract
BACKGROUND Although the electronic nose (eNose) has been intensively investigated for diagnosing lung cancer, cross-site validation remains a major obstacle, and no such studies had yet been performed. METHODS Patients with lung cancer, as well as healthy control and diseased control groups, were prospectively recruited from two referral centers between 2019 and 2022. Deep learning models for detecting lung cancer from eNose breathprints were developed using a training cohort from one site and then tested on the cohort from the other site. Semi-Supervised Domain-Generalized (Semi-DG) Augmentation (SDA) and Noise-Shift Augmentation (NSA) methods, with or without fine-tuning, were applied to improve performance. RESULTS In this study, 231 participants were enrolled, comprising a training/validation cohort of 168 individuals (90 with lung cancer, 16 healthy controls, and 62 diseased controls) and a test cohort of 63 individuals (28 with lung cancer, 10 healthy controls, and 25 diseased controls). The model achieved satisfactory results in the validation cohort from the same hospital, while directly applying the trained model to the test cohort yielded suboptimal results (AUC: 0.61, 95% CI: 0.47-0.76). Performance improved after applying data augmentation methods to the training cohort (SDA, AUC: 0.89 [0.81-0.97]; NSA, AUC: 0.90 [0.89-1.00]). After additionally applying fine-tuning, performance improved further (SDA plus fine-tuning, AUC: 0.95 [0.89-1.00]; NSA plus fine-tuning, AUC: 0.95 [0.90-1.00]). CONCLUSION Our study revealed that deep learning models developed for eNose breathprints can achieve cross-site validation with data augmentation and fine-tuning. Accordingly, eNose breathprints emerge as a convenient, non-invasive, and potentially generalizable solution for lung cancer detection. CLINICAL TRIAL REGISTRATION This study is not a clinical trial and was therefore not registered.
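The noise-shift augmentation (NSA) idea can be illustrated, at a purely hypothetical level, as jittering the sensor readings and adding per-sensor baseline offsets so that a model trained on one site's eNose tolerates another site's sensor drift. Everything below (function name, noise model, parameters) is an assumption for illustration; the paper's actual NSA procedure may differ:

```python
import numpy as np

def noise_shift_augment(breathprint, rng, sigma=0.05, shift=0.1):
    # Hypothetical sketch: Gaussian jitter on every reading plus a random
    # per-sensor baseline offset, mimicking cross-site sensor drift.
    noise = rng.normal(0.0, sigma, breathprint.shape)
    offset = rng.uniform(-shift, shift, (breathprint.shape[0], 1))
    return breathprint + noise + offset

rng = np.random.default_rng(0)
x = np.ones((8, 100))  # toy breathprint: 8 sensors x 100 time points
aug = noise_shift_augment(x, rng)
print(aug.shape == x.shape)  # True: augmentation preserves the signal layout
```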
Affiliation(s)
- Meng-Rui Lee
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Department of Internal Medicine, National Taiwan University Hospital Hsin-Chu Branch, Hsin-Chu, Taiwan
| | - Mu-Hsiang Kao
- Department. of Electrical Engineering, National Tsing Hua University, No. 101, Sec. 2, Kuang-Fu Road, Hsinchu, 30013, Taiwan
| | - Ya-Chu Hsieh
- Department. of Electrical Engineering, National Tsing Hua University, No. 101, Sec. 2, Kuang-Fu Road, Hsinchu, 30013, Taiwan
| | - Min Sun
- Department. of Electrical Engineering, National Tsing Hua University, No. 101, Sec. 2, Kuang-Fu Road, Hsinchu, 30013, Taiwan.
| | - Kea-Tiong Tang
- Department. of Electrical Engineering, National Tsing Hua University, No. 101, Sec. 2, Kuang-Fu Road, Hsinchu, 30013, Taiwan.
| | - Jann-Yuan Wang
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
| | - Chao-Chi Ho
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
| | - Jin-Yuan Shih
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
| | - Chong-Jen Yu
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Department of Internal Medicine, National Taiwan University Hospital Hsin-Chu Branch, Hsin-Chu, Taiwan
| |
|
12
|
Cheng CT, Ooyang CH, Kang SC, Liao CH. Applications of Deep Learning in Trauma Radiology: A Narrative Review. Biomed J 2024:100743. [PMID: 38679199 DOI: 10.1016/j.bj.2024.100743] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2023] [Revised: 03/26/2024] [Accepted: 04/24/2024] [Indexed: 05/01/2024] Open
Abstract
Diagnostic imaging is essential in modern trauma care for initial evaluation and identifying injuries requiring intervention. Deep learning (DL) has become mainstream in medical image analysis and has shown promising efficacy for classification, segmentation, and lesion detection. This narrative review provides the fundamental concepts for developing DL algorithms in trauma imaging and presents an overview of current progress in each modality. DL has been applied to detect free fluid on Focused Assessment with Sonography for Trauma (FAST), traumatic findings on chest and pelvic X-rays, and computed tomography (CT) scans, identify intracranial hemorrhage on head CT, detect vertebral fractures, and identify injuries to organs like the spleen, liver, and lungs on abdominal and chest CT. Future directions involve expanding dataset size and diversity through federated learning, enhancing model explainability and transparency to build clinician trust, and integrating multimodal data to provide more meaningful insights into traumatic injuries. Though some commercial artificial intelligence products are Food and Drug Administration-approved for clinical use in the trauma field, adoption remains limited, highlighting the need for multi-disciplinary teams to engineer practical, real-world solutions. Overall, DL shows immense potential to improve the efficiency and accuracy of trauma imaging, but thoughtful development and validation are critical to ensure these technologies positively impact patient care.
Affiliation(s)
- Chi-Tung Cheng
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
| | - Chun-Hsiang Ooyang
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
| | - Shih-Ching Kang
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
| | - Chien-Hung Liao
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Chang Gung University, Taoyuan, Taiwan
| |
|
13
|
Gu C, Lee M. Deep Transfer Learning Using Real-World Image Features for Medical Image Classification, with a Case Study on Pneumonia X-ray Images. Bioengineering (Basel) 2024; 11:406. [PMID: 38671827 PMCID: PMC11048359 DOI: 10.3390/bioengineering11040406] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2024] [Revised: 04/14/2024] [Accepted: 04/18/2024] [Indexed: 04/28/2024] Open
Abstract
Deep learning has profoundly influenced various domains, particularly medical image analysis. Traditional transfer learning approaches in this field rely on models pretrained on domain-specific medical datasets, which limits their generalizability and accessibility. In this study, we propose a novel framework called real-world feature transfer learning, which utilizes backbone models initially trained on large-scale general-purpose datasets such as ImageNet. We evaluate the effectiveness and robustness of this approach compared to models trained from scratch, focusing on the task of classifying pneumonia in X-ray images. Our experiments, which included converting grayscale images to RGB format, demonstrate that real-world feature transfer learning consistently outperforms conventional training approaches across various performance metrics. This advancement has the potential to accelerate deep learning applications in medical imaging by leveraging the rich feature representations learned from general-purpose pretrained models. The proposed methodology overcomes the limitations of domain-specific pretrained models, thereby enabling accelerated innovation in medical diagnostics and healthcare. From a mathematical perspective, we formalize the concept of real-world feature transfer learning and provide a rigorous mathematical formulation of the problem. Our experimental results provide empirical evidence supporting the effectiveness of this approach, laying the foundation for further theoretical analysis and exploration. This work contributes to the broader understanding of feature transferability across domains and has significant implications for the development of accurate and efficient models for medical image analysis, even in resource-constrained settings.
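The grayscale-to-RGB conversion mentioned in the abstract is a standard trick for feeding single-channel X-rays to ImageNet-pretrained backbones: replicate the one channel three times. A minimal sketch (shapes and function name are illustrative, not from the paper):

```python
import numpy as np

def grayscale_to_rgb(images):
    """Replicate a single grayscale channel to 3 channels so X-ray
    images match the 3-channel input expected by ImageNet-pretrained
    backbones. Exact preprocessing in the paper may differ."""
    if images.ndim == 3:               # (N, H, W) -> (N, H, W, 1)
        images = images[..., np.newaxis]
    return np.repeat(images, 3, axis=-1)

xrays = np.random.rand(2, 224, 224)    # two toy grayscale images
rgb = grayscale_to_rgb(xrays)
```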
Affiliation(s)
- Chanhoe Gu
- Department of Intelligent Semiconductor Engineering, Chung-Ang University, Seoul 06974, Republic of Korea;
| | - Minhyeok Lee
- Department of Intelligent Semiconductor Engineering, Chung-Ang University, Seoul 06974, Republic of Korea;
- School of Electrical and Electronics Engineering, Chung-Ang University, Seoul 06974, Republic of Korea
| |
|
14
|
Liu W, Wang D, Liu L, Zhou Z. Assessing the Influence of B-US, CDFI, SE, and Patient Age on Predicting Molecular Subtypes in Breast Lesions Using Deep Learning Algorithms. J Ultrasound Med 2024. [PMID: 38581195 DOI: 10.1002/jum.16460] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/11/2023] [Revised: 03/01/2024] [Accepted: 03/25/2024] [Indexed: 04/08/2024]
Abstract
OBJECTIVES Our study aims to investigate the impact of B-mode ultrasound (B-US) imaging, color Doppler flow imaging (CDFI), strain elastography (SE), and patient age on the prediction of molecular subtypes in breast lesions. METHODS In total, 2272 multimodal ultrasound images were collected from 198 patients. The ResNet-18 network was employed to predict four molecular subtypes from the B-US imaging, CDFI, and SE of patients of different ages. All images were split into training and testing datasets in an 80%:20% ratio. Predictive performance on the testing dataset was evaluated with five metrics: mean accuracy, precision, recall, F1-score, and the confusion matrix. RESULTS Based on B-US imaging alone, the test mean accuracy was 74.50%, precision 74.84%, recall 72.48%, and F1-score 0.73. Combining B-US imaging with CDFI increased these to 85.41%, 85.03%, 85.05%, and 0.84, respectively. Integrating B-US imaging with SE changed them to 75.64%, 74.69%, 73.86%, and 0.74, respectively. Using images from patients under 40 years old, the results were 90.48%, 90.88%, 88.47%, and 0.89; for patients 40 years old and above, they were 81.96%, 83.12%, 80.5%, and 0.81, respectively. CONCLUSION Multimodal ultrasound imaging can be used to accurately predict the molecular subtypes of breast lesions. In addition to B-US imaging, CDFI, rather than SE, contributes further to predictive performance. Predictive performance is notably better for patients under 40 years old than for those 40 years old and above.
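The metrics reported above all derive from a confusion matrix over the four subtypes. A self-contained numpy sketch (macro averaging is assumed here; the paper's exact averaging scheme is not stated):

```python
import numpy as np

def macro_metrics(y_true, y_pred, n_classes):
    """Accuracy plus macro-averaged precision/recall/F1 computed from a
    confusion matrix over n_classes subtypes (illustrative only)."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                      # rows: true class, cols: predicted
    tp = np.diag(cm).astype(float)
    col = cm.sum(axis=0)                   # predicted-per-class counts
    row = cm.sum(axis=1)                   # true-per-class counts
    precision = np.divide(tp, col, out=np.zeros_like(tp), where=col > 0)
    recall = np.divide(tp, row, out=np.zeros_like(tp), where=row > 0)
    f1 = np.divide(2 * precision * recall, precision + recall,
                   out=np.zeros_like(tp), where=(precision + recall) > 0)
    return tp.sum() / cm.sum(), precision.mean(), recall.mean(), f1.mean()

acc, prec, rec, f1 = macro_metrics([0, 1, 2, 3, 0], [0, 1, 2, 2, 0], 4)
```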
Affiliation(s)
- Weiyong Liu
- Department of Ultrasound, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
| | - Dongyue Wang
- School of Management, Hefei University of Technology, Hefei, China
- Key Laboratory of Process Optimization and Intelligent Decision-Making, Ministry of Education, Hefei, China
- Ministry of Education Engineering Research Center for Intelligent Decision-Making & Information System Technologies, Hefei, China
| | - Le Liu
- Department of Ultrasound, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
| | - Zhiguo Zhou
- Reliable Intelligence and Medical Innovation Laboratory (RIMI Lab), Department of Biostatistics & Data Science, University of Kansas Medical Center, Kansas City, Kansas, USA
| |
|
15
|
Turon G, Njoroge M, Mulubwa M, Duran-Frigola M, Chibale K. AI can help to tailor drugs for Africa - but Africans should lead the way. Nature 2024; 628:265-267. [PMID: 38594395 DOI: 10.1038/d41586-024-01001-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/11/2024]
|
16
|
Carter D, Bykhovsky D, Hasky A, Mamistvalov I, Zimmer Y, Ram E, Hoffer O. Convolutional neural network deep learning model accurately detects rectal cancer in endoanal ultrasounds. Tech Coloproctol 2024; 28:44. [PMID: 38561492 PMCID: PMC10984882 DOI: 10.1007/s10151-024-02917-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/22/2023] [Accepted: 03/06/2024] [Indexed: 04/04/2024]
Abstract
BACKGROUND Imaging is vital for assessing rectal cancer, with endoanal ultrasound (EAUS) being highly accurate in large tertiary medical centers. However, EAUS accuracy drops outside such settings, possibly due to varied examiner experience and fewer examinations. This underscores the need for an AI-based system to enhance accuracy in non-specialized centers. This study aimed to develop and validate deep learning (DL) models to differentiate rectal cancer in standard EAUS images. METHODS A transfer learning approach with fine-tuned DL architectures was employed, utilizing a dataset of 294 images. The performance of DL models was assessed through a tenfold cross-validation. RESULTS The DL diagnostics model exhibited a sensitivity and accuracy of 0.78 each. In the identification phase, the automatic diagnostic platform achieved an area under the curve performance of 0.85 for diagnosing rectal cancer. CONCLUSIONS This research demonstrates the potential of DL models in enhancing rectal cancer detection during EAUS, especially in settings with lower examiner experience. The achieved sensitivity and accuracy suggest the viability of incorporating AI support for improved diagnostic outcomes in non-specialized medical centers.
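The tenfold cross-validation used above partitions the 294 EAUS images into ten folds, each serving once as the held-out set. A generic index-splitting sketch (not the authors' code; fold sizes differ by at most one):

```python
import numpy as np

def kfold_indices(n_samples, k=10, seed=0):
    """Yield (train, test) index arrays for k-fold cross-validation
    over a shuffled range of n_samples indices."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n_samples), k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

# ten splits over the 294-image dataset size mentioned in the abstract
splits = list(kfold_indices(294, k=10))
```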
Affiliation(s)
- D Carter
- Department of Gastroenterology, Chaim Sheba Medical Center, Ramat Gan, Israel.
- Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel.
| | - D Bykhovsky
- Electrical and Electronics Engineering Department, Shamoon College of Engineering, Beer-Sheba, Israel
| | - A Hasky
- School of Electrical Engineering, Afeka College of Engineering, Tel Aviv, Israel
| | - I Mamistvalov
- School of Electrical Engineering, Afeka College of Engineering, Tel Aviv, Israel
| | - Y Zimmer
- School of Medical Engineering, Afeka College of Engineering, Tel Aviv, Israel
| | - E Ram
- Department of Gastroenterology, Chaim Sheba Medical Center, Ramat Gan, Israel
- Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
| | - O Hoffer
- School of Electrical Engineering, Afeka College of Engineering, Tel Aviv, Israel
| |
|
17
|
Bottomly D, McWeeney S. Just how transformative will AI/ML be for immuno-oncology? J Immunother Cancer 2024; 12:e007841. [PMID: 38531545 DOI: 10.1136/jitc-2023-007841] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/15/2024] [Indexed: 03/28/2024] Open
Abstract
Immuno-oncology involves the study of approaches which harness the patient's immune system to fight malignancies. Immuno-oncology, as with every other biomedical and clinical research field as well as clinical operations, is in the midst of technological revolutions, which vastly increase the amount of available data. Recent advances in artificial intelligence and machine learning (AI/ML) have received much attention in terms of their potential to harness available data to improve insights and outcomes in many areas, including immuno-oncology. In this review, we discuss important aspects to consider when evaluating the potential impact of AI/ML applications in the clinic. We highlight four clinical/biomedical challenges relevant to immuno-oncology and how they may be addressed by the latest advancements in AI/ML. These challenges include (1) efficiency in clinical workflows, (2) curation of high-quality image data, (3) finding, extracting, and synthesizing text knowledge, and (4) addressing small cohort sizes in immunotherapeutic evaluation cohorts. Finally, we outline how advancements in reinforcement and federated learning, as well as the development of best practices for ethical and unbiased data generation, are likely to drive future innovations.
Affiliation(s)
- Daniel Bottomly
- Knight Cancer Institute, Oregon Health and Science University, Portland, Oregon, USA
| | - Shannon McWeeney
- Knight Cancer Institute, Oregon Health and Science University, Portland, Oregon, USA
| |
|
18
|
Alzubaidi L, Salhi A, Fadhel MA, Bai J, Hollman F, Italia K, Pareyon R, Albahri AS, Ouyang C, Santamaría J, Cutbush K, Gupta A, Abbosh A, Gu Y. Trustworthy deep learning framework for the detection of abnormalities in X-ray shoulder images. PLoS One 2024; 19:e0299545. [PMID: 38466693 PMCID: PMC10927121 DOI: 10.1371/journal.pone.0299545] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2023] [Accepted: 02/12/2024] [Indexed: 03/13/2024] Open
Abstract
Musculoskeletal conditions affect an estimated 1.7 billion people worldwide, causing intense pain and disability. These conditions lead to 30 million emergency room visits yearly, and the numbers are only increasing. However, diagnosing musculoskeletal issues can be challenging, especially in emergencies where quick decisions are necessary. Deep learning (DL) has shown promise in various medical applications. However, previous methods performed poorly and lacked transparency in detecting shoulder abnormalities on X-ray images, owing to limited training data and inadequate feature representation. This often resulted in overfitting, poor generalisation, and potential bias in decision-making. To address these issues, a new trustworthy DL framework has been proposed to detect shoulder abnormalities (such as fractures, deformities, and arthritis) using X-ray images. The framework consists of two parts: same-domain transfer learning (TL) to mitigate the ImageNet domain mismatch, and feature fusion to reduce error rates and improve trust in the final result. Same-domain TL involves training pre-trained models on a large number of labelled X-ray images from various body parts and fine-tuning them on the target dataset of shoulder X-ray images. Feature fusion combines the features extracted by seven DL models to train several ML classifiers. The proposed framework achieved an excellent accuracy of 99.2%, F1-score of 99.2%, and Cohen's kappa of 98.5%. Furthermore, the results were validated using three visualisation tools: gradient-weighted class activation mapping (Grad-CAM), activation visualisation, and local interpretable model-agnostic explanations (LIME). The proposed framework outperformed previous DL methods and three orthopaedic surgeons invited to classify the test set, who obtained an average accuracy of 79.1%. The proposed framework has proven effective and robust, improving generalisation and increasing trust in the final results.
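The feature-fusion step described above, combining features extracted by several backbones before training classical ML classifiers, can be as simple as per-sample concatenation. A sketch with made-up feature dimensions (the paper's seven backbones and exact fusion rule are not reproduced here):

```python
import numpy as np

def fuse_features(feature_sets):
    """Concatenate per-model feature vectors along the feature axis,
    one common form of feature fusion before a downstream classifier."""
    return np.concatenate(feature_sets, axis=1)

# features from three hypothetical DL backbones for 5 X-ray images
f1 = np.random.rand(5, 512)
f2 = np.random.rand(5, 1024)
f3 = np.random.rand(5, 256)
fused = fuse_features([f1, f2, f3])   # feed this to an ML classifier
```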
Affiliation(s)
- Laith Alzubaidi
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD, Australia
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- Centre for Data Science, Queensland University of Technology, Brisbane, QLD, Australia
- Akunah Medical Technology Pty Ltd Company, Brisbane, QLD, Australia
| | - Asma Salhi
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- Akunah Medical Technology Pty Ltd Company, Brisbane, QLD, Australia
| | | | - Jinshuai Bai
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD, Australia
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
| | - Freek Hollman
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
| | - Kristine Italia
- Akunah Medical Technology Pty Ltd Company, Brisbane, QLD, Australia
| | - Roberto Pareyon
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
| | - A. S. Albahri
- Technical College, Imam Ja’afar Al-Sadiq University, Baghdad, Iraq
| | - Chun Ouyang
- School of Information Systems, Queensland University of Technology, Brisbane, QLD, Australia
| | - Jose Santamaría
- Department of Computer Science, University of Jaén, Jaén, Spain
| | - Kenneth Cutbush
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- School of Medicine, The University of Queensland, Brisbane, QLD, Australia
| | - Ashish Gupta
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- Akunah Medical Technology Pty Ltd Company, Brisbane, QLD, Australia
- Greenslopes Private Hospital, Brisbane, QLD, Australia
| | - Amin Abbosh
- School of Information Technology and Electrical Engineering, Brisbane, QLD, Australia
| | - Yuantong Gu
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD, Australia
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
| |
|
19
|
Shou Q, Zhao C, Shao X, Herting MM, Wang DJ. High Resolution Multi-delay Arterial Spin Labeling with Transformer based Denoising for Pediatric Perfusion MRI. medRxiv 2024:2024.03.04.24303727. [PMID: 38496517 PMCID: PMC10942515 DOI: 10.1101/2024.03.04.24303727] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/19/2024]
Abstract
Multi-delay arterial spin labeling (MDASL) can quantitatively measure cerebral blood flow (CBF) and arterial transit time (ATT), which makes it particularly suitable for pediatric perfusion imaging. Here we present a high-resolution (isotropic 2 mm) MDASL protocol and performed test-retest scans on 21 typically developing children aged 8 to 17 years. We further propose a Transformer-based deep learning (DL) model trained with k-space weighted image average (KWIA)-denoised images as reference. The performance of the model was evaluated by the SNR of the perfusion images, as well as the SNR, bias, and repeatability of the fitted CBF and ATT maps. The proposed method was compared to several benchmark methods, including KWIA, joint denoising and reconstruction with total generalized variation (TGV) regularization, and direct application of a Transformer model pretrained on a larger dataset. The results show that the proposed Transformer model with KWIA reference can effectively denoise multi-delay ASL images, improving the SNR not only of the perfusion images at each delay but also of the fitted CBF and ATT maps. The proposed method also improved the test-retest repeatability of whole-brain perfusion measurements. This may facilitate the use of MDASL in neurodevelopmental studies to characterize typical and aberrant brain development.
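KWIA serves here only as the denoising reference; its underlying idea is to share low-spatial-frequency k-space across delays (averaging suppresses noise) while keeping high frequencies from the target delay to preserve resolution. A rough numpy sketch of that idea, with a single circular low-frequency mask as a simplifying assumption (the actual KWIA method uses ring-wise weighting):

```python
import numpy as np

def kwia_like_denoise(images, target, radius=0.25):
    """Loose sketch of k-space weighted image averaging: replace the
    central (low-frequency) k-space of the target delay with the mean
    across all delays, keep its own high frequencies, then invert."""
    k = np.fft.fftshift(np.fft.fft2(images, axes=(-2, -1)), axes=(-2, -1))
    ny, nx = images.shape[-2:]
    y, x = np.ogrid[:ny, :nx]
    r = np.hypot(y - ny / 2, x - nx / 2) / (min(ny, nx) / 2)
    low = r <= radius                       # central k-space mask
    k_out = k[target].copy()
    k_out[low] = k[:, low].mean(axis=0)     # average centre across delays
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_out)))

stack = np.random.rand(5, 64, 64)           # 5 toy post-label delays
denoised = kwia_like_denoise(stack, target=2)
```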
Affiliation(s)
- Qinyang Shou
- University of Southern California, Los Angeles, California 90033 USA
| | - Chenyang Zhao
- University of Southern California, Los Angeles, California 90033 USA
| | - Xingfeng Shao
- University of Southern California, Los Angeles, California 90033 USA
| | - Megan M Herting
- University of Southern California, Los Angeles, California 90033 USA
| | - Danny JJ Wang
- University of Southern California, Los Angeles, California 90033 USA
| |
|
20
|
Shao X, Ge X, Gao J, Niu R, Shi Y, Shao X, Jiang Z, Li R, Wang Y. Transfer learning-based PET/CT three-dimensional convolutional neural network fusion of image and clinical information for prediction of EGFR mutation in lung adenocarcinoma. BMC Med Imaging 2024; 24:54. [PMID: 38438844 PMCID: PMC10913633 DOI: 10.1186/s12880-024-01232-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2023] [Accepted: 02/21/2024] [Indexed: 03/06/2024] Open
Abstract
BACKGROUND To introduce a three-dimensional convolutional neural network (3D CNN) leveraging transfer learning to fuse PET/CT images and clinical data for predicting EGFR mutation status in lung adenocarcinoma (LADC). METHODS Retrospective data from 516 LADC patients, encompassing preoperative PET/CT images, clinical information, and EGFR mutation status, were divided into training (n = 404) and test (n = 112) sets. Several deep learning models were developed using transfer learning, including CT-only and PET-only models. A dual-stream model fusing PET and CT, and a three-stream transfer learning model (TS_TL) additionally integrating clinical data, were also developed. Image preprocessing included semi-automatic segmentation, resampling, and image cropping. Considering the impact of class imbalance, model performance was evaluated using ROC curves and AUC values. RESULTS The TS_TL model demonstrated promising performance in predicting EGFR mutation status, with an AUC of 0.883 (95% CI: 0.849–0.917) in the training set and 0.730 (95% CI: 0.629–0.830) in the independent test set. In advanced LADC in particular, the model achieved an AUC of 0.871 (95% CI: 0.823–0.919) in the training set and 0.760 (95% CI: 0.638–0.881) in the test set. The model identified distinct activation areas in solid or subsolid lesions associated with wild-type and mutant tumors. Additionally, the patterns captured by the model were significantly altered by effective tyrosine kinase inhibitor treatment, leading to notable changes in predicted mutation probabilities. CONCLUSION A PET/CT deep learning model can act as a tool for predicting EGFR mutation in LADC. Additionally, it offers clinicians insights for treatment decisions through evaluations both before and after treatment.
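The final stage of a three-stream fusion model of this kind typically concatenates PET features, CT features, and clinical variables before a classification head. A toy forward pass of that stage, with layer sizes, weights, and the sigmoid head all illustrative rather than the paper's architecture:

```python
import numpy as np

def three_stream_fusion(pet_feat, ct_feat, clinical, w, b=0.0):
    """Concatenate PET, CT, and clinical features, then apply a linear
    layer with sigmoid to score EGFR-mutation probability (sketch)."""
    fused = np.concatenate([pet_feat, ct_feat, clinical], axis=1)
    logits = fused @ w + b
    return 1.0 / (1.0 + np.exp(-logits))

rng = np.random.default_rng(0)
pet = rng.random((4, 128))      # pooled PET-stream features, 4 patients
ct = rng.random((4, 128))       # pooled CT-stream features
clin = rng.random((4, 8))       # e.g. age, sex, smoking status (toy)
w = rng.normal(size=(264, 1)) * 0.01
prob = three_stream_fusion(pet, ct, clin, w)
```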
Affiliation(s)
- Xiaonan Shao
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China.
- Institute of Clinical Translation of Nuclear Medicine and Molecular Imaging, Soochow University, Changzhou, 213003, China.
| | - Xinyu Ge
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China
- Institute of Clinical Translation of Nuclear Medicine and Molecular Imaging, Soochow University, Changzhou, 213003, China
| | - Jianxiong Gao
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China
- Institute of Clinical Translation of Nuclear Medicine and Molecular Imaging, Soochow University, Changzhou, 213003, China
| | - Rong Niu
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China
- Institute of Clinical Translation of Nuclear Medicine and Molecular Imaging, Soochow University, Changzhou, 213003, China
| | - Yunmei Shi
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China
- Institute of Clinical Translation of Nuclear Medicine and Molecular Imaging, Soochow University, Changzhou, 213003, China
| | - Xiaoliang Shao
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China
- Institute of Clinical Translation of Nuclear Medicine and Molecular Imaging, Soochow University, Changzhou, 213003, China
| | - Zhenxing Jiang
- Department of Radiology, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China
| | - Renyuan Li
- Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou, 310009, China
- Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, 310058, China
| | - Yuetao Wang
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China.
- Institute of Clinical Translation of Nuclear Medicine and Molecular Imaging, Soochow University, Changzhou, 213003, China.
| |
|
21
|
Adeoye J, Su YX. Leveraging artificial intelligence for perioperative cancer risk assessment of oral potentially malignant disorders. Int J Surg 2024; 110:1677-1686. [PMID: 38051932 PMCID: PMC10942172 DOI: 10.1097/js9.0000000000000979] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2023] [Accepted: 11/21/2023] [Indexed: 12/07/2023]
Abstract
Oral potentially malignant disorders (OPMDs) are mucosal conditions with an inherent disposition to develop into oral squamous cell carcinoma. Surgical management is the preferred strategy to prevent malignant transformation in OPMDs, and surgical approaches to treatment include conventional scalpel excision, laser surgery, cryotherapy, and photodynamic therapy. However, since not all patients with OPMDs will develop oral squamous cell carcinoma in their lifetime, patients need to be stratified according to their risk of malignant transformation to streamline surgical intervention for those at highest risk. Artificial intelligence (AI) has the potential to integrate the disparate factors influencing malignant transformation, enabling more robust, precise, and personalized cancer risk stratification of OPMD patients than current methods allow when determining the need for surgical resection, excision, or re-excision. Therefore, this article overviews existing AI models and tools, presents a clinical implementation pathway, and discusses the refinements necessary for the clinical application of AI-based platforms for cancer risk stratification of OPMDs in surgical practice.
Affiliation(s)
| | - Yu-Xiong Su
- Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, University of Hong Kong, Hong Kong SAR, People’s Republic of China
| |
|
22
|
Vorwerk P, Kelleter J, Müller S, Krause U. Classification in Early Fire Detection Using Multi-Sensor Nodes-A Transfer Learning Approach. Sensors (Basel) 2024; 24:1428. [PMID: 38474964 DOI: 10.3390/s24051428] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/17/2024] [Revised: 02/02/2024] [Accepted: 02/21/2024] [Indexed: 03/14/2024]
Abstract
Effective early fire detection is crucial for preventing damage to people and buildings, especially in fire-prone historic structures. However, because fire events occur infrequently throughout a building's lifespan, real-world data for training models are often sparse. In this study, we applied feature representation transfer and instance transfer in the context of early fire detection using multi-sensor nodes. The goal was to investigate whether training data from a small-scale setup (source domain) can be used to identify various incipient fire scenarios in their early stages within a full-scale test room (target domain). In a first step, we employed Linear Discriminant Analysis (LDA) to create a new feature space based solely on the source-domain data and predicted four different fire types (smoldering wood, smoldering cotton, smoldering cable, and candle fire) in the target domain with a classification rate of up to 69% and a Cohen's kappa of 0.58. Notably, lower classification performance was observed for sensor node positions close to the wall in the full-scale test room. In a second experiment, we applied the TrAdaBoost algorithm, a common instance transfer technique, to adapt the model to the target domain, assuming that sparse information from the target domain is available. Boosting with 1% to 30% of the data from individual sensor node positions in the target domain was used to adapt the model. We found that additional boosting improved classification performance (average classification rate of 73% and average Cohen's kappa of 0.63). However, excessively boosting the data could lead to overfitting to a specific sensor node position in the target domain, reducing overall classification performance.
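The core of TrAdaBoost is its asymmetric reweighting: source-domain instances the current weak learner misclassifies are down-weighted, while misclassified target-domain instances are up-weighted. A sketch of one such update step, with 0/1 error indicators and illustrative hyperparameters (not the authors' implementation):

```python
import numpy as np

def tradaboost_update(w_src, w_tgt, err_src, err_tgt, eps_t, n_rounds):
    """One TrAdaBoost-style reweighting step. err_* are 0/1
    misclassification indicators; eps_t is the weighted target-domain
    error of the current learner, assumed < 0.5."""
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(len(w_src)) / n_rounds))
    beta_tgt = eps_t / (1.0 - eps_t)
    w_src = w_src * beta_src ** err_src     # shrink misclassified source
    w_tgt = w_tgt * beta_tgt ** (-err_tgt)  # grow misclassified target
    total = w_src.sum() + w_tgt.sum()
    return w_src / total, w_tgt / total

w_s, w_t = tradaboost_update(np.full(5, 0.1), np.full(5, 0.1),
                             np.array([1, 0, 0, 0, 0]),   # source errors
                             np.array([0, 1, 0, 0, 0]),   # target errors
                             eps_t=0.2, n_rounds=10)
```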
Affiliation(s)
- Pascal Vorwerk
- Faculty of Process- and Systems Engineering, Institute of Apparatus and Environmental Technology, Otto von Guericke University of Magdeburg, Universitätsplatz 2, 39106 Magdeburg, Germany
| | - Jörg Kelleter
- GTE Industrieelektronik GmbH, Helmholtzstr. 21, 38-40, 41747 Viersen, Germany
| | - Steffen Müller
- GTE Industrieelektronik GmbH, Helmholtzstr. 21, 38-40, 41747 Viersen, Germany
| | - Ulrich Krause
- Faculty of Process- and Systems Engineering, Institute of Apparatus and Environmental Technology, Otto von Guericke University of Magdeburg, Universitätsplatz 2, 39106 Magdeburg, Germany
| |
|
23
|
Ullah MS, Khan MA, Masood A, Mzoughi O, Saidani O, Alturki N. Brain tumor classification from MRI scans: a framework of hybrid deep learning model with Bayesian optimization and quantum theory-based marine predator algorithm. Front Oncol 2024; 14:1335740. [PMID: 38390266 PMCID: PMC10882068 DOI: 10.3389/fonc.2024.1335740] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2023] [Accepted: 01/12/2024] [Indexed: 02/24/2024] Open
Abstract
Brain tumor classification is one of the most difficult tasks in clinical diagnosis and treatment within medical image analysis. Errors that occur during the brain tumor diagnosis process may result in a shorter life span for the patient. Nevertheless, most currently used techniques focus on extracting and selecting deep features while ignoring certain features of particular significance and relevance to the classification problem. One important area of research is the deep learning-based categorization of brain tumors using brain magnetic resonance imaging (MRI). This paper proposes an automated deep learning model and an optimal information fusion framework for classifying brain tumors from MRI images. The dataset used in this work was imbalanced, a key challenge for training the selected networks: imbalance in the training dataset biases classifier performance in favor of the majority class. We designed a sparse autoencoder network to generate new images that resolve the imbalance problem. Two pretrained neural networks were then modified, and their hyperparameters were initialized using Bayesian optimization before training. Deep features were subsequently extracted from the global average pooling layer. Because the extracted features contain some irrelevant information, we proposed an improved Quantum Theory-based Marine Predator Optimization algorithm (QTbMPA). The proposed QTbMPA selects the best features of both networks, which are finally fused using a serial-based approach. The fused feature set is passed to neural network classifiers for the final classification. The proposed framework was tested on an augmented Figshare dataset and achieved an improved accuracy of 99.80%, a sensitivity of 99.83%, a false negative rate of 0.17%, and a precision of 99.83%.
A comparison and ablation study confirm the accuracy improvement of this work.
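The serial fusion step described above — concatenating the selected deep-feature vectors of the two networks into one feature set before classification — can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation; the function name and toy shapes are assumptions.

```python
import numpy as np

def serial_fuse(feats_a, feats_b):
    """Serially fuse per-sample feature vectors from two networks
    by concatenating them along the feature axis."""
    feats_a = np.asarray(feats_a, dtype=float)
    feats_b = np.asarray(feats_b, dtype=float)
    if feats_a.shape[0] != feats_b.shape[0]:
        raise ValueError("both feature sets must cover the same samples")
    return np.concatenate([feats_a, feats_b], axis=1)

# Toy example: 4 samples, 3 selected features from network A, 2 from network B.
a = np.random.rand(4, 3)
b = np.random.rand(4, 2)
fused = serial_fuse(a, b)
print(fused.shape)  # (4, 5)
```

The fused matrix would then be fed to the final neural network classifiers.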
Collapse
Affiliation(s)
| | | | - Anum Masood
- Department of Physics, Norwegian University of Science and Technology, Trondheim, Norway
| | - Olfa Mzoughi
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
| | - Oumaima Saidani
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
| | - Nazik Alturki
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
| |
Collapse
|
24
|
Fan L, Gong X, Zheng C, Li J. Data pyramid structure for optimizing EUS-based GISTs diagnosis in multi-center analysis with missing label. Comput Biol Med 2024; 169:107897. [PMID: 38171262 DOI: 10.1016/j.compbiomed.2023.107897] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2023] [Revised: 12/04/2023] [Accepted: 12/23/2023] [Indexed: 01/05/2024]
Abstract
This study introduces the Data Pyramid Structure (DPS) to address data sparsity and missing labels in medical image analysis. The DPS optimizes multi-task learning and enables sustainable expansion of multi-center data analysis. Specifically, it facilitates attribute prediction and malignant tumor diagnosis tasks by implementing a segmentation and aggregation strategy on data with absent attribute labels. To leverage multi-center data, we propose the Unified Ensemble Learning Framework (UELF) and the Unified Federated Learning Framework (UFLF), which incorporate strategies for data transfer and incremental learning in scenarios with missing labels. The proposed method was evaluated on a challenging EUS patient dataset from five centers, achieving promising diagnostic performance. The average accuracy was 0.984 with an AUC of 0.927 for multi-center analysis, surpassing state-of-the-art approaches. The interpretability of the predictions further highlights the potential clinical relevance of our method.
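The abstract does not detail the UFLF aggregation rule; as a hedged illustration only, a standard FedAvg-style round — averaging each center's model parameters weighted by its sample count — could look like the following sketch (all names and the single-parameter "model" are assumptions):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """One aggregation round: average client model parameters,
    weighted by each center's sample count (FedAvg-style)."""
    total = float(sum(client_sizes))
    avg = [np.zeros_like(w, dtype=float) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            avg[i] += (n / total) * np.asarray(w, dtype=float)
    return avg

# Two centers with a one-parameter "model" (values 1.0 and 4.0),
# holding 10 and 30 samples respectively:
global_w = federated_average([[np.array([1.0])], [np.array([4.0])]], [10, 30])
print(global_w[0])  # [3.25]
```

Weighting by sample count keeps larger centers from being diluted by smaller ones; the actual UFLF additionally handles missing labels, which this sketch omits.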
Collapse
Affiliation(s)
- Lin Fan
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, Sichuan 611756, China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, China; National Engineering Laboratory of Integrated Transportation Big Data Application Technology, China
| | - Xun Gong
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, Sichuan 611756, China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, China; National Engineering Laboratory of Integrated Transportation Big Data Application Technology, China.
| | - Cenyang Zheng
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, Sichuan 611756, China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, China; National Engineering Laboratory of Integrated Transportation Big Data Application Technology, China
| | - Jiao Li
- Department of Gastroenterology, The Third People's Hospital of Chendu, Affiliated Hospital of Southwest Jiaotong University, Chengdu 610031, China
| |
Collapse
|
25
|
Klontzas ME, Vassalou EE, Spanakis K, Meurer F, Woertler K, Zibis A, Marias K, Karantanas AH. Deep learning enables the differentiation between early and late stages of hip avascular necrosis. Eur Radiol 2024; 34:1179-1186. [PMID: 37581656 PMCID: PMC10853078 DOI: 10.1007/s00330-023-10104-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Revised: 06/28/2023] [Accepted: 07/10/2023] [Indexed: 08/16/2023]
Abstract
OBJECTIVES To develop a deep learning methodology that distinguishes early from late stages of avascular necrosis of the hip (AVN) to inform treatment decisions. METHODS Three convolutional neural networks (CNNs), VGG-16, Inception-ResnetV2, and InceptionV3, were trained with transfer learning (ImageNet) and fine-tuned on a retrospectively collected cohort (n = 104) of MRI examinations of AVN patients to differentiate between early (ARCO 1-2) and late (ARCO 3-4) stages. A consensus CNN ensemble decision was recorded as the agreement of at least two CNNs. CNN and ensemble performance was benchmarked on an independent cohort of 49 patients from another country and compared with the performance of two MSK radiologists. CNN performance was expressed with areas under the curve (AUC), the respective 95% confidence intervals (CIs), precision, recall, and f1-scores. AUCs were compared with DeLong's test. RESULTS On internal testing, Inception-ResnetV2 achieved the highest individual performance with an AUC of 99.7% (95%CI 99-100%), followed by InceptionV3 and VGG-16 with AUCs of 99.3% (95%CI 98.4-100%) and 97.3% (95%CI 95.5-99.2%), respectively. The CNN ensemble achieved the same AUC as Inception-ResnetV2. On external validation, model performance dropped, with VGG-16 achieving the highest individual AUC of 78.9% (95%CI 51.6-79.6%). The best external performance was achieved by the model ensemble with an AUC of 85.5% (95%CI 72.2-93.9%). No significant difference was found between the CNN ensemble and the expert MSK radiologists (p = 0.22 and 0.092, respectively). CONCLUSION An externally validated CNN ensemble accurately distinguishes between the early and late stages of AVN and has comparable performance to expert MSK radiologists.
CLINICAL RELEVANCE STATEMENT This paper introduces the use of deep learning for the differentiation between early and late avascular necrosis of the hip, assisting in a complex clinical decision that can determine the choice between conservative and surgical treatment. KEY POINTS • A convolutional neural network ensemble achieved excellent performance in distinguishing between early and late avascular necrosis. • The performance of the deep learning method was similar to the performance of expert readers.
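The consensus rule described above — a positive ensemble call when at least two of the three CNNs agree — reduces to a simple majority vote. A minimal sketch (function name assumed, not the authors' code):

```python
def ensemble_vote(predictions):
    """Consensus decision: positive ("late stage") when at least
    two of the three CNN binary predictions are positive."""
    if len(predictions) != 3:
        raise ValueError("expected one prediction per CNN (three total)")
    return int(sum(predictions) >= 2)

# Each element is one CNN's binary call (1 = late stage, ARCO 3-4).
print(ensemble_vote([1, 0, 1]))  # 1 -> late stage
print(ensemble_vote([0, 0, 1]))  # 0 -> early stage
```

Majority voting of this kind often trades a little internal accuracy for robustness on external data, which matches the ensemble's stronger external-validation AUC reported here.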
Collapse
Affiliation(s)
- Michail E Klontzas
- Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece
- Department of Medical Imaging, University Hospital of Heraklion, 71110, Voutes, Crete, Greece
- Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece
- Advanced Hybrid Imaging Systems, Institute of Computer Science, Foundation for Research and Technology (FORTH), Nikolaou Plastira 100, 70013, Heraklion, Crete, Greece
| | - Evangelia E Vassalou
- Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece
| | - Konstantinos Spanakis
- Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece
| | - Felix Meurer
- Musculoskeletal Radiology Section, TUM School of Medicine, Technical University of Munich, Ismaninger Str 22, 81675, Munich, Germany
| | - Klaus Woertler
- Musculoskeletal Radiology Section, TUM School of Medicine, Technical University of Munich, Ismaninger Str 22, 81675, Munich, Germany
| | - Aristeidis Zibis
- Department of Anatomy, Medical School, University of Thessaly, Neofytou 9 St., 41223, Larissa, Greece
| | - Kostas Marias
- Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece
- Department of Electrical & Computer Engineering, Hellenic Mediterranean University, Heraklion, Crete, Greece
| | - Apostolos H Karantanas
- Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece.
- Department of Medical Imaging, University Hospital of Heraklion, 71110, Voutes, Crete, Greece.
- Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece.
- Advanced Hybrid Imaging Systems, Institute of Computer Science, Foundation for Research and Technology (FORTH), Nikolaou Plastira 100, 70013, Heraklion, Crete, Greece.
| |
Collapse
|
26
|
Führer F, Gruber A, Diedam H, Göller AH, Menz S, Schneckener S. A deep neural network: mechanistic hybrid model to predict pharmacokinetics in rat. J Comput Aided Mol Des 2024; 38:7. [PMID: 38294570 DOI: 10.1007/s10822-023-00547-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2023] [Accepted: 12/21/2023] [Indexed: 02/01/2024]
Abstract
An important aspect in the development of small molecules as drugs or agrochemicals is their systemic availability after intravenous and oral administration. Predicting the systemic availability from the chemical structure of a potential candidate is highly desirable, as it allows drug or agrochemical development to focus on compounds with a favorable kinetic profile. However, such predictions are challenging, as the availability results from a complex interplay between molecular properties, biology, and physiology, and training data are scarce. In this work we improve the hybrid model developed earlier (Schneckener in J Chem Inf Model 59:4893-4905, 2019). We reduce the median fold-change error for the total oral exposure from 2.85 to 2.35 and for intravenous administration from 1.95 to 1.62. This is achieved by training on a larger data set and improving the neural network architecture as well as the parametrization of the mechanistic model. Further, we extend our approach to predict additional endpoints and to handle different covariates, such as sex and dosage form. In contrast to a pure machine learning model, our model is able to predict new endpoints on which it has not been trained. We demonstrate this feature by predicting the exposure over the first 24 h, while the model has only been trained on the total exposure.
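The endpoint mentioned last, exposure over the first 24 h, is conventionally the area under the concentration-time curve up to 24 h (AUC0-24). As a hedged illustration unrelated to the authors' code, it can be computed from a sampled concentration profile with the linear trapezoidal rule:

```python
import numpy as np

def auc_up_to(times_h, conc, t_max=24.0):
    """Area under the concentration-time curve up to t_max hours,
    via the linear trapezoidal rule on the sampled points."""
    t = np.asarray(times_h, dtype=float)
    c = np.asarray(conc, dtype=float)
    mask = t <= t_max
    t, c = t[mask], c[mask]
    # sum of trapezoid areas between consecutive samples
    return float(np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t)))

# Toy profile: linear decline from 10 to 0 over 24 h -> AUC0-24 = 120.
print(auc_up_to([0, 12, 24], [10, 5, 0]))  # 120.0
```

In the paper's setting, the hybrid model predicts such partial-exposure endpoints directly, without having been trained on them.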
Collapse
Affiliation(s)
- Florian Führer
- Engineering & Technology, Applied Mathematics, Bayer AG, 51368, Leverkusen, Germany.
| | - Andrea Gruber
- Pharmaceuticals, R&D, Preclinical Modeling & Simulation, Bayer AG, 13353, Berlin, Germany
| | - Holger Diedam
- Crop Science, Product Supply, SC Simulation & Analysis, Bayer AG, 40789, Monheim, Germany
| | - Andreas H Göller
- Pharmaceuticals, R&D, Molecular Design, Bayer AG, 42096, Wuppertal, Germany
| | - Stephan Menz
- Pharmaceuticals, R&D, Preclinical Modeling & Simulation, Bayer AG, 13353, Berlin, Germany
| | | |
Collapse
|
27
|
Falou O, Sannachi L, Haque M, Czarnota GJ, Kolios MC. Transfer learning of pre-treatment quantitative ultrasound multi-parametric images for the prediction of breast cancer response to neoadjuvant chemotherapy. Sci Rep 2024; 14:2340. [PMID: 38282158 PMCID: PMC10822849 DOI: 10.1038/s41598-024-52858-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2023] [Accepted: 01/24/2024] [Indexed: 01/30/2024] Open
Abstract
Locally advanced breast cancer (LABC) is a severe type of cancer with a poor prognosis, despite advancements in therapy. As the disease is often inoperable, current guidelines suggest upfront aggressive neoadjuvant chemotherapy (NAC). Complete pathological response to chemotherapy is linked to improved survival, but conventional clinical assessments like physical exams, mammography, and imaging are limited in detecting early response. Early detection of tissue response can improve complete pathological response and patient survival while reducing exposure to ineffective and potentially harmful treatments. A rapid, cost-effective modality without the need for exogenous contrast agents would be valuable for evaluating neoadjuvant therapy response. Conventional ultrasound provides information about tissue echogenicity, but image comparisons are difficult due to instrument-dependent settings and imaging parameters. Quantitative ultrasound (QUS) overcomes this by using normalized power spectra to calculate quantitative metrics. This study used a novel transfer learning-based approach to predict LABC response to neoadjuvant chemotherapy using QUS imaging at pre-treatment. Using data from 174 patients, QUS parametric images of breast tumors with margins were generated. The ground truth response to therapy for each patient was based on standard clinical and pathological criteria. The Residual Network (ResNet) deep learning architecture was used to extract features from the parametric QUS maps. This was followed by SelectKBest and Synthetic Minority Oversampling (SMOTE) techniques for feature selection and data balancing, respectively. The Support Vector Machine (SVM) algorithm was employed to classify patients into two distinct categories: nonresponders (NR) and responders (RR). 
Evaluation results on an unseen test set demonstrate that the transfer learning-based approach using spectral slope parametric maps performed best in identifying nonresponders, with precision, recall, F1-score, and balanced accuracy of 100%, 71%, 83%, and 86%, respectively. The transfer learning-based approach has many advantages over conventional deep learning methods, since it reduces the need for large image datasets for training and shortens the training time. The results of this study demonstrate the potential of transfer learning to predict LABC response to neoadjuvant chemotherapy before the start of treatment using quantitative ultrasound imaging. Predicting NAC response before treatment can help clinicians adjust ineffectual treatment regimens for individual patients.
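SMOTE, used above for data balancing, generates synthetic minority-class samples by interpolating between a minority sample and one of its minority-class nearest neighbours. The following is a minimal NumPy sketch of that idea, not the imbalanced-learn implementation the authors would typically use; all names are illustrative:

```python
import numpy as np

def smote_like_oversample(X_min, n_new, k=2, rng=None):
    """Generate n_new synthetic minority samples: pick a minority
    sample, pick one of its k nearest minority neighbours, and
    interpolate a random fraction of the way toward it."""
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)   # distances to all others
        neighbours = np.argsort(d)[1:k + 1]            # skip the sample itself
        j = rng.choice(neighbours)
        gap = rng.random()                             # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)

X_minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
X_new = smote_like_oversample(X_minority, n_new=4, rng=0)
print(X_new.shape)  # (4, 2)
```

Each synthetic point lies on a segment between two existing minority samples, so the oversampled class stays within its original feature-space region.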
Collapse
Affiliation(s)
- Omar Falou
- Department of Physics, Toronto Metropolitan University, Toronto, ON, Canada.
- Institute for Biomedical Engineering, Science and Technology (iBEST), Keenan Research Centre for Biomedical Science, St. Michael's Hospital, Toronto, ON, Canada.
| | - Lakshmanan Sannachi
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, ON, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON, Canada
- Physical Sciences, Sunnybrook Research Institute, Toronto, ON, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
| | - Maeashah Haque
- Department of Physics, Toronto Metropolitan University, Toronto, ON, Canada
| | - Gregory J Czarnota
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, ON, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON, Canada
- Physical Sciences, Sunnybrook Research Institute, Toronto, ON, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
| | - Michael C Kolios
- Department of Physics, Toronto Metropolitan University, Toronto, ON, Canada
- Institute for Biomedical Engineering, Science and Technology (iBEST), Keenan Research Centre for Biomedical Science, St. Michael's Hospital, Toronto, ON, Canada
| |
Collapse
|
28
|
Malik M, Chong B, Fernandez J, Shim V, Kasabov NK, Wang A. Stroke Lesion Segmentation and Deep Learning: A Comprehensive Review. Bioengineering (Basel) 2024; 11:86. [PMID: 38247963 PMCID: PMC10813717 DOI: 10.3390/bioengineering11010086] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2023] [Revised: 01/05/2024] [Accepted: 01/15/2024] [Indexed: 01/23/2024] Open
Abstract
Stroke is a medical condition that affects around 15 million people annually. Patients and their families can face severe financial and emotional challenges, as stroke can cause motor, speech, cognitive, and emotional impairments. Stroke lesion segmentation identifies the stroke lesion visually while providing useful anatomical information. Though different computer-aided software packages are available for manual segmentation, state-of-the-art deep learning makes the job much easier. This review paper explores different deep-learning-based lesion segmentation models and the impact of different pre-processing techniques on their performance. It aims to provide a comprehensive overview of the state-of-the-art models, to guide future research, and to contribute to the development of more robust and effective stroke lesion segmentation models.
Collapse
Affiliation(s)
- Mishaim Malik
- Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand; (M.M.); (B.C.); (N.K.K.)
| | - Benjamin Chong
- Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand; (M.M.); (B.C.); (N.K.K.)
- Faculty of Medical and Health Sciences, The University of Auckland, Auckland 1010, New Zealand
- Centre for Brain Research, The University of Auckland, Auckland 1010, New Zealand
| | - Justin Fernandez
- Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand; (M.M.); (B.C.); (N.K.K.)
- Centre for Brain Research, The University of Auckland, Auckland 1010, New Zealand
- Mātai Medical Research Institute, Gisborne 4010, New Zealand
| | - Vickie Shim
- Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand; (M.M.); (B.C.); (N.K.K.)
- Mātai Medical Research Institute, Gisborne 4010, New Zealand
| | - Nikola Kirilov Kasabov
- Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand; (M.M.); (B.C.); (N.K.K.)
- Knowledge Engineering and Discovery Research Innovation, School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, Auckland 1010, New Zealand
- Institute for Information and Communication Technologies, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
- Knowledge Engineering Consulting Ltd., Auckland 1071, New Zealand
| | - Alan Wang
- Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand; (M.M.); (B.C.); (N.K.K.)
- Faculty of Medical and Health Sciences, The University of Auckland, Auckland 1010, New Zealand
- Centre for Brain Research, The University of Auckland, Auckland 1010, New Zealand
- Mātai Medical Research Institute, Gisborne 4010, New Zealand
- Medical Imaging Research Centre, The University of Auckland, Auckland 1010, New Zealand
- Centre for Co-Created Ageing Research, The University of Auckland, Auckland 1010, New Zealand
| |
Collapse
|
29
|
Sadeghi A, Sadeghi M, Sharifpour A, Fakhar M, Zakariaei Z, Sadeghi M, Rokni M, Zakariaei A, Banimostafavi ES, Hajati F. Potential diagnostic application of a novel deep learning- based approach for COVID-19. Sci Rep 2024; 14:280. [PMID: 38167985 PMCID: PMC10762017 DOI: 10.1038/s41598-023-50742-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2023] [Accepted: 12/24/2023] [Indexed: 01/05/2024] Open
Abstract
COVID-19 is a highly communicable respiratory illness caused by the novel coronavirus SARS-CoV-2, which has had a significant impact on global public health and the economy. Detecting COVID-19 patients during a pandemic with limited medical facilities can be challenging, resulting in errors and further complications. Therefore, this study aims to develop deep learning models to facilitate automated diagnosis of COVID-19 from the CT scan records of patients. The study also introduces COVID-MAH-CT, a new dataset that contains 4442 CT scan images from 133 COVID-19 patients, as well as 133 CT scan 3D volumes. We proposed and evaluated six different transfer learning models for slide-level analysis, responsible for detecting COVID-19 in multi-slice spiral CT. Additionally, a novel 3D deep model, the multi-head attention squeeze-and-excitation residual (MASERes) neural network, was developed for patient-level analysis; it analyzes all the CT slides of a given patient as a whole and can accurately diagnose COVID-19. The codes and dataset developed in this study are available at https://github.com/alrzsdgh/COVID . The proposed transfer learning models for slide-level analysis were able to detect COVID-19 CT slides with an accuracy of more than 99%, while MASERes was able to detect COVID-19 patients from 3D CT volumes with an accuracy of 100%. These achievements demonstrate that the proposed models can automatically detect COVID-19 at both the slide level and the patient level from patients' CT scan records, and can be applied in real-world settings, particularly for diagnosing COVID-19 cases in areas with limited medical facilities.
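The abstract distinguishes slide-level from patient-level analysis. As an assumption-laden baseline (not MASERes, which processes the 3D volume as a whole), per-slide probabilities can be aggregated into a single patient-level call by mean-pooling and thresholding:

```python
def patient_level_call(slide_probs, threshold=0.5):
    """Aggregate per-slide COVID-19 probabilities into one
    patient-level decision by averaging and thresholding."""
    if not slide_probs:
        raise ValueError("need at least one slide probability")
    mean_p = sum(slide_probs) / len(slide_probs)
    return int(mean_p >= threshold), mean_p

# Four slides from one hypothetical patient:
label, p = patient_level_call([0.9, 0.8, 0.3, 0.7])
print(label, round(p, 3))  # 1 0.675
```

A learned 3D model such as MASERes can exploit inter-slice context that this simple pooling discards, which is presumably why the authors report higher patient-level accuracy with it.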
Collapse
Affiliation(s)
- Alireza Sadeghi
- Intelligent Mobile Robot Lab (IMRL), Department of Mechatronics Engineering, Faculty of New Sciences and Technologies, University of Tehran, Tehran, Iran
| | - Mahdieh Sadeghi
- Student Research Committee, Mazandaran University of Medical Sciences, Sari, Iran
| | - Ali Sharifpour
- Pulmonary and Critical Care Division, Imam Khomeini Hospital, Mazandaran University of Medical Sciences, Sari, Iran
| | - Mahdi Fakhar
- Iranian National Registry Center for Lophomoniasis and Toxoplasmosis, Imam Khomeini Hospital, Mazandaran University of Medical Sciences, P.O Box: 48166-33131, Sari, Iran.
| | - Zakaria Zakariaei
- Toxicology and Forensic Medicine Division, Mazandaran Registry Center for Opioids Poisoning, Anti-microbial Resistance Research Center, Imam Khomeini Hospital, Mazandaran University of Medical Sciences, P.O box: 48166-33131, Sari, Iran.
| | - Mohammadreza Sadeghi
- Student Research Committee, Mazandaran University of Medical Sciences, Sari, Iran
| | - Mojtaba Rokni
- Department of Radiology, Qaemshahr Razi Hospital, Mazandaran University of Medical Sciences, Sari, Iran
| | - Atousa Zakariaei
- MSC in Civil Engineering, European University of Lefke, Nicosia, Cyprus
| | - Elham Sadat Banimostafavi
- Department of Radiology, Imam Khomeini Hospital, Mazandaran University of Medical Sciences, Sari, Iran
| | - Farshid Hajati
- Intelligent Technology Innovation Lab (ITIL) Group, Institute for Sustainable Industries and Liveable Cities, Victoria University, Footscray, Australia
| |
Collapse
|
30
|
Su X, Liu W, Jiang S, Gao X, Chu Y, Ma L. Deep learning-based anatomical position recognition for gastroscopic examination. Technol Health Care 2024; 32:39-48. [PMID: 38669495 PMCID: PMC11191429 DOI: 10.3233/thc-248004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/28/2024]
Abstract
BACKGROUND The gastroscopic examination is a preferred method for the detection of upper gastrointestinal lesions. However, gastroscopic examination places high demands on doctors, especially regarding the strict position and quantity of the archived images. These requirements are challenging for the education and training of junior doctors. OBJECTIVE The purpose of this study is to use deep learning to develop automatic position recognition technology for gastroscopic examination. METHODS A total of 17182 gastroscopic images in eight anatomical position categories were collected. The convolutional neural network model MogaNet is used to identify all the anatomical positions of the stomach for gastroscopic examination. The performance of four models is evaluated by sensitivity, precision, and F1-score. RESULTS The average sensitivity of the proposed method is 0.963, which is 0.074, 0.066, and 0.065 higher than ResNet, GoogleNet, and SqueezeNet, respectively. The average precision of the proposed method is 0.964, which is 0.072, 0.067, and 0.068 higher than ResNet, GoogleNet, and SqueezeNet, respectively. The average F1-score of the proposed method is 0.964, which is 0.074, 0.067, and 0.067 higher than ResNet, GoogleNet, and SqueezeNet, respectively. The results of the t-test show that the proposed method differs significantly from the other methods (p < 0.05). CONCLUSION The proposed method exhibits the best performance for anatomical position recognition, and it can help junior doctors quickly meet the requirements for the completeness of gastroscopic examination and for the number and position of archived images.
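The reported averages can be cross-checked, since the F1-score is the harmonic mean of precision and recall: with the stated precision (0.964) and sensitivity (0.963), the implied F1 of about 0.9635 is consistent with the reported average F1-score of 0.964 up to rounding.

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Cross-check the reported averages for the MogaNet-based method:
print(round(f1_score(0.964, 0.963), 4))  # 0.9635
```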
Collapse
Affiliation(s)
- Xiufeng Su
- Weihai Municipal Hospital, Cheeloo College of Medicine, Shandong University, Weihai, Shandong, China
| | - Weiyu Liu
- Weihai Municipal Hospital, Cheeloo College of Medicine, Shandong University, Weihai, Shandong, China
| | - Suyi Jiang
- Weihai Municipal Hospital, Cheeloo College of Medicine, Shandong University, Weihai, Shandong, China
| | - Xiaozhong Gao
- Weihai Municipal Hospital, Cheeloo College of Medicine, Shandong University, Weihai, Shandong, China
| | - Yanliu Chu
- Weihai Municipal Hospital, Cheeloo College of Medicine, Shandong University, Weihai, Shandong, China
| | - Liyong Ma
- School of Information Science and Engineering, Harbin Institute of Technology, Weihai, Shandong, China
| |
Collapse
|
31
|
Tao S, Tian Z, Bai L, Xu Y, Kuang C, Liu X. Phase retrieval for X-ray differential phase contrast radiography with knowledge transfer learning from virtual differential absorption model. Comput Biol Med 2024; 168:107711. [PMID: 37995534 DOI: 10.1016/j.compbiomed.2023.107711] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2023] [Revised: 10/31/2023] [Accepted: 11/15/2023] [Indexed: 11/25/2023]
Abstract
Grating-based X-ray phase contrast radiography and computed tomography (CT) are promising modalities for future medical applications. However, the ill-posed phase retrieval problem in X-ray phase contrast imaging has hindered its use for quantitative analysis in biomedical imaging. Deep learning has proven to be an effective tool for image retrieval. However, in a practical grating-based X-ray phase contrast imaging system, acquiring the ground-truth phase to form image pairs is challenging, which poses a great obstacle to using deep learning methods. Transfer learning is widely used to address this problem through knowledge inheritance from similar tasks. In the present research, we propose a virtual differential absorption model and generate a training dataset of differential absorption images and absorption images. The knowledge learned from this training is transferred to phase retrieval with transfer learning techniques. Numerical simulations and experiments both demonstrate its feasibility. Image quality of the retrieved phase radiographs and phase CT slices is improved compared with representative phase retrieval methods. We conclude that this method is helpful in both X-ray 2D and 3D imaging and may find applications in X-ray phase contrast radiography and X-ray phase CT.
Collapse
Affiliation(s)
- Siwei Tao
- State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, China
| | - Zonghan Tian
- State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, China
| | - Ling Bai
- State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, China
| | - Yueshu Xu
- State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, China; State Key Laboratory of Extreme Photonics and Instrumentation, ZJU-Hangzhou Global Scientific and Technological Innovation Center, Zhejiang University, Hangzhou, 315100, China
| | - Cuifang Kuang
- State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, China; State Key Laboratory of Extreme Photonics and Instrumentation, ZJU-Hangzhou Global Scientific and Technological Innovation Center, Zhejiang University, Hangzhou, 315100, China; Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan, 030006, China.
| | - Xu Liu
- State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, China; State Key Laboratory of Extreme Photonics and Instrumentation, ZJU-Hangzhou Global Scientific and Technological Innovation Center, Zhejiang University, Hangzhou, 315100, China; Ningbo Research Institute, Zhejiang University, Ningbo, 315100, China.
| |
Collapse
|
32
|
Zeng W, Xiao ZY. Few-shot learning based on deep learning: A survey. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2024; 21:679-711. [PMID: 38303439 DOI: 10.3934/mbe.2024029] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/03/2024]
Abstract
In recent years, with the development of science and technology, computing devices have grown steadily more powerful. As an important foundation, deep learning (DL) technology has achieved many successes across multiple fields. The success of deep learning also relies on the support of large-scale datasets, which provide models with a wide variety of images. The rich information in these images helps a model learn more about the various categories, thereby improving its classification performance and generalization ability. However, in real application scenarios, most tasks cannot collect a large number of images, or enough images, for model training, which restricts the performance of the trained model. Therefore, how to train a high-performance model with limited samples becomes the key question. To address this problem, the few-shot learning (FSL) strategy was proposed, which aims to obtain a strong model from a small amount of data. FSL can therefore show its advantages in real-world tasks where large amounts of training data cannot be obtained. In this review, we mainly introduce DL-based FSL methods for image classification, divided into four categories: methods based on data augmentation, metric learning, meta-learning, and adding other tasks. First, we introduce some classic and advanced FSL methods by category. Second, we introduce datasets commonly used to test FSL methods and report the performance of some classical and advanced FSL methods on two common datasets. Finally, we discuss the current challenges and future prospects of this field.
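Among the metric-learning family surveyed above, a prototypical-network-style classifier assigns a query to the class whose prototype (the mean of its support embeddings) is nearest. A minimal NumPy sketch of the idea, omitting the learned embedding network that real methods train:

```python
import numpy as np

def prototype_classify(support, support_labels, query):
    """Assign the query to the class whose prototype (mean of its
    support embeddings) is nearest in Euclidean distance."""
    support = np.asarray(support, dtype=float)
    labels = np.asarray(support_labels)
    classes = np.unique(labels)
    prototypes = np.stack([support[labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(prototypes - np.asarray(query, dtype=float), axis=1)
    return classes[int(np.argmin(dists))]

# 2-way 2-shot toy episode in a 2-D embedding space:
support = [[0, 0], [0, 1], [5, 5], [5, 6]]
labels = ["cat", "cat", "dog", "dog"]
print(prototype_classify(support, labels, [4.5, 5.0]))  # dog
```

Because only class means are stored, the classifier needs just a handful of labeled support samples per class, which is the defining constraint of the few-shot setting.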
Collapse
Affiliation(s)
- Wu Zeng
- Engineering Training Center, Putian University, Putian 351100, China
| | - Zheng-Ying Xiao
- Engineering Training Center, Putian University, Putian 351100, China
| |
Collapse
|
33
|
Kahaki S, Hagemann IS, Cha KH, Trindade C, Petrick N, Kostelecky N, Borden LE, Atwi D, Fung KM, Chen W. End-to-end deep learning method for predicting hormonal treatment response in women with atypical endometrial hyperplasia or endometrial cancer. J Med Imaging (Bellingham) 2024; 11:017502. [PMID: 38370423 PMCID: PMC10868592 DOI: 10.1117/1.jmi.11.1.017502] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2023] [Revised: 12/17/2023] [Accepted: 01/16/2024] [Indexed: 02/20/2024] Open
Abstract
Purpose Endometrial cancer (EC) is the most common gynecologic malignancy in the United States, and atypical endometrial hyperplasia (AEH) is considered a high-risk precursor to EC. Hormone therapies and hysterectomy are practical treatment options for AEH and early-stage EC. Some patients prefer hormone therapies for reasons such as fertility preservation or being poor surgical candidates. However, accurate prediction of an individual patient's response to hormonal treatment would allow for personalized and potentially improved recommendations for these conditions. This study aims to explore the feasibility of using deep learning models on whole slide images (WSI) of endometrial tissue samples to predict a patient's response to hormonal treatment. Approach We curated a clinical WSI dataset of 112 patients from two clinical sites. An expert pathologist annotated these images by outlining AEH/EC regions. We developed an end-to-end machine learning model with mixed supervision. The model is based on image patches extracted from pathologist-annotated AEH/EC regions. Either an unsupervised deep learning architecture (autoencoder or ResNet50) or a non-deep-learning method (radiomics feature extraction) is used to embed the images into a low-dimensional space, followed by fully connected layers for binary prediction, trained with binary responder/non-responder labels established by pathologists. We used stratified sampling to partition the dataset into a development set and a test set for internal validation of model performance. Results The autoencoder model yielded an AUROC of 0.80 with 95% CI [0.63, 0.95] on the independent test set for the task of predicting a patient with AEH/EC as a responder vs non-responder to hormonal treatment. Conclusions These findings demonstrate the potential of mixed-supervision machine learning models on WSIs for predicting the response to hormonal treatment in AEH/EC patients.
Affiliation(s)
- Seyed Kahaki
- U.S. Food and Drug Administration (FDA), Center for Devices and Radiological Health, Division of Imaging, Diagnostics, and Software Reliability, Office of Science and Engineering Laboratories, Silver Spring, Maryland, United States
- Ian S. Hagemann
- Washington University School of Medicine, Department of Pathology and Immunology, St. Louis, Missouri, United States
- Washington University School of Medicine, Department of Obstetrics and Gynecology, St. Louis, Missouri, United States
- Kenny H. Cha
- U.S. Food and Drug Administration (FDA), Center for Devices and Radiological Health, Division of Imaging, Diagnostics, and Software Reliability, Office of Science and Engineering Laboratories, Silver Spring, Maryland, United States
- Christopher Trindade
- U.S. Food and Drug Administration (FDA), Division of Molecular Genetics and Pathology, Silver Spring, Maryland, United States
- Nicholas Petrick
- U.S. Food and Drug Administration (FDA), Center for Devices and Radiological Health, Division of Imaging, Diagnostics, and Software Reliability, Office of Science and Engineering Laboratories, Silver Spring, Maryland, United States
- Nicolas Kostelecky
- Washington University School of Medicine, Department of Pathology and Immunology, St. Louis, Missouri, United States
- Northwestern University Feinberg School of Medicine, Department of Pathology, Chicago, Illinois, United States
- Lindsay E. Borden
- University of Oklahoma Health Sciences Center, Department of Obstetrics and Gynecology, Oklahoma City, Oklahoma, United States
- University of Oklahoma Health Sciences Center, Department of Pathology, Oklahoma City, Oklahoma, United States
- Doaa Atwi
- University of Oklahoma Health Sciences Center, Department of Pathology, Oklahoma City, Oklahoma, United States
- Kar-Ming Fung
- University of Oklahoma Health Sciences Center, Department of Pathology, Oklahoma City, Oklahoma, United States
- University of Oklahoma Health Sciences Center, Stephenson Cancer Center, Oklahoma City, Oklahoma, United States
- Weijie Chen
- U.S. Food and Drug Administration (FDA), Center for Devices and Radiological Health, Division of Imaging, Diagnostics, and Software Reliability, Office of Science and Engineering Laboratories, Silver Spring, Maryland, United States
34
Lee GP, Kim YJ, Park DK, Kim YJ, Han SK, Kim KG. Gastro-BaseNet: A Specialized Pre-Trained Model for Enhanced Gastroscopic Data Classification and Diagnosis of Gastric Cancer and Ulcer. Diagnostics (Basel) 2023; 14:75. [PMID: 38201385 PMCID: PMC10795822 DOI: 10.3390/diagnostics14010075] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2023] [Revised: 12/25/2023] [Accepted: 12/26/2023] [Indexed: 01/12/2024] Open
Abstract
Most gastric disease prediction models have been developed using models pre-trained on natural image data, such as ImageNet, which lack knowledge of the medical domain. This study proposes Gastro-BaseNet, a classification model trained on gastroscopic image data of abnormal gastric lesions. To demonstrate its performance, we compared transfer learning based on two pre-trained models (Gastro-BaseNet and ImageNet) and two training methods (freeze and fine-tune modes). Effectiveness was verified in terms of classification at the image level and the patient level, as well as lesion localization performance. Gastro-BaseNet demonstrated superior transfer learning performance compared with random weight initialization. When developing a model to predict the diagnosis of gastric cancer and gastric ulcers, the model transfer-learned from Gastro-BaseNet outperformed the one based on ImageNet. Furthermore, performance was highest when fine-tuning all layers in fine-tune mode. The model based on Gastro-BaseNet also showed higher localization performance, confirming its accurate detection and classification of lesions at specific locations. This study represents a notable advancement in the development of image analysis models within the medical field, resulting in improved diagnostic predictive accuracy and aiding more informed clinical decisions in gastrointestinal endoscopy.
Affiliation(s)
- Gi Pyo Lee
- Department of Health Sciences and Technology, Gachon Advanced Institute for Health Sciences and Technology (GAIHST), Gachon University, Incheon 21565, Republic of Korea
- Young Jae Kim
- Department of Biomedical Engineering, Gachon University Gil Medical Center, College of Medicine, Gachon University, Incheon 21565, Republic of Korea
- Dong Kyun Park
- Division of Gastroenterology, Department of Internal Medicine, Gachon University Gil Medical Center, College of Medicine, Gachon University, Incheon 21565, Republic of Korea (D.K.P.)
- Yoon Jae Kim
- Division of Gastroenterology, Department of Internal Medicine, Gachon University Gil Medical Center, College of Medicine, Gachon University, Incheon 21565, Republic of Korea (Y.J.K.)
- Su Kyeong Han
- Health IT Research Center, Gachon University Gil Medical Center, Incheon 21565, Republic of Korea
- Kwang Gi Kim
- Department of Biomedical Engineering, Gachon University Gil Medical Center, College of Medicine, Gachon University, Incheon 21565, Republic of Korea
35
Wibawa MS, Zhou JY, Wang R, Huang YY, Zhan Z, Chen X, Lv X, Young LS, Rajpoot N. AI-Based Risk Score from Tumour-Infiltrating Lymphocyte Predicts Locoregional-Free Survival in Nasopharyngeal Carcinoma. Cancers (Basel) 2023; 15:5789. [PMID: 38136336 PMCID: PMC10742296 DOI: 10.3390/cancers15245789] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2023] [Revised: 11/28/2023] [Accepted: 12/08/2023] [Indexed: 12/24/2023] Open
Abstract
BACKGROUND Locoregional recurrence of nasopharyngeal carcinoma (NPC) occurs in 10% to 50% of cases following primary treatment. However, the current main prognostic markers for NPC, stage and plasma Epstein-Barr virus DNA, are not sensitive to locoregional recurrence. METHODS We gathered 385 whole-slide images (WSIs) of haematoxylin and eosin (H&E)-stained NPC sections (n = 367 cases) collected from Sun Yat-sen University Cancer Centre. We developed a deep learning algorithm to detect tumour nuclei and lymphocyte nuclei in WSIs, followed by density-based clustering to quantify the tumour-infiltrating lymphocytes (TILs) into 12 scores. A Random Survival Forest model was then trained on the TIL scores to generate a risk score. RESULTS Based on Kaplan-Meier analysis, the proposed method stratified low- and high-risk NPC cases in a validation set with a statistically significant difference in locoregional recurrence (p < 0.001). This finding also held for distant metastasis-free survival (p < 0.001), progression-free survival (p < 0.001), and regional recurrence-free survival (p < 0.05). Furthermore, in both univariate analysis (HR: 1.58, CI: 1.13-2.19, p < 0.05) and multivariate analysis (HR: 1.59, CI: 1.11-2.28, p < 0.05), our method demonstrated a strong prognostic value for locoregional recurrence. CONCLUSION The proposed novel digital markers could potentially be utilised to assist treatment decisions in cases of NPC.
Affiliation(s)
- Made Satria Wibawa
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK (M.S.W.)
- Jia-Yu Zhou
- State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Ruoyu Wang
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK (R.W.)
- Ying-Ying Huang
- State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Zejiang Zhan
- State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Xi Chen
- State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Xing Lv
- State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Lawrence S. Young
- Warwick Medical School, University of Warwick, Coventry CV4 7AL, UK
- Nasir Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
- The Alan Turing Institute, London NW1 2DB, UK
36
Minotti M, Negrini S, Cina A, Galbusera F, Zaina F, Bassani T. Deep learning prediction of curve severity from rasterstereographic back images in adolescent idiopathic scoliosis. EUROPEAN SPINE JOURNAL : OFFICIAL PUBLICATION OF THE EUROPEAN SPINE SOCIETY, THE EUROPEAN SPINAL DEFORMITY SOCIETY, AND THE EUROPEAN SECTION OF THE CERVICAL SPINE RESEARCH SOCIETY 2023:10.1007/s00586-023-08052-1. [PMID: 38055037 DOI: 10.1007/s00586-023-08052-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/08/2023] [Revised: 10/18/2023] [Accepted: 11/13/2023] [Indexed: 12/07/2023]
Abstract
PURPOSE Radiation-free systems based on dorsal surface topography can potentially represent an alternative to radiographic examination for early screening of scoliosis, based on the ability to recognize the presence of deformity or classify its severity. This study aims to assess the effectiveness of a deep learning model based on convolutional neural networks in directly predicting the Cobb angle from rasterstereographic images of the back surface in subjects with adolescent idiopathic scoliosis. METHODS Two datasets, comprising a total of 900 individuals, were utilized for model training (720 samples) and testing (180). Rasterstereographic scans were performed using the Formetric4D device. The true Cobb angle was obtained from radiographic examination. The best model configuration was identified by comparing different network architectures and hyperparameters through cross-validation in the training set. The performance of the developed model in predicting the Cobb angle was assessed on the test set. The accuracy in classifying scoliosis severity (non-scoliotic, mild, and moderate categories) based on the Cobb angle was evaluated as well. RESULTS The mean absolute error in predicting the Cobb angle was 6.1° ± 5.0°. Moderate correlation (r = 0.68) and a root-mean-square error of 8° between the predicted and true values were reported. The overall accuracy in classifying scoliosis severity was 59%. CONCLUSION Despite some improvement over previous approaches that relied on spine shape reconstruction, the performance of the present fully automatic application is below that of radiographic evaluation performed by human operators. The study confirms that rasterstereography cannot be considered a valid non-invasive alternative to radiographic examination for clinical purposes.
Affiliation(s)
- Stefano Negrini
- IRCCS Istituto Ortopedico Galeazzi, Milan, Italy
- Department of Biomedical, Surgical and Dental Sciences, University "La Statale", 20122, Milan, Italy
- Andrea Cina
- Spine Center, Schulthess Clinic, Zurich, Switzerland
- Biomedical Data Science Lab, Department of Health Sciences and Technologies, ETH Zurich, Zurich, Switzerland
- Fabio Zaina
- ISICO (Italian Scientific Spine Institute), Milan, Italy
- Tito Bassani
- IRCCS Istituto Ortopedico Galeazzi, Milan, Italy.
37
Moses O, Qureshi M, King ICC. Letter to the Editor regarding 'Development of a deep learning-based tool to assist wound classification'. J Plast Reconstr Aesthet Surg 2023; 87:215-216. [PMID: 37913620 DOI: 10.1016/j.bjps.2023.10.089] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2023] [Accepted: 10/07/2023] [Indexed: 11/03/2023]
Affiliation(s)
- Onyedi Moses
- Brighton and Sussex Medical School, Falmer, Brighton BN1 9RY, United Kingdom.
- Mehreen Qureshi
- Department of Plastic Surgery, Royal Sussex County Hospital, Brighton BN2 5BE, United Kingdom; Department of Plastic Surgery, Queen Victoria Hospital, East Grinstead RH19 3DZ, United Kingdom
- Ian C C King
- Brighton and Sussex Medical School, Falmer, Brighton BN1 9RY, United Kingdom; Department of Plastic Surgery, Royal Sussex County Hospital, Brighton BN2 5BE, United Kingdom; Department of Plastic Surgery, Queen Victoria Hospital, East Grinstead RH19 3DZ, United Kingdom
38
Soulier T, Colliot O, Ayache N, Rohaut B. How will tomorrow's algorithms fuse multimodal data? The example of the neuroprognosis in Intensive Care. Anaesth Crit Care Pain Med 2023; 42:101301. [PMID: 37709200 DOI: 10.1016/j.accpm.2023.101301] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2023] [Accepted: 09/03/2023] [Indexed: 09/16/2023]
Affiliation(s)
- Théodore Soulier
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inserm, AP-HP, Hôpital de la Pitié Salpêtrière, F-75013, Paris, France.
- Olivier Colliot
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié Salpêtrière, F-75013, Paris, France
- Benjamin Rohaut
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inserm, AP-HP, Hôpital de la Pitié Salpêtrière, F-75013, Paris, France; Department of Neurology, Groupe Hospitalier Pitié-Salpêtrière, AP-HP, Paris, France
39
Mudeng V, Farid MN, Ayana G, Choe SW. Domain and Histopathology Adaptations-Based Classification for Malignancy Grading System. THE AMERICAN JOURNAL OF PATHOLOGY 2023; 193:2080-2098. [PMID: 37673327 DOI: 10.1016/j.ajpath.2023.07.007] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/05/2023] [Revised: 06/30/2023] [Accepted: 07/19/2023] [Indexed: 09/08/2023]
Abstract
Accurate proliferation rate quantification can be used to devise an appropriate treatment for breast cancer. Pathologists use breast tissue biopsy glass slides stained with hematoxylin and eosin to obtain grading information. However, this manual evaluation may lead to high costs and be ineffective because the diagnosis depends on the facility and the pathologists' insights and experience. A convolutional neural network can act as a computer-based observer to improve clinicians' capacity in grading breast cancer. Therefore, this study proposes a novel scheme for automatic breast cancer malignancy grading from invasive ductal carcinoma. The proposed classifiers implement multistage transfer learning incorporating domain and histopathologic transformations. Domain adaptation using pretrained models, such as InceptionResNetV2, InceptionV3, NASNet-Large, ResNet50, ResNet101, VGG19, and Xception, was applied to classify the ×40 magnification BreaKHis data set into eight classes. Subsequently, InceptionV3 and Xception, which contain the domain and histopathology pretrained weights, were determined to be the best for this study and used to categorize the Databiox database into grades 1, 2, or 3. To provide a comprehensive report, this study offers a patchless automated grading system for both magnification-dependent and magnification-independent classification. With an overall accuracy (mean ± SD) of 90.17% ± 3.08% to 97.67% ± 1.09% and an F1 score of 0.9013 to 0.9760 for magnification-dependent classification, the classifiers in this work achieved outstanding performance. The proposed approach could be used for breast cancer grading systems in clinical settings.
Affiliation(s)
- Vicky Mudeng
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea; Department of Electrical Engineering, Institut Teknologi Kalimantan, Balikpapan, Indonesia
- Mifta Nur Farid
- Department of Electrical Engineering, Institut Teknologi Kalimantan, Balikpapan, Indonesia
- Gelan Ayana
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea
- Se-Woon Choe
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea; Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea.
40
Jaiswal M, Sharma M, Khandnor P, Goyal A, Belokar R, Harit S, Sood T, Goyal K, Dua P. Deep Learning Models for Classification of Deciduous and Permanent Teeth From Digital Panoramic Images. Cureus 2023; 15:e49937. [PMID: 38179345 PMCID: PMC10765069 DOI: 10.7759/cureus.49937] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/04/2023] [Indexed: 01/06/2024] Open
Abstract
INTRODUCTION Dental radiographs are essential in the diagnostic process in dentistry. They serve various purposes, including determining age, analyzing patterns of tooth eruption and shedding, and treatment planning and prognosis. The emergence of digital radiography has piqued interest in using artificial intelligence systems to assist and guide dental professionals. These technologies help streamline decision-making by enabling entity classification and localization tasks. With the integration of artificial intelligence algorithms tailored for pediatric dentistry applications and the use of automated tools, there is an optimistic outlook on improving diagnostic capabilities while reducing stress and fatigue among clinicians. METHODOLOGY The dataset comprised 620 panoramic radiographs (mixed dentition: 314, permanent dentition: 306) taken from patients within the age range of 4-16 years. The classification of deciduous and permanent teeth involved training CNN-based models with different architectures, such as ResNet, AlexNet, and EfficientNet, among others. A ratio of 70:15:15 was utilized for training, validation, and testing, respectively. RESULT AND CONCLUSION The findings indicated that, among the proposed models, EfficientNetB0 and EfficientNetB3 exhibited superior performance: both achieved accuracy, precision, recall, and F1 scores of 98% in classifying teeth as either deciduous or permanent. This implies that these models were highly accurate in identifying patterns and features within the evaluation dataset.
Affiliation(s)
- Manoj Jaiswal
- Pedodontics and Preventive Dentistry, Postgraduate Institute of Medical Education and Research, Chandigarh, IND
- Megha Sharma
- Computer Science and Engineering, Punjab Engineering College, Chandigarh, IND
- Padmavati Khandnor
- Computer Science and Engineering, Punjab Engineering College, Chandigarh, IND
- Ashima Goyal
- Pedodontics and Preventive Dentistry, Postgraduate Institute of Medical Education and Research, Chandigarh, IND
- Rajendra Belokar
- Production and Industrial Engineering, Punjab Engineering College, Chandigarh, IND
- Sandeep Harit
- Computer Science and Engineering, Punjab Engineering College, Chandigarh, IND
- Tamanna Sood
- Computer Science and Engineering, Punjab Engineering College, Chandigarh, IND
- Kanav Goyal
- Mechanical Engineering, Punjab Engineering College, Chandigarh, IND
- Pallavi Dua
- Computer Science and Engineering, Punjab Engineering College, Chandigarh, IND
41
Keles E, Bagci U. The past, current, and future of neonatal intensive care units with artificial intelligence: a systematic review. NPJ Digit Med 2023; 6:220. [PMID: 38012349 PMCID: PMC10682088 DOI: 10.1038/s41746-023-00941-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2023] [Accepted: 10/05/2023] [Indexed: 11/29/2023] Open
Abstract
Machine learning and deep learning are two subsets of artificial intelligence that involve teaching computers to learn and make decisions from data. Most recent developments in artificial intelligence come from deep learning, which has proven revolutionary in almost all fields, from computer vision to the health sciences. The effects of deep learning in medicine have changed the conventional ways of clinical application significantly. Although some sub-fields of medicine, such as pediatrics, have been relatively slow to receive the critical benefits of deep learning, related research in pediatrics has now accumulated to a significant level. Hence, in this paper, we review recently developed machine learning and deep learning-based solutions for neonatology applications. Following PRISMA 2020 guidelines, we systematically evaluate the roles of both classical machine learning and deep learning in neonatology applications, define the methodologies, including algorithmic developments, and describe the remaining challenges in the assessment of neonatal diseases. To date, the primary areas of focus for AI applications in neonatology have included survival analysis, neuroimaging, analysis of vital parameters and biosignals, and diagnosis of retinopathy of prematurity. We have categorically summarized 106 research articles from 1996 to 2022 and discussed their respective pros and cons. To further enhance the comprehensiveness of this systematic review, we also discuss possible directions for new AI models and the future of neonatology with the rising power of AI, suggesting roadmaps for the integration of AI into neonatal intensive care units.
Affiliation(s)
- Elif Keles
- Northwestern University, Feinberg School of Medicine, Department of Radiology, Chicago, IL, USA.
- Ulas Bagci
- Northwestern University, Feinberg School of Medicine, Department of Radiology, Chicago, IL, USA
- Northwestern University, Department of Biomedical Engineering, Chicago, IL, USA
- Department of Electrical and Computer Engineering, Chicago, IL, USA
42
Gao R, Luo G, Ding R, Yang B, Sun H. A Lightweight Deep Learning Framework for Automatic MRI Data Sorting and Artifacts Detection. J Med Syst 2023; 47:124. [PMID: 37999807 DOI: 10.1007/s10916-023-02017-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2023] [Accepted: 11/05/2023] [Indexed: 11/25/2023]
Abstract
The purpose of this study was to develop a lightweight and easily deployable deep learning system for fully automated content-based brain MRI sorting and artifact detection. 22,092 MRI volumes from 4076 patients between 2017 and 2021 were involved in this retrospective study. The dataset mainly contains four common contrasts (T1-weighted (T1w), contrast-enhanced T1-weighted (T1c), T2-weighted (T2w), and fluid-attenuated inversion recovery (FLAIR)) in three perspectives (axial, coronal, and sagittal), and magnetic resonance angiography (MRA), as well as three typical artifacts (motion, aliasing, and metal artifacts). In the proposed architecture, a pre-trained EfficientNetB0 with the fully connected layers removed was used as the feature extractor, and a multilayer perceptron (MLP) module with four hidden layers was used as the classifier. Precision, recall, F1 score, accuracy, the number of trainable parameters, and floating-point operations (FLOPs) were calculated to evaluate the performance of the proposed model, which was also compared with four other existing CNN-based models in terms of classification performance and model size. The overall precision, recall, F1 score, and accuracy of the proposed model were 0.983, 0.926, 0.950, and 0.991, respectively. The proposed model outperformed the other four CNN-based models, and its number of trainable parameters and FLOPs were the smallest among the investigated models. Our proposed model can accurately sort head MRI scans and identify artifacts with minimal computational resources, and can be used as a tool to support big medical imaging data research and facilitate large-scale database management.
Affiliation(s)
- Ronghui Gao
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, Sichuan, China
- Guoting Luo
- Department of Radiology, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
- Renxin Ding
- IT Center, West China Hospital of Sichuan University, Chengdu, Sichuan, China
- Bo Yang
- IT Center, West China Hospital of Sichuan University, Chengdu, Sichuan, China
- Huaiqiang Sun
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, Sichuan, China.
- Huaxi MR Research Center, West China Hospital of Sichuan University, Chengdu, Sichuan, China.
- Huaxi MR Research Center, Department of Radiology, West China Hospital of Sichuan University, Chengdu, 610041, Sichuan, China.
43
Zelger P, Brunner A, Zelger B, Willenbacher E, Unterberger SH, Stalder R, Huck CW, Willenbacher W, Pallua JD. Deep learning analysis of mid-infrared microscopic imaging data for the diagnosis and classification of human lymphomas. JOURNAL OF BIOPHOTONICS 2023; 16:e202300015. [PMID: 37578837 DOI: 10.1002/jbio.202300015] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/18/2023] [Revised: 07/19/2023] [Accepted: 08/09/2023] [Indexed: 08/15/2023]
Abstract
The present study presents an alternative analytical workflow that combines mid-infrared (MIR) microscopic imaging and deep learning to diagnose human lymphoma and differentiate between small and large cell lymphoma. We could show that using a deep learning approach to analyze MIR hyperspectral data obtained from benign and malignant lymph node pathology results in high accuracy for correct classification, learning from the spectral region of 3900 to 850 cm⁻¹. The accuracy is above 95% for every pair of malignant lymphoid tissues and still above 90% for the binary distinction between benign and malignant lymphoid tissue. These results demonstrate that preliminary diagnosis and subtyping of human lymphoma could be streamlined by applying a deep learning approach to MIR spectroscopic data.
Affiliation(s)
- P Zelger
- University Hospital of Hearing, Voice and Speech Disorders, Medical University of Innsbruck, Innsbruck, Austria
- A Brunner
- Institute of Pathology, Neuropathology and Molecular Pathology, Medical University of Innsbruck, Innsbruck, Austria
- B Zelger
- Institute of Pathology, Neuropathology and Molecular Pathology, Medical University of Innsbruck, Innsbruck, Austria
- E Willenbacher
- University Hospital of Internal Medicine V, Hematology & Oncology, Medical University of Innsbruck, Innsbruck, Austria
- S H Unterberger
- Institute of Material-Technology, Leopold-Franzens University Innsbruck, Innsbruck, Austria
- R Stalder
- Institute of Mineralogy and Petrography, Leopold-Franzens University Innsbruck, Innsbruck, Austria
- C W Huck
- Institute of Analytical Chemistry and Radiochemistry, Innsbruck, Austria
- W Willenbacher
- University Hospital of Internal Medicine V, Hematology & Oncology, Medical University of Innsbruck, Innsbruck, Austria
- Oncotyrol, Centre for Personalized Cancer Medicine, Innsbruck, Austria
- J D Pallua
- University Hospital for Orthopedics and Traumatology, Medical University of Innsbruck, Innsbruck, Austria
Collapse
|
44
|
Choi J, Marwaha JS. Clinical prediction tool pitfalls and considerations: Data and algorithms. Surgery 2023; 174:1270-1272. [PMID: 37709646 DOI: 10.1016/j.surg.2023.08.009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2023] [Revised: 08/02/2023] [Accepted: 08/08/2023] [Indexed: 09/16/2023]
Abstract
In recent years, many surgical prediction models have been developed and published to augment surgeon decision-making, predict postoperative patient trajectories, and more. Collectively underlying all of these models is a wide variety of data sources and algorithms. Each data set and algorithm has its unique strengths, weaknesses, and type of prediction task for which it is best suited. The purpose of this piece is to highlight important characteristics of common data sources and algorithms used in surgical prediction model development so that future researchers interested in developing models of their own may be able to critically evaluate them and select the optimal ones for their study.
Affiliation(s)
- Jeff Choi
- Department of Surgery, Stanford University, Stanford, CA. https://www.twitter.com/JeffChoi01
- Jayson S Marwaha
- Department of Surgery, Georgetown University Medical Center, Washington, DC.

45
Breto AL, Cullison K, Zacharaki EI, Wallaengen V, Maziero D, Jones K, Valderrama A, de la Fuente MI, Meshman J, Azzam GA, Ford JC, Stoyanova R, Mellon EA. A Deep Learning Approach for Automatic Segmentation during Daily MRI-Linac Radiotherapy of Glioblastoma. Cancers (Basel) 2023; 15:5241. [PMID: 37958415 PMCID: PMC10647471 DOI: 10.3390/cancers15215241] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2023] [Revised: 10/25/2023] [Accepted: 10/30/2023] [Indexed: 11/15/2023] Open
Abstract
Glioblastoma changes during chemoradiotherapy are usually inferred from high-field MRI acquired before and after treatment and are rarely investigated during radiotherapy itself. The purpose of this study was to develop a deep learning network that automatically segments glioblastoma tumors on daily treatment set-up scans from the first glioblastoma patients treated on an MRI-linac. Patients were prospectively imaged daily during chemoradiotherapy on a 0.35T MRI-linac. The tumor and edema (together, the tumor lesion) and the resection cavity were manually segmented on each daily MRI to track their kinetics throughout treatment. An automatic segmentation network based on a convolutional neural network was then built. A nine-fold cross-validation scheme with an 80:10:10 split for training, validation, and testing was used. Thirty-six glioblastoma patients were imaged pre-treatment and 30 times during radiotherapy (n = 31 volumes, total of 930 MRIs). The average tumor lesion and resection cavity volumes were 94.56 ± 64.68 cc and 72.44 ± 35.08 cc, respectively. The average Dice similarity coefficient between manual and auto-segmentation across all patients was 0.67 for the tumor lesion and 0.84 for the resection cavity. This is the first brain lesion segmentation network developed for the MRI-linac. The network performed comparably to the only other published network for auto-segmentation of post-operative glioblastoma lesions. Segmented volumes can be used for adaptive radiotherapy and propagated across multiple MRI contrasts to create a prognostic model for glioblastoma based on multiparametric MRI.
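The Dice similarity coefficient reported here is a standard overlap metric between two binary masks, 2|A∩B| / (|A| + |B|). A minimal sketch (toy masks, names our own):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |intersection| / (|pred| + |truth|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    total = pred.sum() + truth.sum()
    if total == 0:  # both masks empty: define as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / total

# Toy 2 x 3 "segmentations": auto-segmentation vs. manual contour.
auto = np.array([[1, 1, 0], [1, 1, 0]])
manual = np.array([[0, 1, 0], [1, 1, 1]])
print(dice_coefficient(auto, manual))  # 0.75
```

A Dice of 0.84 for the resection cavity thus means substantially more voxel overlap between manual and automatic contours than the 0.67 obtained for the tumor lesion.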
Affiliation(s)
- Adrian L. Breto
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Kaylie Cullison
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Evangelia I. Zacharaki
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Veronica Wallaengen
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Danilo Maziero
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Department of Radiation Medicine & Applied Sciences, UC San Diego Health, La Jolla, CA 92093, USA
- Kolton Jones
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- West Physics, Atlanta, GA 30339, USA
- Alessandro Valderrama
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Macarena I. de la Fuente
- Department of Neurology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Jessica Meshman
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Gregory A. Azzam
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- John C. Ford
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Radka Stoyanova
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Eric A. Mellon
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA

46
Cheng PC, Chiang HHK. Diagnosis of Salivary Gland Tumors Using Transfer Learning with Fine-Tuning and Gradual Unfreezing. Diagnostics (Basel) 2023; 13:3333. [PMID: 37958229 PMCID: PMC10648910 DOI: 10.3390/diagnostics13213333] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2023] [Revised: 10/25/2023] [Accepted: 10/27/2023] [Indexed: 11/15/2023] Open
Abstract
Ultrasound is the primary tool for evaluating salivary gland tumors (SGTs); however, tumor diagnosis currently relies on subjective features. This study aimed to establish an objective ultrasound diagnostic method using deep learning. We collected 446 benign and 223 malignant SGT ultrasound images in the training/validation set and 119 benign and 44 malignant SGT ultrasound images in the testing set. We trained convolutional neural network (CNN) models from scratch and employed transfer learning (TL) with fine-tuning and gradual unfreezing to classify malignant and benign SGTs. The diagnostic performances of these models were compared. By utilizing the pretrained ResNet50V2 with fine-tuning and gradual unfreezing, we achieved a 5-fold average validation accuracy of 0.920. The diagnostic performance on the testing set demonstrated an accuracy of 89.0%, a sensitivity of 81.8%, a specificity of 91.6%, a positive predictive value of 78.3%, and a negative predictive value of 93.2%. This performance surpasses that of other models in our study. The corresponding Grad-CAM visualizations were also presented to provide explanations for the diagnosis. This study presents an effective and objective ultrasound method for distinguishing between malignant and benign SGTs, which could assist in preoperative evaluation.
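Gradual unfreezing proceeds top-down: only the new classifier head trains at first, then successively deeper backbone blocks are released for fine-tuning. A framework-free sketch of such a schedule (block names and stage length are illustrative, not taken from the paper):

```python
def gradual_unfreeze_schedule(blocks, head="head", epochs_per_stage=2):
    """Return (start_epoch, trainable_layers) stages for gradual unfreezing.

    blocks: backbone block names ordered bottom (input side) to top.
    Stage 0 trains only the new head; each later stage unfreezes the
    next-highest still-frozen backbone block.
    """
    frozen = list(blocks)              # bottom -> top
    trainable = [head]
    stages = [(0, list(trainable))]
    epoch = 0
    while frozen:
        epoch += epochs_per_stage
        trainable.insert(0, frozen.pop())  # unfreeze from the top down
        stages.append((epoch, list(trainable)))
    return stages

for start, layers in gradual_unfreeze_schedule(["conv1", "conv2", "conv3"]):
    print(start, layers)
```

In a framework such as Keras, each stage would correspond to setting `layer.trainable = True` on the newly released block and recompiling, typically with a reduced learning rate to avoid destroying the pretrained weights.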
Affiliation(s)
- Ping-Chia Cheng
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei 11221, Taiwan
- Department of Otolaryngology Head and Neck Surgery, Far Eastern Memorial Hospital, New Taipei City 22060, Taiwan
- Department of Communication Engineering, Asia Eastern University of Science and Technology, New Taipei City 22060, Taiwan
- Hui-Hua Kenny Chiang
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei 11221, Taiwan

47
Alshahrani H, Sharma G, Anand V, Gupta S, Sulaiman A, Elmagzoub MA, Reshan MSA, Shaikh A, Azar AT. An Intelligent Attention-Based Transfer Learning Model for Accurate Differentiation of Bone Marrow Stains to Diagnose Hematological Disorder. Life (Basel) 2023; 13:2091. [PMID: 37895472 PMCID: PMC10607952 DOI: 10.3390/life13102091] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2023] [Revised: 10/17/2023] [Accepted: 10/19/2023] [Indexed: 10/29/2023] Open
Abstract
Bone marrow (BM) is an essential part of the hematopoietic system, which generates all of the body's blood cells and maintains overall health and the immune system. The classification of bone marrow cells is pivotal in both clinical and research settings because many hematological diseases, such as leukemia, myelodysplastic syndromes, and anemias, are diagnosed based on specific abnormalities in the number, type, or morphology of bone marrow cells. A robust deep-learning algorithm is therefore needed to classify bone marrow cells reliably. This study proposes a framework for categorizing bone marrow cells into seven classes. In the proposed framework, five transfer learning models (DenseNet121, EfficientNetB5, ResNet50, Xception, and MobileNetV2) are applied to the bone marrow dataset to classify cells into seven classes. The best-performing model, DenseNet121, was fine-tuned by adding one batch-normalization layer, one dropout layer, and two dense layers. The fine-tuned DenseNet121 model was optimized using several optimizers (AdaGrad, AdaDelta, Adamax, RMSprop, and SGD) along with batch sizes of 16, 32, 64, and 128. It was then integrated with an attention mechanism to improve performance by allowing the model to focus on the most relevant features or regions of the image, which is particularly beneficial in medical imaging, where certain regions can carry critical diagnostic information. The fine-tuned, attention-integrated DenseNet121 achieved the highest accuracy, with a training success rate of 99.97% and a testing success rate of 97.01%. Key hyperparameters, such as batch size, number of epochs, and choice of optimizer, were all considered when optimizing these pre-trained models to select the best one. This work should help medical research classify BM cells effectively and support the diagnosis of diseases such as leukemia.
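The attention mechanism lets the network re-weight feature channels by learned importance. A squeeze-and-excitation-style sketch in NumPy (the abstract does not specify the attention design, so shapes, reduction ratio, and weights here are purely illustrative):

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Squeeze-and-excitation-style gate over an (H, W, C) feature map:
    global-average-pool per channel, two small dense layers with ReLU and
    sigmoid, then rescale each channel by its learned importance."""
    squeeze = features.mean(axis=(0, 1))          # (C,) per-channel average
    hidden = np.maximum(squeeze @ w1, 0.0)        # ReLU bottleneck, (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))   # sigmoid gate, (C,)
    return features * gate                        # broadcast over H and W

rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 4, 8))             # toy feature map, C = 8
w1 = rng.standard_normal((8, 2))                  # reduction ratio 4
w2 = rng.standard_normal((2, 8))
out = channel_attention(feat, w1, w2)
print(out.shape)  # (4, 4, 8)
```

Because the gate lies in (0, 1), each channel is attenuated in proportion to its estimated relevance while the spatial layout is untouched; in the paper's setting this would sit between the DenseNet121 backbone and the added dense layers.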
Affiliation(s)
- Hani Alshahrani
- Department of Computer Science, College of Computer Science and Information Systems, Najran University, Najran 66462, Saudi Arabia
- Gunjan Sharma
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India
- Vatsala Anand
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India
- Sheifali Gupta
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India
- Adel Sulaiman
- Department of Computer Science, College of Computer Science and Information Systems, Najran University, Najran 66462, Saudi Arabia
- M. A. Elmagzoub
- Department of Network and Communication Engineering, College of Computer Science and Information Systems, Najran University, Najran 61441, Saudi Arabia
- Mana Saleh Al Reshan
- Department of Information Systems, College of Computer Science and Information Systems, Najran University, Najran 66462, Saudi Arabia
- Asadullah Shaikh
- Department of Information Systems, College of Computer Science and Information Systems, Najran University, Najran 66462, Saudi Arabia
- Ahmad Taher Azar
- College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Automated Systems and Soft Computing Lab (ASSCL), Prince Sultan University, Riyadh 11586, Saudi Arabia

48
Park JH, Moon HS, Jung HI, Hwang J, Choi YH, Kim JE. Deep learning and clustering approaches for dental implant size classification based on periapical radiographs. Sci Rep 2023; 13:16856. [PMID: 37803022 PMCID: PMC10558577 DOI: 10.1038/s41598-023-42385-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2023] [Accepted: 09/09/2023] [Indexed: 10/08/2023] Open
Abstract
This study investigated two artificial intelligence (AI) methods for automatically classifying dental implant diameter and length based on periapical radiographs. The first method, deep learning (DL), utilized the pre-trained VGG16 model with varying degrees of fine-tuning to analyze image data obtained from periapical radiographs. The second method, clustering analysis, analyzed an implant-specific feature vector derived from the coordinates of three key points of the dental implant using the k-means++ algorithm, with adjustable feature-vector weights. Both the DL and clustering models classified dental implant size into nine groups. The performance metrics were accuracy, sensitivity, specificity, F1-score, positive predictive value, negative predictive value, and area under the receiver operating characteristic curve (AUC-ROC). The final DL model yielded performances above 0.994, 0.950, 0.994, 0.974, 0.952, 0.994, and 0.975, respectively, and the final clustering model yielded performances above 0.983, 0.900, 0.988, 0.923, 0.909, 0.988, and 0.947, respectively. When comparing each AI model before tuning with its final version, statistically significant performance improvements based on AUC-ROC were observed in six of nine groups for the DL model and four of nine groups for the clustering model. Both AI models showed reliable classification performance. For clinical application, the models require validation on multicenter data.
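The clustering route builds a weighted feature vector from three implant key points and assigns it to a cluster. One plausible construction is sketched below (the exact feature definition, weights, and centroids are our assumptions; the abstract does not specify them):

```python
import numpy as np

def implant_features(apex, shoulder_left, shoulder_right, weights=(1.0, 1.0)):
    """Weighted (diameter, length) proxy from three key-point coordinates:
    diameter = shoulder-to-shoulder distance,
    length   = apex to shoulder-midpoint distance."""
    apex, sl, sr = (np.asarray(p, dtype=float)
                    for p in (apex, shoulder_left, shoulder_right))
    diameter = np.linalg.norm(sr - sl)
    length = np.linalg.norm(apex - (sl + sr) / 2.0)
    return np.array([weights[0] * diameter, weights[1] * length])

def nearest_centroid(vec, centroids):
    """Assign a feature vector to the closest (e.g. k-means++) centroid."""
    dists = [np.linalg.norm(vec - c) for c in centroids]
    return int(np.argmin(dists))

# Toy key points in image coordinates: apex at origin, shoulders 10 units up.
v = implant_features(apex=(0, 0), shoulder_left=(-2, 10), shoulder_right=(2, 10))
print(v.tolist())  # [4.0, 10.0]
centroids = [np.array([4.0, 10.0]), np.array([5.0, 13.0])]
print(nearest_centroid(v, centroids))  # 0
```

Adjusting `weights` changes how strongly diameter versus length drives the cluster assignment, mirroring the paper's tuning of feature-vector weights.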
Affiliation(s)
- Ji-Hyun Park
- Department of Prosthodontics, Yonsei University College of Dentistry, Yonsei-ro 50-1, Seodaemun-gu, Seoul, 03722, Korea
- Hong Seok Moon
- Department of Prosthodontics, Yonsei University College of Dentistry, Yonsei-ro 50-1, Seodaemun-gu, Seoul, 03722, Korea
- Hoi-In Jung
- Department of Preventive Dentistry and Public Oral Health, Yonsei University College of Dentistry, Seoul, 03722, Korea
- JaeJoon Hwang
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Dental Research Institute, Pusan National University, Busan, 50612, Korea
- Yoon-Ho Choi
- School of Computer Science and Engineering, Pusan National University, Busan, 46241, Korea
- Jong-Eun Kim
- Department of Prosthodontics, Yonsei University College of Dentistry, Yonsei-ro 50-1, Seodaemun-gu, Seoul, 03722, Korea

49
Ma T, Wang H, Ye Z. Artificial intelligence applications in computed tomography in gastric cancer: a narrative review. Transl Cancer Res 2023; 12:2379-2392. [PMID: 37859746 PMCID: PMC10583011 DOI: 10.21037/tcr-23-201] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2023] [Accepted: 08/01/2023] [Indexed: 10/21/2023]
Abstract
Background and Objective: Artificial intelligence (AI) is a revolutionary technique that is deeply impacting and reshaping clinical practice in oncology. This review summarizes the current status of clinical applications of AI-based computed tomography (CT) in gastric cancer (GC), focusing on diagnosis, genetic status detection, and risk prediction of metastasis, prognosis, and treatment efficacy. Challenges and prospects for future research are also discussed.
Methods: We searched the PubMed/MEDLINE database for clinical studies published between 1990 and November 2022 that investigated AI applications in CT in GC, and summarized the major findings of the verified studies.
Key Content and Findings: AI applications in CT images have attracted considerable attention in fields such as diagnosis and prediction of metastasis risk, survival, and treatment response. These emerging techniques show high potential to outperform clinicians in diagnostic accuracy and to save time.
Conclusions: AI-powered tools show great potential to increase diagnostic accuracy and reduce radiologists' workload. However, the goal of AI is not to replace human expertise but to help oncologists make decisions in their practice. Radiologists should therefore play a predominant role in AI applications and decide how best to integrate these complementary techniques into clinical practice.
Affiliation(s)
- Tingting Ma
- Department of Radiology, Tianjin Cancer Hospital Airport Hospital, Tianjin, China
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- National Clinical Research Center for Cancer, Tianjin, China
- Tianjin’s Clinical Research Center for Cancer, Tianjin, China
- The Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Hua Wang
- Department of Radiology, Tianjin Cancer Hospital Airport Hospital, Tianjin, China
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- National Clinical Research Center for Cancer, Tianjin, China
- Tianjin’s Clinical Research Center for Cancer, Tianjin, China
- The Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Zhaoxiang Ye
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- National Clinical Research Center for Cancer, Tianjin, China
- Tianjin’s Clinical Research Center for Cancer, Tianjin, China
- The Key Laboratory of Cancer Prevention and Therapy, Tianjin, China

50
Wanjiku RN, Nderu L, Kimwele M. Improved transfer learning using textural features conflation and dynamically fine-tuned layers. PeerJ Comput Sci 2023; 9:e1601. [PMID: 37810335 PMCID: PMC10557498 DOI: 10.7717/peerj-cs.1601] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2023] [Accepted: 08/29/2023] [Indexed: 10/10/2023]
Abstract
Transfer learning reuses knowledge a model has learnt on one task to address another. However, this process works well only when the tasks are closely related. It is therefore important to select data points that are closely relevant to the previous task and to fine-tune suitable layers of the pre-trained model for effective transfer. This work utilises the least divergent textural features of the target datasets and of the pre-trained model's layers, minimising the knowledge lost during the transfer learning process. The study extends previous work on selecting data points with good textural features and on dynamically selecting layers using divergence measures by combining both into one model pipeline. Five pre-trained models (ResNet50, DenseNet169, InceptionV3, VGG16 and MobileNetV2) are evaluated on nine datasets: CIFAR-10, CIFAR-100, MNIST, Fashion-MNIST, Stanford Dogs, Caltech 256, ISIC 2016, ChestX-ray8 and MIT Indoor Scenes. Experimental results show that data points with lower textural-feature divergence and layers with more positive weights give better accuracy than other data points and layers. On CIFAR-100, the lower-divergence data points give an average improvement of 3.54% to 6.75%, while the layer selection improves accuracy by 2.42% to 13.04%. Combining the two methods gives an extra accuracy improvement of 1.56%. This combined approach shows that data points with lower divergence from the source dataset samples lead to better adaptation to the target task. The results also demonstrate that selecting layers with more positive weights reduces trial and error when choosing fine-tuning layers for pre-trained models.
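Selecting the least divergent data points can be sketched with a simple KL divergence over texture-feature histograms (the histograms and the choice of plain KL here are toy illustrations; the paper's exact divergence measure and features may differ):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete texture-feature histograms."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def least_divergent(candidates, reference, k=2):
    """Return the k candidate histograms with the lowest KL divergence
    from the reference (source-task) histogram."""
    return sorted(candidates, key=lambda h: kl_divergence(h, reference))[:k]

ref = [0.5, 0.3, 0.2]                # source-dataset texture histogram
cands = [[0.5, 0.3, 0.2],            # identical -> divergence 0
         [0.2, 0.3, 0.5],
         [0.4, 0.4, 0.2]]
best = least_divergent(cands, ref, k=1)
print(best[0])  # [0.5, 0.3, 0.2]
```

The same ranking idea extends to layers: score each candidate fine-tuning layer by a divergence (or weight-sign) criterion and keep only the best-ranked ones, instead of unfreezing layers by trial and error.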
Affiliation(s)
- Lawrence Nderu
- Computing, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya
- Michael Kimwele
- Computing, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya