1. Muhammad D, Bendechache M. Unveiling the black box: A systematic review of Explainable Artificial Intelligence in medical image analysis. Comput Struct Biotechnol J 2024; 24:542-560. PMID: 39252818; PMCID: PMC11382209; DOI: 10.1016/j.csbj.2024.08.005. Received 06/05/2024; revised 08/07/2024; accepted 08/07/2024.
Abstract
This systematic literature review examines state-of-the-art Explainable Artificial Intelligence (XAI) methods applied to medical image analysis, discussing current challenges and future research directions and exploring the evaluation metrics used to assess XAI approaches. As Machine Learning (ML) and Deep Learning (DL) become increasingly effective in medical applications, the pressure to adopt them in healthcare grows. However, their "black-box" nature, in which decisions are made without clear explanations, hinders acceptance in clinical settings where decisions carry significant medicolegal consequences. Our review highlights advanced XAI methods, identifying how they address the need for transparency and trust in ML/DL decisions. We also outline the challenges these methods face and propose future research directions to improve XAI in healthcare. This paper aims to bridge the gap between cutting-edge computational techniques and their practical application in healthcare, nurturing a more transparent, trustworthy, and effective use of AI in medical settings. The insights guide both research and industry, promoting innovation and standardisation in XAI implementation in healthcare.
Affiliation(s)
- Dost Muhammad: ADAPT Research Centre, School of Computer Science, University of Galway, Galway, Ireland
- Malika Bendechache: ADAPT Research Centre, School of Computer Science, University of Galway, Galway, Ireland
2. Xu H, Yang X, Hu Y, Wang D, Liang Z, Mu H, Wang Y, Shi L, Gao H, Song D, Cheng Z, Lu Z, Zhao X, Lu J, Wang B, Hu Z. Trusted artificial intelligence for environmental assessments: An explainable high-precision model with multi-source big data. Environmental Science and Ecotechnology 2024; 22:100479. PMID: 39286480; PMCID: PMC11402945; DOI: 10.1016/j.ese.2024.100479. Received 12/27/2023; revised 08/19/2024; accepted 08/22/2024.
Abstract
Environmental assessments are critical for ensuring the sustainable development of human civilization. The integration of artificial intelligence (AI) in these assessments has shown great promise, yet the "black box" nature of AI models often undermines trust due to the lack of transparency in their decision-making processes, even when these models demonstrate high accuracy. To address this challenge, we evaluated the performance of a transformer model against other AI approaches, utilizing extensive multivariate and spatiotemporal environmental datasets encompassing both natural and anthropogenic indicators. We further explored the application of saliency maps as a novel explainability tool in multi-source AI-driven environmental assessments, enabling the identification of individual indicators' contributions to the model's predictions. We find that the transformer model outperforms others, achieving an accuracy of about 98% and an area under the receiver operating characteristic curve (AUC) of 0.891. Regionally, the environmental assessment values are predominantly classified as level II or III in the central and southwestern study areas, level IV in the northern region, and level V in the western region. Through explainability analysis, we identify that water hardness, total dissolved solids, and arsenic concentrations are the most influential indicators in the model. Our AI-driven environmental assessment model is accurate and explainable, offering actionable insights for targeted environmental management. Furthermore, this study advances the application of AI in environmental science by presenting a robust, explainable model that bridges the gap between machine learning and environmental governance, enhancing both understanding and trust in AI-assisted environmental assessments.
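The saliency-map idea above can be illustrated with a minimal perturbation-based sketch: score each indicator by how much the prediction moves when that indicator is replaced with a baseline. The toy model and the indicator names (hardness, tds, arsenic) are assumptions for this sketch, not the paper's transformer or its data.

```python
# Illustrative perturbation-based saliency for a tabular environmental model.
# The toy model and indicator names are assumptions, not the paper's code.
def saliency(model, sample, baseline=0.0):
    """Score each indicator by the prediction shift when it is zeroed out."""
    base_pred = model(sample)
    scores = {}
    for name in sample:
        perturbed = dict(sample)
        perturbed[name] = baseline
        scores[name] = abs(base_pred - model(perturbed))
    return scores

def toy_model(x):
    # Stand-in scalar predictor; the weights mimic indicator influence.
    weights = {"hardness": 0.6, "tds": 0.3, "arsenic": 0.9}
    return sum(weights[k] * v for k, v in x.items())

scores = saliency(toy_model, {"hardness": 1.0, "tds": 1.0, "arsenic": 1.0})
```

Under this sketch, arsenic receives the largest attribution, mirroring the kind of per-indicator ranking the study reports.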
Affiliation(s)
- Haoli Xu: State Key Laboratory of Pulsed Power Laser, College of Electronic Engineering, National University of Defense Technology, Hefei, 230037, China; Jianghuai Advance Technology Center, Hefei, 230000, China; Key Laboratory of Electronic Restriction of Anhui Province, Hefei, 230037, China
- Xing Yang: State Key Laboratory of Pulsed Power Laser, College of Electronic Engineering, National University of Defense Technology, Hefei, 230037, China; Key Laboratory of Electronic Restriction of Anhui Province, Hefei, 230037, China
- Yihua Hu: State Key Laboratory of Pulsed Power Laser, College of Electronic Engineering, National University of Defense Technology, Hefei, 230037, China; Key Laboratory of Electronic Restriction of Anhui Province, Hefei, 230037, China
- Daqing Wang: Defense Engineering College, Army Engineering University of PLA, Nanjing, 210007, China
- Zhenyu Liang: State Key Laboratory of Pulsed Power Laser, College of Electronic Engineering, National University of Defense Technology, Hefei, 230037, China; Key Laboratory of Electronic Restriction of Anhui Province, Hefei, 230037, China
- Hua Mu: State Key Laboratory of Pulsed Power Laser, College of Electronic Engineering, National University of Defense Technology, Hefei, 230037, China; Key Laboratory of Electronic Restriction of Anhui Province, Hefei, 230037, China
- Yangyang Wang: State Key Laboratory of Pulsed Power Laser, College of Electronic Engineering, National University of Defense Technology, Hefei, 230037, China; Key Laboratory of Electronic Restriction of Anhui Province, Hefei, 230037, China
- Liang Shi: Jianghuai Advance Technology Center, Hefei, 230000, China
- Haoqi Gao: State Key Laboratory of Pulsed Power Laser, College of Electronic Engineering, National University of Defense Technology, Hefei, 230037, China; Key Laboratory of Electronic Restriction of Anhui Province, Hefei, 230037, China
- Daoqing Song: International Studies College, National University of Defense Technology, Nanjing, 210000, China
- Zijian Cheng: Defense Engineering College, Army Engineering University of PLA, Nanjing, 210007, China
- Zhao Lu: Defense Engineering College, Army Engineering University of PLA, Nanjing, 210007, China
- Xiaoning Zhao: Defense Engineering College, Army Engineering University of PLA, Nanjing, 210007, China
- Jun Lu: State Key Laboratory of Pulsed Power Laser, College of Electronic Engineering, National University of Defense Technology, Hefei, 230037, China; Key Laboratory of Electronic Restriction of Anhui Province, Hefei, 230037, China
- Bingwen Wang: State Key Laboratory of Pulsed Power Laser, College of Electronic Engineering, National University of Defense Technology, Hefei, 230037, China; Key Laboratory of Electronic Restriction of Anhui Province, Hefei, 230037, China
- Zhiyang Hu: School of Electrical Engineering and Automation, Hefei University of Technology, Hefei, 230009, China
3. Ng Yin Ling C, Zhu X, Ang M. Artificial intelligence in myopia in children: current trends and future directions. Curr Opin Ophthalmol 2024; 35:463-471. PMID: 39259652; DOI: 10.1097/icu.0000000000001086.
Abstract
PURPOSE OF REVIEW Myopia is one of the major causes of visual impairment globally, and the condition and its complications place a heavy healthcare and economic burden. With most cases of myopia developing during childhood, interventions to slow its progression are most effective when implemented early. To address this public health challenge, artificial intelligence has emerged as a potential solution for childhood myopia management. RECENT FINDINGS Artificial intelligence research in childhood myopia was previously focused on traditional machine learning models for identifying children at high risk of myopia progression. Recently, there has been a surge of literature drawing on larger datasets, greater computational power, and more complex models, leveraging artificial intelligence for novel approaches including large-scale myopia screening using big data, multimodal data and advancing imaging technology for tracking myopia progression, and deep learning models for precision treatment. SUMMARY Artificial intelligence holds significant promise in transforming childhood myopia management. Novel artificial intelligence modalities, including automated machine learning, large language models, and federated learning, could play an important role in the future by delivering precision medicine, improving health literacy, and preserving data privacy. However, along with these technological advancements come practical challenges, including regulation and clinical integration.
Affiliation(s)
- Xiangjia Zhu: Eye Institute and Department of Ophthalmology, Eye & ENT Hospital, Fudan University; NHC Key Laboratory of Myopia and Related Eye Diseases; Key Laboratory of Myopia and Related Eye Diseases, Chinese Academy of Medical Sciences; Shanghai Key Laboratory of Visual Impairment and Restoration, Shanghai, China
- Marcus Ang: Singapore National Eye Centre, Singapore; Singapore Eye Research Institute; Department of Ophthalmology and Visual Sciences, Duke-NUS Medical School, Singapore
4. Sengupta PP, Dey D, Davies RH, Duchateau N, Yanamala N. Challenges for augmenting intelligence in cardiac imaging. Lancet Digit Health 2024; 6:e739-e748. PMID: 39214759; DOI: 10.1016/s2589-7500(24)00142-0. Received 11/04/2023; revised 05/15/2024; accepted 06/17/2024.
Abstract
Artificial Intelligence (AI), through deep learning, has brought automation and predictive capabilities to cardiac imaging. However, despite considerable investment, tangible health-care cost reductions remain unproven. Although AI holds promise, there has been insufficient time for both methodological development and prospective clinical trials to establish its advantage over human interpretations in terms of its effect on patient outcomes. Challenges such as data scarcity, privacy issues, and ethical concerns impede optimal AI training. Furthermore, the absence of a unified model for the complex structure and function of the heart and evolving domain knowledge can introduce heuristic biases and influence underlying assumptions in model development. Integrating AI into diverse institutional picture archiving and communication systems and devices also presents a clinical hurdle. This hurdle is further compounded by an absence of high-quality labelled data, difficulty sharing data between institutions, and non-uniform and inadequate gold standards for external validations and comparisons of model performance in real-world settings. Nevertheless, there is a strong push in industry and academia for AI solutions in medical imaging. This Series paper reviews key studies and identifies challenges that require a pragmatic change in the approach for using AI for cardiac imaging, whereby AI is viewed as augmented intelligence to complement, not replace, human judgement. The focus should shift from isolated measurements to integrating non-linear and complex data towards identifying disease phenotypes, emphasising pattern recognition where AI excels. Algorithms should enhance imaging reports, enriching patients' understanding, communication between patients and clinicians, and shared decision making. The emergence of professional standards and guidelines is essential to address these developments and ensure the safe and effective integration of AI in cardiac imaging.
Affiliation(s)
- Partho P Sengupta: Division of Cardiovascular Disease and Hypertension, Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, USA
- Damini Dey: Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Rhodri H Davies: Institute of Cardiovascular Science, University College London, London, UK
- Nicolas Duchateau: CREATIS, INSA, CNRS UMR 5220, INSERM U1294, Université Lyon 1, UJM Saint-Etienne, Lyon, France; Institut Universitaire de France, Paris, France
- Naveena Yanamala: Division of Cardiovascular Disease and Hypertension, Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, USA
5. Bajaj S, Bala M, Angurala M. A comparative analysis of different augmentations for brain images. Med Biol Eng Comput 2024; 62:3123-3150. PMID: 38782880; DOI: 10.1007/s11517-024-03127-7. Received 10/25/2023; accepted 05/10/2024.
Abstract
Deep learning (DL) requires large amounts of training data to perform well and to avoid overfitting. When data are scarce, the effective size of the training dataset can be increased through augmentation. Augmentation approaches must enhance the model's performance during the learning period. Several types of transformations can be applied to medical images, either to the entire dataset or to a subset of the data, depending on the desired outcome. In this study, we categorize data augmentation methods into four groups: absent augmentation, where no modifications are made; basic augmentation, which includes brightness and contrast adjustments; intermediate augmentation, encompassing a wider array of transformations such as rotation, flipping, and shifting in addition to brightness and contrast adjustments; and advanced augmentation, where all transformation layers are employed. We plan to conduct a comprehensive analysis to determine which group performs best when applied to brain CT images. This evaluation aims to identify the augmentation group that produces the most favorable results in terms of improving model accuracy, minimizing diagnostic errors, and ensuring the robustness of the model in the context of brain CT image analysis.
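The four augmentation groups can be thought of as increasingly long pipelines of transforms. A minimal pure-Python sketch follows; the specific transforms and the tiny 2x2 "image" are illustrative assumptions, not the study's implementation.

```python
# Illustrative augmentation groups as transform pipelines. The concrete
# transforms (brightness shift, flip, rotation) stand in for the richer
# sets the study evaluates on brain CT images.
def adjust_brightness(img, delta):
    return [[min(255, max(0, px + delta)) for px in row] for row in img]

def hflip(img):
    return [row[::-1] for row in img]

def rotate90(img):
    # Rotate a 2-D grid 90 degrees clockwise.
    return [list(row) for row in zip(*img[::-1])]

GROUPS = {
    "absent": [],
    "basic": [lambda im: adjust_brightness(im, 10)],
    "intermediate": [lambda im: adjust_brightness(im, 10), hflip],
    "advanced": [lambda im: adjust_brightness(im, 10), hflip, rotate90],
}

def augment(img, group):
    for transform in GROUPS[group]:
        img = transform(img)
    return img

img = [[0, 100], [200, 255]]
```

Each group applies every transform of the groups below it plus its own, matching the nested structure described in the abstract.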
Affiliation(s)
- Shilpa Bajaj: Applied Sciences (Computer Applications), I.K. Gujral Punjab Technical University, Jalandhar, Kapurthala, India
- Manju Bala: Department of Computer Science and Engineering, Khalsa College of Engineering and Technology, Amritsar, India
- Mohit Angurala: Apex Institute of Technology (CSE), Chandigarh University, Gharuan, Mohali, Punjab, India
6. Mamalakis M, Macfarlane SC, Notley SV, Gad AKB, Panoutsos G. A novel pipeline employing deep multi-attention channels network for the autonomous detection of metastasizing cells through fluorescence microscopy. Comput Biol Med 2024; 181:109052. PMID: 39216406; DOI: 10.1016/j.compbiomed.2024.109052. Received 01/06/2024; revised 08/09/2024; accepted 08/20/2024.
Abstract
Metastasis driven by cancer cell migration is the leading cause of cancer-related deaths. It involves significant changes in the organization of the cytoskeleton, which includes the actin microfilaments and the vimentin intermediate filaments. Understanding how these filaments change cells from normal to invasive offers insights that can be used to improve cancer diagnosis and therapy. We have developed a computational, transparent, large-scale, imaging-based pipeline that can distinguish between normal human cells and their isogenically matched, oncogenically transformed, invasive and metastasizing counterparts, based on the spatial organization of actin and vimentin filaments in the cell cytoplasm. Because of the intricacy of these subcellular structures, their annotation is not trivial to automate. We used established deep learning methods together with our new multi-attention channel architecture. To ensure a high level of interpretability of the network, which is crucial for the application area, we developed an interpretable global explanation approach that correlates the weighted geometric mean of the total cell images with their local GradCAM scores. The methods offer a detailed, objective and measurable understanding of how different components of the cytoskeleton contribute to metastasis, insights that can be used for the future development of novel diagnostic tools, such as a nanometer-level, vimentin filament-based biomarker for digital pathology, and of new treatments that can significantly increase patient survival.
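The global aggregation step described above rests on a weighted geometric mean. A minimal sketch of that statistic follows; the scores and weights are illustrative values, not GradCAM outputs from the paper.

```python
import math

# Illustrative weighted geometric mean, the aggregation statistic named in
# the abstract. Inputs must be positive; values here are made up.
def weighted_geometric_mean(scores, weights):
    """Compute exp(sum(w_i * ln(s_i)) / sum(w_i)) for positive scores."""
    if len(scores) != len(weights):
        raise ValueError("scores and weights must align")
    total_weight = sum(weights)
    log_sum = sum(w * math.log(s) for s, w in zip(scores, weights))
    return math.exp(log_sum / total_weight)

gm = weighted_geometric_mean([0.25, 0.5, 1.0], [1.0, 1.0, 2.0])
```

Unlike an arithmetic mean, the geometric mean is dominated by small scores, so one weakly supported image pulls the global explanation score down sharply.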
Affiliation(s)
- Michail Mamalakis: School of Electrical and Electronic Engineering, University of Sheffield, Sheffield, UK; Insigneo Institute for in-silico Medicine, University of Sheffield, Sheffield, UK; Department of Infection, Immunity and Cardiovascular Disease, and Department of Computer Science, Sheffield, UK; Department of Psychiatry, Cambridge University, Cambridge, UK
- Sarah C Macfarlane: Department of Oncology and Metabolism, The Medical School, University of Sheffield, Sheffield, UK
- Scott V Notley: Insigneo Institute for in-silico Medicine, University of Sheffield, Sheffield, UK; Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, UK
- Annica K B Gad: Insigneo Institute for in-silico Medicine, University of Sheffield, Sheffield, UK; Department of Oncology and Metabolism, The Medical School, University of Sheffield, Sheffield, UK; Madeira Chemistry Research Centre, University of Madeira, Funchal, Portugal; Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden
- George Panoutsos: School of Electrical and Electronic Engineering, University of Sheffield, Sheffield, UK; Insigneo Institute for in-silico Medicine, University of Sheffield, Sheffield, UK; Department of Oncology and Metabolism, The Medical School, University of Sheffield, Sheffield, UK
7. Zhou H, Lin S, Watson M, Bernadt CT, Zhang O, Liao L, Govindan R, Cote RJ, Yang C. Length-scale study in deep learning prediction for non-small cell lung cancer brain metastasis. Sci Rep 2024; 14:22328. PMID: 39333630; PMCID: PMC11436900; DOI: 10.1038/s41598-024-73428-2. Received 07/01/2024; accepted 09/17/2024.
Abstract
Deep learning-assisted digital pathology has demonstrated the potential to profoundly impact clinical practice, even surpassing human pathologists in performance. However, as deep neural network (DNN) architectures grow in size and complexity, their explainability decreases, posing challenges in interpreting pathology features for broader clinical insights into physiological diseases. To better assess the interpretability of digital microscopic images and to guide future microscopic system design, we developed a novel method to study the predictive feature length-scale that underpins a DNN's predictive power. We applied this method to analyze a DNN's capability in predicting brain metastasis from early-stage non-small-cell lung cancer biopsy slides. This study quantifies the DNN's attention for brain metastasis prediction, targeting features at both the cellular and tissue scales in H&E-stained histological whole-slide images. At the cellular scale, the predictive power of the DNN progressively increases with higher resolution and decreases significantly when the resolvable feature length exceeds 5 microns. Additionally, the DNN uses more macro-scale features associated with tissue architecture and is optimized when assessing visual fields greater than 41 microns. Our study computes the length-scale requirements for optimal DNN learning on digital whole-slide microscopic images, promising to guide future optical microscope designs in pathology applications and to facilitate downstream deep learning analysis.
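A resolvable-feature-length experiment of this kind can be emulated by block-averaging an image down to coarser resolution before feeding it to a model. A minimal sketch follows; the toy grid and block size are assumptions, not the study's data or pipeline.

```python
# Illustrative downsampling by non-overlapping block averaging, one simple way
# to simulate a coarser resolvable feature length in a length-scale study.
def block_average(img, k):
    """Downsample a 2-D grid by averaging each k x k block."""
    h, w = len(img), len(img[0])
    assert h % k == 0 and w % k == 0, "grid must tile evenly into k x k blocks"
    out = []
    for i in range(0, h, k):
        row = []
        for j in range(0, w, k):
            block = [img[i + di][j + dj] for di in range(k) for dj in range(k)]
            row.append(sum(block) / (k * k))
        out.append(row)
    return out

coarse = block_average([[0, 2, 4, 6], [2, 4, 6, 8], [4, 6, 8, 10], [6, 8, 10, 12]], 2)
```

Sweeping the block size k while tracking model performance traces out the kind of resolution-versus-predictive-power curve the study reports.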
Affiliation(s)
- Haowen Zhou: Department of Electrical Engineering, California Institute of Technology, Pasadena, CA, 91125, USA
- Siyu Lin: Department of Electrical Engineering, California Institute of Technology, Pasadena, CA, 91125, USA
- Mark Watson: Department of Pathology and Immunology, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Cory T Bernadt: Department of Pathology and Immunology, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Oumeng Zhang: Department of Electrical Engineering, California Institute of Technology, Pasadena, CA, 91125, USA
- Ling Liao: Department of Pathology and Immunology, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Ramaswamy Govindan: Department of Medicine, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Richard J Cote: Department of Pathology and Immunology, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Changhuei Yang: Department of Electrical Engineering, California Institute of Technology, Pasadena, CA, 91125, USA
8. Li Y, Cai P, Huang Y, Yu W, Liu Z, Liu P. Deep learning based detection and classification of fetal lip in ultrasound images. J Perinat Med 2024; 52:769-777. PMID: 39028804; DOI: 10.1515/jpm-2024-0122. Received 03/19/2024; accepted 07/07/2024.
Abstract
OBJECTIVES Fetal cleft lip is a common congenital defect. Given the delicacy and difficulty of observing fetal lips, we have utilized deep learning technology to develop a new model for quickly and accurately assessing the development of fetal lips during prenatal examinations. The model detects fetal lips in ultrasound images and classifies them, aiming to provide a more objective prediction of fetal lip development. METHODS This study included 632 pregnant women in their mid-pregnancy stage who underwent ultrasound examinations of the fetal lips, yielding both normal and abnormal fetal lip ultrasound images. To improve the accuracy of detection and classification, we proposed and validated the Yolov5-ECA model. RESULTS Compared with 10 currently popular models, our model achieved the best results in the detection and classification of fetal lips. For detection, the mean average precision (mAP) at 0.5 and mAP at 0.5:0.95 were 0.920 and 0.630, respectively. In classifying fetal lip ultrasound images, the accuracy reached 0.925. CONCLUSIONS The deep learning algorithm achieves accuracy consistent with manual evaluation in detecting and classifying fetal lips. This automated recognition technology can provide a powerful tool for inexperienced young doctors, helping them accurately examine and diagnose fetal lips.
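The mAP figures above rest on intersection-over-union (IoU) matching between predicted and ground-truth boxes: at mAP@0.5, a detection counts as a true positive when its IoU with a ground-truth box is at least 0.5. A minimal sketch of the IoU computation follows; the (x1, y1, x2, y2) box format is an assumption for illustration.

```python
# Illustrative IoU between two axis-aligned boxes in (x1, y1, x2, y2) form,
# the matching criterion underlying mAP@0.5 and mAP@0.5:0.95.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two half-overlapping 10 x 10 boxes share one third of their union.
score = iou((0, 0, 10, 10), (5, 0, 15, 10))
```

mAP@0.5:0.95 simply averages the average precision over IoU thresholds from 0.5 to 0.95 in steps of 0.05, so it rewards tighter localization than mAP@0.5 alone.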
Affiliation(s)
- Yapeng Li: School of Medicine, Huaqiao University, Quanzhou, China
- Peiya Cai: Department of Gynecology and Obstetrics, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
- Yubing Huang: Department of Ultrasound, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
- Weifeng Yu: Department of Ultrasound, Quanzhou First Hospital Affiliated to Fujian Medical University, Quanzhou, China
- Zhonghua Liu: Department of Ultrasound, Quanzhou First Hospital Affiliated to Fujian Medical University, Quanzhou, China
- Peizhong Liu: School of Medicine, Huaqiao University, Quanzhou, China; College of Engineering, Huaqiao University, Quanzhou, China
9. Pan Y, Gou F, Xiao C, Liu J, Zhou J. Semi-supervised recognition for artificial intelligence assisted pathology image diagnosis. Sci Rep 2024; 14:21984. PMID: 39304708; DOI: 10.1038/s41598-024-70750-7. Received 04/28/2024; accepted 08/20/2024.
Abstract
The analysis and interpretation of cytopathological images are crucial in modern medical diagnostics. However, manually locating and identifying relevant cells from the vast amount of image data can be a daunting task, particularly in developing countries where medical expertise for such tasks may be scarce. Because acquiring large amounts of high-quality labelled data remains challenging, many researchers have begun to use semi-supervised learning methods to learn from unlabelled data. Although current semi-supervised learning models partially address the issue of limited labelled data, they are inefficient in exploiting unlabelled samples. To address this, we introduce a new AI-assisted semi-supervised scheme, the Reliable-Unlabeled Semi-Supervised Segmentation (RU3S) model. This model integrates the ResUNet-SE-ASPP-Attention (RSAA) model, which combines the Squeeze-and-Excitation (SE) network, the Atrous Spatial Pyramid Pooling (ASPP) structure, an attention module, and the ResUNet architecture. Our model leverages unlabelled data effectively, improving accuracy significantly. A novel confidence filtering strategy is introduced to make better use of unlabelled samples, addressing the scarcity of labelled data. Experimental results show a 2.0% improvement in mIoU accuracy over ST, the current state-of-the-art semi-supervised segmentation model, demonstrating our approach's effectiveness in solving this medical problem.
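A confidence filtering strategy of the kind mentioned can be sketched as thresholding pseudo-label probabilities: keep only unlabelled samples the model is highly confident about, and train on those as if they were labelled. This is an illustrative simplification with an assumed threshold and prediction format, not the RU3S implementation.

```python
# Illustrative confidence-based filtering of pseudo-labels for unlabelled
# samples. The 0.9 threshold and (sample_id, probabilities) format are
# assumptions for this sketch.
def filter_pseudo_labels(predictions, threshold=0.9):
    """Keep samples whose top predicted class probability meets the threshold."""
    kept = []
    for sample_id, probs in predictions:
        confidence = max(probs)
        if confidence >= threshold:
            label = probs.index(confidence)
            kept.append((sample_id, label, confidence))
    return kept

preds = [("a", [0.95, 0.05]), ("b", [0.6, 0.4]), ("c", [0.08, 0.92])]
reliable = filter_pseudo_labels(preds)
```

Sample "b" is dropped as unreliable, so only confidently pseudo-labelled samples feed back into training, which is the intuition behind making better use of unlabelled data.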
Affiliation(s)
- Yao Pan: School of Computer Science, Jiangxi University of Traditional Chinese Medicine, Nanchang, 330004, China
- Fangfang Gou: State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, 550025, China
- Chunwen Xiao: The Second People's Hospital of Huaihua, Huaihua, 418000, China
- Jun Liu: The Second People's Hospital of Huaihua, Huaihua, 418000, China
- Jing Zhou: Hunan University of Medicine General Hospital, Huaihua, 418000, China
10. Li Y, Zhuo Z, Weng J, Haller S, Bai HX, Li B, Liu X, Zhu M, Wang Z, Li J, Qiu X, Liu Y. A deep learning model for differentiating paediatric intracranial germ cell tumour subtypes and predicting survival with MRI: a multicentre prospective study. BMC Med 2024; 22:375. PMID: 39256746; PMCID: PMC11389594; DOI: 10.1186/s12916-024-03575-w. Received 05/07/2024; accepted 08/20/2024.
Abstract
BACKGROUND The pretherapeutic differentiation of subtypes of primary intracranial germ cell tumours (iGCTs), including germinomas (GEs) and nongerminomatous germ cell tumours (NGGCTs), is essential for clinical practice because of distinct treatment strategies and prognostic profiles of these diseases. This study aimed to develop a deep learning model, iGNet, to assist in the differentiation and prognostication of iGCT subtypes by employing pretherapeutic MR T2-weighted imaging. METHODS The iGNet model, which is based on the nnUNet architecture, was developed using a retrospective dataset of 280 pathologically confirmed iGCT patients. The training dataset included 83 GEs and 117 NGGCTs, while the retrospective internal test dataset included 31 GEs and 49 NGGCTs. The model's diagnostic performance was then assessed with the area under the receiver operating characteristic curve (AUC) in a prospective internal dataset (n = 22) and two external datasets (n = 22 and 20). Next, we compared the diagnostic performance of six neuroradiologists with or without the assistance of iGNet. Finally, the predictive ability of the output of iGNet for progression-free and overall survival was assessed and compared to that of the pathological diagnosis. RESULTS iGNet achieved high diagnostic performance, with AUCs between 0.869 and 0.950 across the four test datasets. With the assistance of iGNet, the six neuroradiologists' diagnostic AUCs (averages of the four test datasets) increased by 9.22% to 17.90%. There was no significant difference between the output of iGNet and the results of pathological diagnosis in predicting progression-free and overall survival (P = .889). CONCLUSIONS By leveraging pretherapeutic MR imaging data, iGNet accurately differentiates iGCT subtypes, facilitating prognostic evaluation and increasing the potential for tailored treatment.
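An AUC such as those reported for iGNet can be computed as the probability that a randomly chosen positive case is ranked above a randomly chosen negative case, with ties counting half. A minimal sketch follows; the labels and scores are illustrative, not iGNet outputs.

```python
# Illustrative ROC AUC via the rank (Mann-Whitney) formulation: the fraction
# of positive/negative pairs the scores order correctly, ties counting half.
def roc_auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

auc = roc_auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2])
```

This pairwise view makes clear why AUC is insensitive to the choice of decision threshold: it depends only on how the two classes are ordered by the score.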
Affiliation(s)
- Yanong Li: Department of Radiation Oncology, Beijing Tiantan Hospital, Capital Medical University, Beijing, 100070, China; Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, 100070, China
- Zhizheng Zhuo: Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, 100070, China
- Jinyuan Weng: Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, 100070, China
- Sven Haller: UCL Institutes of Neurology and Healthcare Engineering, London, WC1E 6BT, UK
- Harrison X Bai: Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, 21287, USA
- Bo Li: Department of Radiation Oncology, Beijing Tiantan Hospital, Capital Medical University, Beijing, 100070, China
- Xing Liu: Department of Neuropathology, Beijing Neurosurgery Institute, Beijing, 100070, China
- Mingwang Zhu: Department of Radiology, Beijing Sanbo Hospital, Capital Medical University, Beijing, 100093, China
- Zheng Wang: Department of Radiation Oncology, Tianjin Huanhu Hospital, Tianjin Medical University, Tianjin, 300350, China
- Jane Li: Department of Radiology, New York Presbyterian, Lower Manhattan Hospital, New York, NY, 10038, USA
- Xiaoguang Qiu: Department of Radiation Oncology, Beijing Tiantan Hospital, Capital Medical University, Beijing, 100070, China
- Yaou Liu: Department of Radiology, Beijing Tiantan Hospital, Capital Medical University, Beijing, 100070, China
11. Li Z, Gao J, Zhou H, Li X, Zheng T, Lin F, Wang X, Chu T, Wang Q, Wang S, Cao K, Liang Y, Zhao F, Xie H, Xu C, Zhang H, Niu Q, Ma H, Mao N. Multiregional dynamic contrast-enhanced MRI-based integrated system for predicting pathological complete response of axillary lymph node to neoadjuvant chemotherapy in breast cancer: multicentre study. EBioMedicine 2024; 107:105311. PMID: 39191174; PMCID: PMC11400626; DOI: 10.1016/j.ebiom.2024.105311. Received 04/04/2024; revised 08/11/2024; accepted 08/12/2024.
Abstract
BACKGROUND The accurate evaluation of axillary lymph node (ALN) response to neoadjuvant chemotherapy (NAC) in breast cancer holds great value. This study aimed to develop an artificial intelligence system utilising multiregional dynamic contrast-enhanced MRI (DCE-MRI) and clinicopathological characteristics to predict axillary pathological complete response (pCR) after NAC in breast cancer. METHODS This study included retrospective and prospective datasets from six medical centres in China between May 2018 and December 2023. A fully automated integrated system based on deep learning (FAIS-DL) was built to perform tumour and ALN segmentation and axillary pCR prediction sequentially. The predictive performance of FAIS-DL was assessed using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity. RNA sequencing analysis was conducted on 45 patients to explore the biological basis of FAIS-DL. FINDINGS 1145 patients (mean age, 50 years ±10 [SD]) were evaluated. Among these patients, 506 were in the training and validation sets (axillary pCR rate of 40.3%), 127 in the internal test set (axillary pCR rate of 37.8%), 414 in the pooled external test set (axillary pCR rate of 48.8%), and 98 in the prospective test set (axillary pCR rate of 43.9%). For predicting axillary pCR, FAIS-DL achieved AUCs of 0.95, 0.93, and 0.94 in the internal test set, pooled external test set, and prospective test set, respectively, which were significantly higher than those of the clinical model and deep learning models based on single-regional DCE-MRI (all P < 0.05, DeLong test). In the pooled external and prospective test sets, FAIS-DL decreased the unnecessary axillary lymph node dissection rate from 47.9% to 6.8%, and increased the benefit rate from 52.2% to 86.5%. RNA sequencing analysis revealed that high FAIS-DL scores were associated with the upregulation of immune-mediated genes and pathways.
INTERPRETATION FAIS-DL has demonstrated satisfactory performance in predicting axillary pCR, which may guide the formulation of personalised treatment regimens for patients with breast cancer in clinical practice. FUNDING This study was supported by the National Natural Science Foundation of China (82371933), National Natural Science Foundation of Shandong Province of China (ZR2021MH120), Mount Taishan Scholars and Young Experts Program (tsqn202211378), Key Projects of China Medicine Education Association (2022KTM030), China Postdoctoral Science Foundation (314730), and Beijing Postdoctoral Research Foundation (2023-zz-012).
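The study above reports model performance as AUCs compared with DeLong's test. As a reminder of what is being computed, the AUC equals the probability that a randomly chosen positive case is scored above a randomly chosen negative case — the same rank statistic DeLong's test compares between models. A minimal pure-Python sketch (the scores and labels below are made-up illustrative values, not study data):

```python
def auc(scores, labels):
    """AUC as the Mann-Whitney probability that a random positive
    case receives a higher score than a random negative case.
    Ties count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical model scores and axillary pCR labels (1 = pCR achieved)
scores = [0.9, 0.8, 0.75, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,    1,   0,   0,   1,   0]
print(auc(scores, labels))  # 0.75 for this toy example
```

A perfect ranking gives AUC 1.0; chance-level ranking gives 0.5, which is the baseline the reported 0.93-0.95 values should be read against.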
Affiliation(s)
- Ziyin Li
- School of Medical Imaging, Binzhou Medical University, No. 346 Guanhai Road, Yantai, Shandong, 264003, China; Department of Radiology, Yantai Yuhuangding Hospital, Qingdao University, Yantai, Shandong, 264000, China
- Jing Gao
- Department of Radiology, Yantai Yuhuangding Hospital, Qingdao University, Yantai, Shandong, 264000, China
- Heng Zhou
- School of Information and Electronic Engineering, Shandong Technology and Business University, Yantai, Shandong, 264005, China
- Xianglin Li
- School of Medical Imaging, Binzhou Medical University, No. 346 Guanhai Road, Yantai, Shandong, 264003, China
- Tiantian Zheng
- School of Medical Imaging, Binzhou Medical University, No. 346 Guanhai Road, Yantai, Shandong, 264003, China; Department of Radiology, Yantai Yuhuangding Hospital, Qingdao University, Yantai, Shandong, 264000, China
- Fan Lin
- Department of Radiology, Yantai Yuhuangding Hospital, Qingdao University, Yantai, Shandong, 264000, China
- Xiaodong Wang
- School of Medical Imaging, Binzhou Medical University, No. 346 Guanhai Road, Yantai, Shandong, 264003, China; Department of Radiology, Yantai Yuhuangding Hospital, Qingdao University, Yantai, Shandong, 264000, China
- Tongpeng Chu
- Big Data and Artificial Intelligence Laboratory, Yantai Yuhuangding Hospital, Qingdao University, Yantai, Shandong, 264000, China; Shandong Provincial Key Medical and Health Laboratory of Intelligent Diagnosis and Treatment for Women's Diseases, Yantai Yuhuangding Hospital, Qingdao University, Yantai, Shandong, 264000, China
- Qi Wang
- Big Data and Artificial Intelligence Laboratory, Yantai Yuhuangding Hospital, Qingdao University, Yantai, Shandong, 264000, China; Shandong Provincial Key Medical and Health Laboratory of Intelligent Diagnosis and Treatment for Women's Diseases, Yantai Yuhuangding Hospital, Qingdao University, Yantai, Shandong, 264000, China
- Simin Wang
- Department of Radiology, Fudan University Cancer Center, Shanghai, 200433, China
- Kun Cao
- Department of Radiology, Beijing Cancer Hospital, Beijing, 100142, China
- Yun Liang
- Department of Radiology, Guilin Municipal Hospital of Traditional Chinese Medicine, Guilin, Yunnan, 541002, China
- Feng Zhao
- School of Computer Science and Technology, Shandong Technology and Business University, Yantai, Shandong, 264005, China
- Haizhu Xie
- Department of Radiology, Yantai Yuhuangding Hospital, Qingdao University, Yantai, Shandong, 264000, China
- Cong Xu
- Physical Examination Center, Yantai Yuhuangding Hospital, Qingdao University, Yantai, Shandong, 264000, China
- Haicheng Zhang
- Big Data and Artificial Intelligence Laboratory, Yantai Yuhuangding Hospital, Qingdao University, Yantai, Shandong, 264000, China; Shandong Provincial Key Medical and Health Laboratory of Intelligent Diagnosis and Treatment for Women's Diseases, Yantai Yuhuangding Hospital, Qingdao University, Yantai, Shandong, 264000, China
- Qingliang Niu
- Weifang No. 2 People's Hospital, Weifang, Shandong, 261041, China
- Heng Ma
- Department of Radiology, Yantai Yuhuangding Hospital, Qingdao University, Yantai, Shandong, 264000, China
- Ning Mao
- Department of Radiology, Yantai Yuhuangding Hospital, Qingdao University, Yantai, Shandong, 264000, China
12
Alzubaidi L, Al-Dulaimi K, Salhi A, Alammar Z, Fadhel MA, Albahri AS, Alamoodi AH, Albahri OS, Hasan AF, Bai J, Gilliland L, Peng J, Branni M, Shuker T, Cutbush K, Santamaría J, Moreira C, Ouyang C, Duan Y, Manoufali M, Jomaa M, Gupta A, Abbosh A, Gu Y. Comprehensive review of deep learning in orthopaedics: Applications, challenges, trustworthiness, and fusion. Artif Intell Med 2024; 155:102935. [PMID: 39079201 DOI: 10.1016/j.artmed.2024.102935]
Abstract
Deep learning (DL) in orthopaedics has gained significant attention in recent years. Previous studies have shown that DL can be applied to a wide variety of orthopaedic tasks, including fracture detection, bone tumour diagnosis, implant recognition, and evaluation of osteoarthritis severity. The utilisation of DL is expected to increase, owing to its ability to present accurate diagnoses more efficiently than traditional methods in many scenarios. This reduces the time and cost of diagnosis for patients and orthopaedic surgeons. To our knowledge, no exclusive study has comprehensively reviewed all aspects of DL currently used in orthopaedic practice. This review addresses this knowledge gap using articles from Science Direct, Scopus, IEEE Xplore, and Web of Science between 2017 and 2023. The authors begin with the motivation for using DL in orthopaedics, including its ability to enhance diagnosis and treatment planning. The review then covers various applications of DL in orthopaedics, including fracture detection, detection of supraspinatus tears using MRI, osteoarthritis, prediction of types of arthroplasty implants, bone age assessment, and detection of joint-specific soft tissue disease. We also examine the challenges for implementing DL in orthopaedics, including the scarcity of data to train DL and the lack of interpretability, as well as possible solutions to these common pitfalls. Our work highlights the requirements to achieve trustworthiness in the outcomes generated by DL, including the need for accuracy, explainability, and fairness in the DL models. We pay particular attention to fusion techniques as one of the ways to increase trustworthiness, which have also been used to address the common multimodality in orthopaedics. Finally, we have reviewed the approval requirements set forth by the US Food and Drug Administration to enable the use of DL applications. 
As such, we aim to have this review function as a guide for researchers to develop a reliable DL application for orthopaedic tasks from scratch for use in the market.
Affiliation(s)
- Laith Alzubaidi
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia; QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development Department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Khamael Al-Dulaimi
- Computer Science Department, College of Science, Al-Nahrain University, Baghdad, Baghdad 10011, Iraq; School of Electrical Engineering and Robotics, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Asma Salhi
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development Department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Zaenab Alammar
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Mohammed A Fadhel
- Research and Development Department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- A S Albahri
- Technical College, Imam Ja'afar Al-Sadiq University, Baghdad, Iraq
- A H Alamoodi
- Institute of Informatics and Computing in Energy, Universiti Tenaga Nasional, Kajang 43000, Malaysia
- O S Albahri
- Australian Technical and Management College, Melbourne, Australia
- Amjad F Hasan
- Faculty of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
- Jinshuai Bai
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia; QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Luke Gilliland
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development Department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Jing Peng
- Research and Development Department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Marco Branni
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development Department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Tristan Shuker
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; St Andrew's War Memorial Hospital, Brisbane, QLD 4000, Australia
- Kenneth Cutbush
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; St Andrew's War Memorial Hospital, Brisbane, QLD 4000, Australia
- Jose Santamaría
- Department of Computer Science, University of Jaén, Jaén 23071, Spain
- Catarina Moreira
- Data Science Institute, University of Technology Sydney, Australia
- Chun Ouyang
- School of Information Systems, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Ye Duan
- School of Computing, Clemson University, Clemson, SC 29631, USA
- Mohamed Manoufali
- CSIRO, Kensington, WA 6151, Australia; School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, QLD 4067, Australia
- Mohammad Jomaa
- QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; St Andrew's War Memorial Hospital, Brisbane, QLD 4000, Australia
- Ashish Gupta
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia; QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia; Research and Development Department, Akunah Med Technology Pty Ltd Co, Brisbane, QLD 4120, Australia
- Amin Abbosh
- School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, QLD 4067, Australia
- Yuantong Gu
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia; QUASR/ARC Industrial Transformation Training Centre-Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia
13
Srinivasu PN, Ahmed S, Hassaballah M, Almusallam N. An explainable Artificial Intelligence software system for predicting diabetes. Heliyon 2024; 10:e36112. [PMID: 39253141 PMCID: PMC11381601 DOI: 10.1016/j.heliyon.2024.e36112]
Abstract
Implementing diabetes surveillance systems is paramount to mitigate the risk of incurring substantial medical expenses. Currently, blood glucose is measured by minimally invasive methods, which involve extracting a small blood sample and transmitting it to a blood glucose meter; this is uncomfortable for the individuals undergoing it. The present study introduces an Explainable Artificial Intelligence (XAI) system, which aims to create an intelligible machine capable of explaining expected outcomes and decision models. To this end, we analyze abnormal glucose levels using a Bi-directional Long Short-Term Memory (Bi-LSTM) network and a Convolutional Neural Network (CNN). Glucose levels are acquired through glucose oxidase (GOD) strips placed on the body, and the signal data are converted into spectrogram images classified as low, average, or abnormal glucose levels. The labeled spectrogram images are then used to train the individualized monitoring model. The proposed model tracks glucose levels in real time using an XAI-driven architecture for feature processing. The model's effectiveness is evaluated using several evaluation metrics derived from the confusion matrix. The results demonstrate that the proposed model effectively identifies individuals with elevated glucose levels.
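The evaluation described above relies on metrics derived from the confusion matrix. As a minimal sketch of how such metrics are obtained for one class of a multi-class problem (the predictions below are hypothetical, not from the study):

```python
def confusion_metrics(y_true, y_pred, positive):
    """Accuracy, sensitivity, and specificity for one chosen
    positive class, computed from confusion-matrix cell counts."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),  # recall for the positive class
        "specificity": tn / (tn + fp),
    }

# Hypothetical spectrogram labels: low, average, or abnormal glucose
y_true = ["abnormal", "low", "average", "abnormal", "low", "average"]
y_pred = ["abnormal", "low", "average", "low",      "low", "abnormal"]
print(confusion_metrics(y_true, y_pred, positive="abnormal"))
```

Computing the same three numbers per class (here: low, average, abnormal) gives the one-vs-rest breakdown that a confusion-matrix evaluation typically reports.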
Affiliation(s)
- Parvathaneni Naga Srinivasu
- Department of Teleinformatics Engineering, Federal University of Ceará, Fortaleza, 60455-970, Brazil
- Amrita School of Computing, Amrita Vishwa Vidyapeetham, Amaravati, 522503, Andhra Pradesh, India
- Shakeel Ahmed
- Department of Computer Science, College of Computer Sciences and Information Technology, King Faisal University, Al-Ahsa, 31982, Saudi Arabia
- Mahmoud Hassaballah
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj, 16278, Saudi Arabia
- Department of Computer Science, Faculty of Computers and Information, South Valley University, Qena, Egypt
- Naif Almusallam
- Department of Management Information Systems, College of Business Administration, King Faisal University, Al-Ahsa, 31982, Saudi Arabia
14
Williams MC, Weir-McCall JR, Baldassarre LA, De Cecco CN, Choi AD, Dey D, Dweck MR, Isgum I, Kolossvary M, Leipsic J, Lin A, Lu MT, Motwani M, Nieman K, Shaw L, van Assen M, Nicol E. Artificial intelligence and machine learning for cardiovascular computed tomography (CCT): A white paper of the Society of Cardiovascular Computed Tomography (SCCT). J Cardiovasc Comput Tomogr 2024:S1934-5925(24)00405-2. [PMID: 39214777 DOI: 10.1016/j.jcct.2024.08.003]
Affiliation(s)
- Lauren A Baldassarre
- Section of Cardiovascular Medicine and Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Andrew D Choi
- The George Washington University School of Medicine, Washington, USA
- Damini Dey
- Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, USA
- Marc R Dweck
- Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK
- Ivana Isgum
- Amsterdam University Medical Center, University of Amsterdam, Netherlands
- Márton Kolossvary
- Gottsegen National Cardiovascular Center, Budapest, Hungary, and Physiological Controls Research Center, University Research and Innovation Center, Óbuda University, Budapest, Hungary
- Andrew Lin
- Victorian Heart Institute and Monash Health Heart, Victorian Heart Hospital, Monash University, Australia
- Michael T Lu
- Massachusetts General Hospital Cardiovascular Imaging Research Center/Harvard Medical School, USA
- Leslee Shaw
- Icahn School of Medicine at Mount Sinai, New York, USA
- Edward Nicol
- Royal Brompton Hospital, Guys and St Thomas' NHS Foundation Trust, London, UK; School of Biomedical Engineering and Imaging Sciences, King's College London, UK
15
Jahani A, Jahani I, Khadem A, Braden BB, Delrobaei M, MacIntosh BJ. Twinned neuroimaging analysis contributes to improving the classification of young people with autism spectrum disorder. Sci Rep 2024; 14:20120. [PMID: 39209988 PMCID: PMC11362281 DOI: 10.1038/s41598-024-71174-z]
Abstract
Autism spectrum disorder (ASD) is diagnosed using comprehensive behavioral information. Neuroimaging offers additional information but lacks clinical utility for diagnosis. This study investigates whether multiple forms of magnetic resonance imaging (MRI) contrast can be used individually and in combination to produce a categorical classification of young individuals with ASD. MRI data were accessed from the Autism Brain Imaging Data Exchange (ABIDE). Young participants (ages 2-30) were selected, and the two cohorts together comprised 702 participants: 351 with ASD and 351 controls. Image-based classification was performed using one-channel and two-channel inputs to 3D-DenseNet deep learning networks. The models were trained and tested using tenfold cross-validation. Two-channel models were twinned with combinations of structural MRI (sMRI) maps and amplitude of low-frequency fluctuations (ALFF) or fractional ALFF (fALFF) maps from resting-state functional MRI (rs-fMRI). All models produced classification accuracy that exceeded 65.1%. The two-channel ALFF-sMRI model achieved the highest mean accuracy of 76.9% ± 2.34. The one-channel ALFF-based model alone had a mean accuracy of 72% ± 3.1. This study leveraged the ABIDE dataset to produce ASD classification results that are comparable to and/or exceed literature values. The deep learning approach was conducive to diverse neuroimaging inputs. Findings reveal that the ALFF-sMRI two-channel model outperformed all others.
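The models above are trained and tested with tenfold cross-validation: the data are split into ten folds, and each fold serves once as the test set while the rest form the training set. A minimal, framework-independent sketch of generating such splits (the interleaved assignment is one simple choice; the study's exact fold construction is not specified in the abstract):

```python
def kfold_indices(n_samples, k):
    """Yield (train, test) index lists for k-fold cross-validation.
    Samples are assigned to folds round-robin; each fold is used
    once as the test set."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i, test in enumerate(folds):
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        yield train, test

# 702 participants, 10 folds, matching the study design
splits = list(kfold_indices(702, 10))
print(len(splits))        # 10 train/test splits
print(len(splits[0][1]))  # size of the first test fold
```

Every participant appears in exactly one test fold, so the ten test-fold accuracies can be averaged into the reported mean ± SD figures.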
Affiliation(s)
- Ali Jahani
- Department of Biomedical Engineering, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Iman Jahani
- Department of Biomedical Engineering, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Ali Khadem
- Department of Biomedical Engineering, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran
- B Blair Braden
- College of Health Solutions, Arizona State University, Phoenix, AZ, USA
- Mehdi Delrobaei
- Department of Biomedical Engineering, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Department of Electrical and Computer Engineering, Western University, London, ON, Canada
- Bradley J MacIntosh
- Hurvitz Brain Sciences, Sandra Black Centre for Brain Resilience and Recovery, Sunnybrook Research Institute, Toronto, ON, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
- Centre for Youth Bipolar Disorder, Centre for Addiction and Mental Health, Toronto, Canada
- Computational Radiology and Artificial Intelligence Unit, Departments of Physics and Computational Radiology, Oslo University Hospital, Oslo, Norway
16
Zhao J, Liu J, Wang S, Zhang P, Yu W, Yang C, Zhang Y, Chen Y. PIAA: Pre-imaging all-round assistant for digital radiography. Technol Health Care 2024:THC240639. [PMID: 39240596 DOI: 10.3233/thc-240639]
Abstract
BACKGROUND In radiography procedures, radiographers' suboptimal positioning and exposure parameter settings may necessitate image retakes, subjecting patients to unnecessary ionizing radiation exposure. Reducing retakes is crucial to minimize patient X-ray exposure and conserve medical resources. OBJECTIVE We propose a Digital Radiography (DR) Pre-imaging All-round Assistant (PIAA) that leverages Artificial Intelligence (AI) technology to enhance traditional DR. METHODS PIAA consists of an RGB-Depth (RGB-D) multi-camera array, an embedded computing platform, and multiple software components. First, it features an Adaptive RGB-D Image Acquisition (ARDIA) module that automatically selects the appropriate RGB camera based on the distance between the cameras and the patient. Second, it includes a 2.5D Selective Skeletal Keypoints Estimation (2.5D-SSKE) module that fuses depth information with 2D keypoints to estimate the pose of target body parts. Third, it uses a Domain Expertise (DE) embedded Full-body Exposure Parameter Estimation (DFEPE) module that combines 2.5D-SSKE and DE to accurately estimate parameters for full-body DR views. RESULTS PIAA optimizes the DR workflow, significantly enhancing operational efficiency. The average time required for positioning patients and preparing exposure parameters was reduced from 73 seconds to 8 seconds. CONCLUSIONS PIAA shows significant promise for extension to full-body examinations.
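The ARDIA module described above selects a camera from the array based on camera-to-patient distance. The abstract does not specify the selection rule, so the following is a purely hypothetical toy illustration of the idea (pick the camera whose depth reading is closest to the patient):

```python
def select_camera(distances):
    """Return the index of the camera nearest to the patient,
    given one depth reading (in metres) per camera. A toy stand-in
    for ARDIA's distance-based selection; the paper's actual
    criterion is more involved."""
    return min(range(len(distances)), key=lambda i: distances[i])

# Hypothetical depth readings from a three-camera array
print(select_camera([2.4, 1.1, 3.0]))  # camera 1 is nearest
```

In practice such a rule would also account for each camera's usable working range and field of view, not just raw distance.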
Affiliation(s)
- Jie Zhao
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Careray Digital Medical Technology Co., Ltd., Suzhou, China
- Jianqiang Liu
- Careray Digital Medical Technology Co., Ltd., Suzhou, China
- Shijie Wang
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Pinzheng Zhang
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Wenxue Yu
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Chunfeng Yang
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Yudong Zhang
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Yang Chen
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, China
17
Lamprou V, Kallipolitis A, Maglogiannis I. On the evaluation of deep learning interpretability methods for medical images under the scope of faithfulness. Comput Methods Programs Biomed 2024; 253:108238. [PMID: 38823117 DOI: 10.1016/j.cmpb.2024.108238]
Abstract
BACKGROUND AND OBJECTIVE Evaluating the interpretability of deep learning models is crucial for building trust and gaining insights into their decision-making processes. In this work, we employ class activation map based attribution methods in a setting where only High-Resolution Class Activation Mapping (HiResCAM) is known to produce faithful explanations. The objective is to evaluate the quality of the attribution maps using quantitative metrics and to investigate whether faithfulness aligns with the metric results. METHODS We fine-tune pre-trained deep learning architectures over four medical image datasets in order to calculate attribution maps. The maps are evaluated on a threefold metrics basis utilizing well-established evaluation scores. RESULTS Our experimental findings suggest that the Area Over Perturbation Curve (AOPC) and Max-Sensitivity scores favor the HiResCAM maps. On the other hand, the Heatmap Assisted Accuracy Score (HAAS) does not provide insight into our comparison, as it evaluates almost all maps as inaccurate. To this end, we further compare our calculated values against values obtained over a diverse group of models trained on non-medical benchmark datasets, to eventually achieve more responsive results. CONCLUSION This study develops a series of experiments to discuss the connection between faithfulness and quantitative metrics over medical attribution maps. HiResCAM preserves the gradient effect at the pixel level, ultimately producing high-resolution, informative and resilient mappings. In turn, this is reflected in the results of the AOPC and Max-Sensitivity metrics, which successfully identify the faithful algorithm. Regarding HAAS, our experiments indicate that it is sensitive to complex medical patterns, commonly characterized by strong color dependency and multiple attention areas.
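Of the scores used above, AOPC quantifies faithfulness by perturbing the pixels an attribution map ranks as most relevant and measuring how fast the model's confidence drops. A minimal sketch with hypothetical confidence values (not taken from the study):

```python
def aopc(original_score, perturbed_scores):
    """Area Over the Perturbation Curve:
    (1/(L+1)) * sum over L perturbation steps of
    (f(x) - f(x perturbed at step k)); the k = 0 term is zero.
    Higher AOPC means truly important pixels were ranked first,
    i.e. the attribution map is more faithful."""
    drops = [original_score - s for s in perturbed_scores]
    return sum(drops) / (len(perturbed_scores) + 1)

# Hypothetical model confidences after progressively removing
# the regions an attribution map ranked as most relevant
print(aopc(0.95, [0.80, 0.60, 0.35, 0.20]))  # ≈ 0.37
```

Comparing this value across attribution methods on the same model and image is what allows a metric-based ranking such as "AOPC favors the HiResCAM maps".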
Affiliation(s)
- Vangelis Lamprou
- Department of Digital Systems, University of Piraeus, 80, M. Karaoli & A. Dimitriou St, Piraeus 18534, Greece
- Athanasios Kallipolitis
- Department of Digital Systems, University of Piraeus, 80, M. Karaoli & A. Dimitriou St, Piraeus 18534, Greece
- Ilias Maglogiannis
- Department of Digital Systems, University of Piraeus, 80, M. Karaoli & A. Dimitriou St, Piraeus 18534, Greece
18
Yuan H, Hong C, Jiang PT, Zhao G, Tran NTA, Xu X, Yan YY, Liu N. Clinical domain knowledge-derived template improves post hoc AI explanations in pneumothorax classification. J Biomed Inform 2024; 156:104673. [PMID: 38862083 DOI: 10.1016/j.jbi.2024.104673]
Abstract
OBJECTIVE Pneumothorax is an acute thoracic disease caused by abnormal air collection between the lungs and chest wall. Recently, artificial intelligence (AI), especially deep learning (DL), has been increasingly employed for automating the diagnostic process of pneumothorax. To address the opaqueness often associated with DL models, explainable artificial intelligence (XAI) methods have been introduced to outline regions related to pneumothorax. However, these explanations sometimes diverge from actual lesion areas, highlighting the need for further improvement. METHOD We propose a template-guided approach to incorporate the clinical knowledge of pneumothorax into model explanations generated by XAI methods, thereby enhancing the quality of the explanations. Utilizing one lesion delineation created by radiologists, our approach first generates a template that represents potential areas of pneumothorax occurrence. This template is then superimposed on model explanations to filter out extraneous explanations that fall outside the template's boundaries. To validate its efficacy, we carried out a comparative analysis of three XAI methods (Saliency Map, Grad-CAM, and Integrated Gradients) with and without our template guidance when explaining two DL models (VGG-19 and ResNet-50) in two real-world datasets (SIIM-ACR and ChestX-Det). RESULTS The proposed approach consistently improved baseline XAI methods across twelve benchmark scenarios built on three XAI methods, two DL models, and two datasets. The average incremental percentages, calculated by the performance improvements over the baseline performance, were 97.8% in Intersection over Union (IoU) and 94.1% in Dice Similarity Coefficient (DSC) when comparing model explanations and ground-truth lesion areas. We further visualized baseline and template-guided model explanations on radiographs to showcase the performance of our approach. 
CONCLUSIONS In the context of pneumothorax diagnoses, we proposed a template-guided approach for improving model explanations. Our approach not only aligns model explanations more closely with clinical insights but also exhibits extensibility to other thoracic diseases. We anticipate that our template guidance will forge a novel approach to elucidating AI models by integrating clinical domain expertise.
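The template-guided approach described above amounts to zeroing out explanation pixels that fall outside a clinically derived template, then scoring the result against ground-truth lesions with IoU and Dice. A minimal sketch on binary masks, with flat lists standing in for images and entirely hypothetical pixel values:

```python
def apply_template(explanation, template):
    # Keep only explanation pixels inside the template region
    return [e if t else 0 for e, t in zip(explanation, template)]

def iou(a, b):
    """Intersection over Union of two binary masks."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union

def dice(a, b):
    """Dice Similarity Coefficient of two binary masks."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    return 2 * inter / (sum(map(bool, a)) + sum(map(bool, b)))

lesion      = [0, 1, 1, 1, 0, 0]  # ground-truth pneumothorax pixels
explanation = [1, 1, 1, 0, 0, 1]  # binarised raw XAI attribution
template    = [0, 1, 1, 1, 1, 0]  # plausible pneumothorax region

guided = apply_template(explanation, template)
print(iou(explanation, lesion), iou(guided, lesion))  # raw vs guided IoU
```

In this toy case the template filtering removes the two out-of-template attributions, raising IoU against the lesion mask — the same mechanism behind the reported improvements over the baseline XAI methods.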
Affiliation(s)
- Han Yuan
- Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore
- Chuan Hong
- Department of Biostatistics and Bioinformatics, Duke University, USA
- Gangming Zhao
- Faculty of Engineering, The University of Hong Kong, China
- Xinxing Xu
- Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore
- Yet Yen Yan
- Department of Radiology, Changi General Hospital, Singapore
- Nan Liu
- Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore; Programme in Health Services and Systems Research, Duke-NUS Medical School, Singapore; Institute of Data Science, National University of Singapore, Singapore
19
Iacucci M, Santacroce G, Zammarchi I, Maeda Y, Del Amor R, Meseguer P, Kolawole BB, Chaudhari U, Di Sabatino A, Danese S, Mori Y, Grisan E, Naranjo V, Ghosh S. Artificial intelligence and endo-histo-omics: new dimensions of precision endoscopy and histology in inflammatory bowel disease. Lancet Gastroenterol Hepatol 2024; 9:758-772. [PMID: 38759661 DOI: 10.1016/s2468-1253(24)00053-0]
Abstract
Integrating artificial intelligence into inflammatory bowel disease (IBD) has the potential to revolutionise clinical practice and research. Artificial intelligence harnesses advanced algorithms to deliver accurate assessments of IBD endoscopy and histology, offering precise evaluations of disease activity, standardised scoring, and outcome prediction. Furthermore, artificial intelligence offers the potential for a holistic endo-histo-omics approach by interlacing and harmonising endoscopy, histology, and omics data towards precision medicine. The emerging applications of artificial intelligence could pave the way for personalised medicine in IBD, offering patient stratification for the most beneficial therapy with minimal risk. Although artificial intelligence holds promise, challenges remain, including data quality, standardisation, reproducibility, scarcity of randomised controlled trials, clinical implementation, ethical concerns, legal liability, and regulatory issues. The development of standardised guidelines and interdisciplinary collaboration, including policy makers and regulatory agencies, is crucial for addressing these challenges and advancing artificial intelligence in IBD clinical practice and trials.
Collapse
Affiliation(s)
- Marietta Iacucci
- APC Microbiome Ireland, College of Medicine and Health, University College of Cork, Cork, Ireland.
| | - Giovanni Santacroce
- APC Microbiome Ireland, College of Medicine and Health, University College of Cork, Cork, Ireland
| | - Irene Zammarchi
- APC Microbiome Ireland, College of Medicine and Health, University College of Cork, Cork, Ireland
| | - Yasuharu Maeda
- APC Microbiome Ireland, College of Medicine and Health, University College of Cork, Cork, Ireland
| | - Rocío Del Amor
- Instituto de Investigación e Innovación en Bioingeniería, HUMAN-tech, Universitat Politècnica de València, València, Spain
| | - Pablo Meseguer
- Instituto de Investigación e Innovación en Bioingeniería, HUMAN-tech, Universitat Politècnica de València, València, Spain; Valencian Graduate School and Research Network of Artificial Intelligence, Valencia, Spain
| | | | | | - Antonio Di Sabatino
- Department of Internal Medicine and Medical Therapeutics, University of Pavia, Pavia, Italy; First Department of Internal Medicine, San Matteo Hospital Foundation, Pavia, Italy
| | - Silvio Danese
- Gastroenterology and Endoscopy, IRCCS Ospedale San Raffaele and University Vita-Salute San Raffaele, Milan, Italy
| | - Yuichi Mori
- Clinical Effectiveness Research Group, University of Oslo, Oslo, Norway; Digestive Disease Center, Showa University Northern Yokohama Hospital, Yokohama, Japan
| | - Enrico Grisan
- School of Engineering, London South Bank University, London, UK
| | - Valery Naranjo
- Instituto de Investigación e Innovación en Bioingeniería, HUMAN-tech, Universitat Politècnica de València, València, Spain
| | - Subrata Ghosh
- APC Microbiome Ireland, College of Medicine and Health, University College of Cork, Cork, Ireland
| |
Collapse
|
20
|
Yasin P, Yimit Y, Cai X, Aimaiti A, Sheng W, Mamat M, Nijiati M. Machine learning-enabled prediction of prolonged length of stay in hospital after surgery for tuberculosis spondylitis patients with unbalanced data: a novel approach using explainable artificial intelligence (XAI). Eur J Med Res 2024; 29:383. [PMID: 39054495 PMCID: PMC11270948 DOI: 10.1186/s40001-024-01988-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2023] [Accepted: 07/18/2024] [Indexed: 07/27/2024] Open
Abstract
BACKGROUND Tuberculosis spondylitis (TS), commonly known as Pott's disease, is a severe type of skeletal tuberculosis that typically requires surgical treatment. However, this treatment option has led to an increase in healthcare costs due to prolonged length of stay (PLOS) in hospital. Therefore, identifying risk factors associated with extended PLOS is necessary. In this research, we aimed to develop an interpretable machine learning model to predict extended PLOS, providing valuable insights for treatment planning, and to implement it as a web-based application. METHODS We obtained patient data from the spine surgery department at our hospital. Extended PLOS refers to a hospitalization duration equal to or exceeding the 75th percentile following spine surgery. To identify relevant variables, we employed several approaches, such as the least absolute shrinkage and selection operator (LASSO), recursive feature elimination (RFE) based on support vector machine classification (SVC), correlation analysis, and permutation importance values. Several models were implemented, and some of them were ensembled using soft voting techniques. Models were constructed using grid search with nested cross-validation. The performance of each algorithm was assessed through various metrics, including the AUC value (area under the receiver operating characteristic curve) and the Brier score. Model interpretation involved methods such as Shapley additive explanations (SHAP), the Gini impurity index, permutation importance, and local interpretable model-agnostic explanations (LIME). Furthermore, to facilitate the practical application of the model, a web-based interface was developed and deployed.
RESULTS The study included a cohort of 580 patients, and 11 features (CRP, transfusions, infusion volume, blood loss, X-ray bone bridge, X-ray osteophyte, CT-vertebral destruction, CT-paravertebral abscess, MRI-paravertebral abscess, MRI-epidural abscess, postoperative drainage) were selected. Most of the classifiers performed well; the XGBoost model achieved the highest AUC value (0.86) and the lowest Brier score (0.126) and was chosen as the optimal model. The results obtained from the calibration and decision curve analysis (DCA) plots demonstrate that XGBoost achieved promising performance. After tenfold cross-validation, the XGBoost model demonstrated a mean AUC of 0.85 ± 0.09. SHAP and LIME were used to display the variables' contributions to the predicted value. The stacked bar plots indicated that infusion volume was the primary contributor, as determined by the Gini index, permutation importance (PFI), and the LIME algorithm. CONCLUSIONS Our methods not only effectively predicted extended PLOS but also identified risk factors that can be utilized in future treatments. The XGBoost model developed in this study is easily accessible through the deployed web application and can aid in clinical research.
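The study above evaluates its classifiers by AUC and Brier score. As a minimal pure-Python illustration of these two metrics (toy labels and probabilities, not the study's cohort):

```python
def brier_score(y_true, y_prob):
    """Brier score: mean squared difference between predicted probability and outcome."""
    return sum((p - y) ** 2 for y, p in zip(y_true, y_prob)) / len(y_true)

def auc(y_true, y_prob):
    """ROC AUC via the Mann-Whitney formulation: the probability that a random
    positive is ranked above a random negative (ties count half)."""
    pos = [p for y, p in zip(y_true, y_prob) if y == 1]
    neg = [p for y, p in zip(y_true, y_prob) if y == 0]
    wins = sum(1.0 if pp > pn else 0.5 if pp == pn else 0.0
               for pp in pos for pn in neg)
    return wins / (len(pos) * len(neg))

# Toy data: three positives, three negatives.
y_true = [0, 0, 1, 1, 1, 0]
y_prob = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2]
print(auc(y_true, y_prob), brier_score(y_true, y_prob))
```

Lower Brier scores and higher AUCs are better, which is why the abstract reports both: AUC measures ranking ability, the Brier score also penalizes poorly calibrated probabilities.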
Collapse
Affiliation(s)
- Parhat Yasin
- Department of Spine Surgery, The Sixth Affiliated Hospital of Xinjiang Medical University, Urumqi, 830000, Xinjiang, People's Republic of China
- Department of Spine Surgery, The First Affiliated Hospital of Xinjiang Medical University, Urumqi, 830054, Xinjiang, People's Republic of China
| | - Yasen Yimit
- Department of Radiology, The First People's Hospital of Kashi Prefecture, Kashi, 844000, Xinjiang, People's Republic of China
| | - Xiaoyu Cai
- Department of Spine Surgery, The First Affiliated Hospital of Xinjiang Medical University, Urumqi, 830054, Xinjiang, People's Republic of China
| | - Abasi Aimaiti
- Department of Anesthesiology, The First Affiliated Hospital of Xinjiang Medical University, Urumqi, 830054, Xinjiang, People's Republic of China
| | - Weibin Sheng
- Department of Spine Surgery, The First Affiliated Hospital of Xinjiang Medical University, Urumqi, 830054, Xinjiang, People's Republic of China
| | - Mardan Mamat
- Department of Spine Surgery, The First Affiliated Hospital of Xinjiang Medical University, Urumqi, 830054, Xinjiang, People's Republic of China.
| | - Mayidili Nijiati
- Department of Radiology, The Fourth Affiliated Hospital of Xinjiang Medical University(Xinjiang Hospital of Traditional Chinese Medicine), Urumqi, 830002, Xinjiang, People's Republic of China.
- Xinjiang Key Laboratory of Artificial Intelligence Assisted Imaging Diagnosis, Kashi, 844000, Xinjiang, People's Republic of China.
| |
Collapse
|
21
|
Gomez C, Smith BL, Zayas A, Unberath M, Canares T. Explainable AI decision support improves accuracy during telehealth strep throat screening. COMMUNICATIONS MEDICINE 2024; 4:149. [PMID: 39048726 PMCID: PMC11269612 DOI: 10.1038/s43856-024-00568-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2023] [Accepted: 07/04/2024] [Indexed: 07/27/2024] Open
Abstract
BACKGROUND Artificial intelligence-based (AI) clinical decision support systems (CDSS) using unconventional data, like smartphone-acquired images, promise transformational opportunities for telehealth, including remote diagnosis. Although such solutions' potential remains largely untapped, providers' trust and understanding are vital for effective adoption. This study examines how different human-AI interaction paradigms affect clinicians' responses to an emerging AI CDSS for streptococcal pharyngitis (strep throat) detection from smartphone throat images. METHODS In a randomized experiment, we tested explainable AI strategies using three AI-based CDSS prototypes for strep throat prediction. Participants received clinical vignettes via an online survey to predict the disease state and offer clinical recommendations. The first set of vignettes included a validated CDSS prediction (the Modified Centor Score), and the second randomly introduced an explainable AI prototype. We used linear models to assess explainable AI's effect on clinicians' accuracy, confirmatory testing rates, and perceived trust and understanding of the CDSS. RESULTS The study, involving 121 telehealth providers, shows that compared to using the Centor Score, AI-based CDSS can improve clinicians' predictions. Despite higher agreement with the AI, participants report lower trust in its advice than in the Centor Score, leading to more requests for in-person confirmatory testing. CONCLUSIONS Effectively integrating AI is crucial in telehealth-based diagnosis of infectious diseases, given the implications of antibiotic over-prescription. We demonstrate that AI-based CDSS can improve the accuracy of remote strep throat screening, yet our findings underscore the necessity of enhancing human-machine collaboration, particularly regarding trust and intelligibility. This ensures providers and patients can capitalize on AI interventions and smartphones for virtual healthcare.
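The Modified Centor (McIsaac) score used as the baseline CDSS in this study is a published clinical rule. A sketch of how it is conventionally computed (standard criteria from the clinical literature, not code from the study):

```python
def modified_centor(age, exudate, tender_nodes, fever, cough):
    """McIsaac (Modified Centor) score: one point each for tonsillar exudate,
    tender anterior cervical nodes, history of fever, and absence of cough,
    plus an age adjustment (+1 if 3-14 years, -1 if 45 or older)."""
    score = sum([exudate, tender_nodes, fever, not cough])
    if 3 <= age <= 14:
        score += 1
    elif age >= 45:
        score -= 1
    return score

# A child with all four clinical findings scores the maximum of 5.
print(modified_centor(age=10, exudate=True, tender_nodes=True, fever=True, cough=False))
```

Higher scores correspond to a higher pre-test probability of streptococcal pharyngitis; the study compares clinicians' use of this rule against image-based AI predictions.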
Collapse
Affiliation(s)
- Catalina Gomez
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | | | - Alisa Zayas
- Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | - Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA.
- Johns Hopkins University School of Medicine, Baltimore, MD, USA.
| | - Therese Canares
- Division of Pediatric Emergency Medicine, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| |
Collapse
|
22
|
Liu W, Zhang B, Liu T, Jiang J, Liu Y. Artificial Intelligence in Pancreatic Image Analysis: A Review. SENSORS (BASEL, SWITZERLAND) 2024; 24:4749. [PMID: 39066145 PMCID: PMC11280964 DOI: 10.3390/s24144749] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/05/2024] [Revised: 07/15/2024] [Accepted: 07/16/2024] [Indexed: 07/28/2024]
Abstract
Pancreatic cancer is a highly lethal disease with a poor prognosis. Its early diagnosis and accurate treatment mainly rely on medical imaging, so accurate medical image analysis is especially vital for pancreatic cancer patients. However, medical image analysis of pancreatic cancer is facing challenges due to ambiguous symptoms, high misdiagnosis rates, and significant financial costs. Artificial intelligence (AI) offers a promising solution by relieving medical personnel's workload, improving clinical decision-making, and reducing patient costs. This study focuses on AI applications such as segmentation, classification, object detection, and prognosis prediction across five types of medical imaging: CT, MRI, EUS, PET, and pathological images, as well as integrating these imaging modalities to boost diagnostic accuracy and treatment efficiency. In addition, this study discusses current hot topics and future directions aimed at overcoming the challenges in AI-enabled automated pancreatic cancer diagnosis algorithms.
Collapse
Affiliation(s)
- Weixuan Liu
- Sydney Smart Technology College, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China; (W.L.); (B.Z.)
| | - Bairui Zhang
- Sydney Smart Technology College, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China; (W.L.); (B.Z.)
| | - Tao Liu
- School of Mathematics and Statistics, Northeastern University at Qinhuangdao, Qinhuangdao 066004, China;
| | - Juntao Jiang
- College of Control Science and Engineering, Zhejiang University, Hangzhou 310058, China
| | - Yong Liu
- College of Control Science and Engineering, Zhejiang University, Hangzhou 310058, China
| |
Collapse
|
23
|
Guha S, Kodipalli A, Fernandes SL, Dasar S. Explainable AI for Interpretation of Ovarian Tumor Classification Using Enhanced ResNet50. Diagnostics (Basel) 2024; 14:1567. [PMID: 39061704 PMCID: PMC11276149 DOI: 10.3390/diagnostics14141567] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2024] [Revised: 07/07/2024] [Accepted: 07/10/2024] [Indexed: 07/28/2024] Open
Abstract
Deep learning architectures like ResNet and Inception have produced accurate predictions for classifying benign and malignant tumors in the healthcare domain. This enables healthcare institutions to make data-driven decisions and potentially enables early detection of malignancy by employing computer-vision-based deep learning algorithms. These CNN algorithms, in addition to requiring huge amounts of data, can identify the higher- and lower-level features that are significant when classifying tumors as benign or malignant. However, the existing literature is limited in terms of the explainability of the resultant classification and the identification of the exact features that matter, which is essential to the decision-making process for healthcare practitioners. Thus, the motivation of this work is to implement a custom classifier on an ovarian tumor dataset that exhibits high classification performance, and subsequently to interpret the classification results qualitatively, using various Explainable AI methods, to identify which pixels or regions of interest the model gives the highest importance for classification. The dataset comprises CT scanned images of ovarian tumors taken in the axial, sagittal, and coronal planes. State-of-the-art architectures, including a modified ResNet50 derived from the standard pre-trained ResNet50, are implemented in the paper. When compared to the existing state-of-the-art techniques, the proposed modified ResNet50 exhibited a classification accuracy of 97.5% on the test dataset without increasing the complexity of the architecture. The results were then interpreted using several explainable AI techniques. The results show that the shape and localized nature of the tumors play important roles in qualitatively determining the ability of the tumor to metastasize and thereafter to be classified as benign or malignant.
Collapse
Affiliation(s)
- Srirupa Guha
- Department of Computer Science and Engineering, National Institute of Technology Durgapur, Durgapur 713209, India
| | - Ashwini Kodipalli
- Department of Artificial Intelligence and Data Science, Global Academy of Technology, Bengaluru 560098, India;
| | - Steven L. Fernandes
- Department of Computer Science, Design, Journalism, Creighton University, Omaha, NE 68178, USA
| | - Santosh Dasar
- Department of Radiology, SDM College of Medical Sciences Dharwad, Dharwad 580009, India;
| |
Collapse
|
24
|
Brusini L, Cruciani F, Dall’Aglio G, Zajac T, Boscolo Galazzo I, Zucchelli M, Menegaz G. XAI-Based Assessment of the AMURA Model for Detecting Amyloid-β and Tau Microstructural Signatures in Alzheimer's Disease. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE 2024; 12:569-579. [PMID: 39155922 PMCID: PMC11329216 DOI: 10.1109/jtehm.2024.3430035] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/28/2023] [Revised: 05/24/2024] [Accepted: 07/08/2024] [Indexed: 08/20/2024]
Abstract
Brain microstructural changes already occur in the earliest phases of Alzheimer's disease (AD), as evidenced in the diffusion magnetic resonance imaging (dMRI) literature. This study investigates the potential of the novel dMRI Apparent Measures Using Reduced Acquisitions (AMURA) as imaging markers for capturing such tissue modifications. Tract-based spatial statistics (TBSS) and support vector machines (SVMs) based on different measures were exploited to distinguish between amyloid-beta/tau negative (Aβ-/tau-) and Aβ+/tau+ or Aβ+/tau- subjects. Moreover, eXplainable Artificial Intelligence (XAI) was used to highlight the most influential features in the SVM classifications and to validate the results by checking the explanations' recurrence across different methods. TBSS analysis revealed significant differences between Aβ-/tau- and the other groups, in line with the literature. The best SVM classification performance reached an accuracy of 0.73 by using advanced measures compared to more standard ones. Moreover, the explainability analysis suggested the results' stability and the central role of the cingulum in showing early signs of AD. By relying on SVM classification and XAI interpretation of the outcomes, AMURA indices can be considered viable markers for amyloid and tau pathology. Clinical impact: This pre-clinical research revealed AMURA indices as viable imaging markers for timely AD diagnosis from clinically feasible dMRI acquisitions, with advantages compared to the more invasive methods employed nowadays.
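The abstract uses XAI to rank the features driving the SVM classifications. One common model-agnostic technique for this is permutation feature importance; the sketch below illustrates the general idea (an illustration with a toy predictor, since the abstract does not specify the exact attribution method):

```python
import random

def permutation_importance(predict, X, y, feature_idx, n_repeats=20, seed=0):
    """Permutation importance: shuffle one feature column and report the mean
    drop in accuracy relative to the unshuffled baseline. Features the model
    relies on produce large drops; irrelevant features produce none."""
    rng = random.Random(seed)
    def accuracy(rows):
        return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)
    base = accuracy(X)
    drops = []
    for _ in range(n_repeats):
        col = [r[feature_idx] for r in X]
        rng.shuffle(col)
        drops.append(base - accuracy([r[:feature_idx] + [v] + r[feature_idx + 1:]
                                      for r, v in zip(X, col)]))
    return sum(drops) / n_repeats

# Toy data: the label depends only on feature 0; feature 1 is noise.
X = [[1, 9], [2, 3], [-1, 7], [-2, 1], [3, 5], [-3, 8]]
y = [1, 1, 0, 0, 1, 0]
predict = lambda r: 1 if r[0] > 0 else 0
imp_signal = permutation_importance(predict, X, y, 0)
imp_noise = permutation_importance(predict, X, y, 1)
print(imp_signal, imp_noise)
```

Checking that the same features recur as important across several such methods, as the authors do, guards against attribution artifacts of any single technique.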
Collapse
Affiliation(s)
- Lorenza Brusini
- Department of Engineering for Innovation Medicine, University of Verona, Verona 37134, Italy
| | - Federica Cruciani
- Department of Engineering for Innovation Medicine, University of Verona, Verona 37134, Italy
| | | | - Tommaso Zajac
- Department of Computer Science, University of Verona, Verona 37134, Italy
| | | | - Mauro Zucchelli
- Department of Research and Development Advanced Applications, Olea Medical, La Ciotat 13600, France
| | - Gloria Menegaz
- Department of Engineering for Innovation Medicine, University of Verona, Verona 37134, Italy
| |
Collapse
|
25
|
Chen MY, Cao MQ, Xu TY. Progress in the application of artificial intelligence in skin wound assessment and prediction of healing time. Am J Transl Res 2024; 16:2765-2776. [PMID: 39114681 PMCID: PMC11301465 DOI: 10.62347/myhe3488] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2024] [Accepted: 05/22/2024] [Indexed: 08/10/2024]
Abstract
Since the 1970s, artificial intelligence (AI) has played an increasingly pivotal role in the medical field, enhancing the efficiency of disease diagnosis and treatment. Amidst an aging population and the proliferation of chronic disease, the prevalence of complex surgeries for high-risk multimorbid patients and hard-to-heal wounds has escalated. Healthcare professionals face the challenge of delivering safe and effective care to all patients concurrently. Inadequate management of skin wounds exacerbates the risk of infection and complications, which can obstruct the healing process and diminish patients' quality of life. AI shows substantial promise in revolutionizing wound care and management, thus enhancing the treatment of hospitalized patients and enabling healthcare workers to allocate their time more effectively. This review details the advancements in applying AI for skin wound assessment and the prediction of healing timelines. It emphasizes the use of diverse algorithms to automate and streamline the measurement, classification, and identification of chronic wound healing stages, and to predict wound healing times. Moreover, the review addresses existing limitations and explores future directions.
Collapse
Affiliation(s)
- Ming-Yao Chen
- Department of Anesthetic Pharmacology, School of Anesthesiology, Second Military Medical University/Naval Medical University, Shanghai 200433, China
| | - Ming-Qi Cao
- Department of Anesthetic Pharmacology, School of Anesthesiology, Second Military Medical University/Naval Medical University, Shanghai 200433, China
- College of Basic Medicine, Second Military Medical University/Naval Medical University, Shanghai 200433, China
| | - Tian-Ying Xu
- Department of Anesthetic Pharmacology, School of Anesthesiology, Second Military Medical University/Naval Medical University, Shanghai 200433, China
| |
Collapse
|
26
|
Ding GY, Tan WM, Lin YP, Ling Y, Huang W, Zhang S, Shi JY, Luo RK, Ji Y, Wang XY, Zhou J, Fan J, Cai MY, Yan B, Gao Q. Mining the interpretable prognostic features from pathological image of intrahepatic cholangiocarcinoma using multi-modal deep learning. BMC Med 2024; 22:282. [PMID: 38972973 PMCID: PMC11229270 DOI: 10.1186/s12916-024-03482-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/11/2023] [Accepted: 06/13/2024] [Indexed: 07/09/2024] Open
Abstract
BACKGROUND Advances in deep learning-based pathological image analysis have yielded tremendous insights into cancer prognostication. Still, lack of interpretability remains a significant barrier to clinical application. METHODS We established an integrative prognostic neural network for intrahepatic cholangiocarcinoma (iCCA), towards a comprehensive evaluation of both architectural and fine-grained information from whole-slide images. Then, leveraging multi-modal data, we applied extensive interrogation approaches to the models to extract and visualize the morphological features that most correlated with clinical outcome and underlying molecular alterations. RESULTS The models were developed and optimized on 373 iCCA patients from our center and demonstrated consistent accuracy and robustness on both internal (n = 213) and external (n = 168) cohorts. The occlusion sensitivity map revealed that the distribution of tertiary lymphoid structures, the geometric traits of the invasive margin, the relative composition of tumor parenchyma and stroma, the extent of necrosis, the presence of disseminated foci, and the tumor-adjacent micro-vessels were the determining architectural features that impacted prognosis. Quantifiable morphological vectors extracted by CellProfiler demonstrated that tumor nuclei from high-risk patients exhibited significantly larger size and more distorted shape, with less prominent nuclear envelopes and textural contrast. The multi-omics data (n = 187) further revealed that key molecular alterations left morphological imprints that could be attended to by the network, including glycolysis, hypoxia, apical junction, mTORC1 signaling, and immune infiltration. CONCLUSIONS We proposed an interpretable deep-learning framework to gain insights into the biological behavior of iCCA. Most of the significant morphological prognosticators perceived by the network are comprehensible to human minds.
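The "occlusion sensitivity map" named in the abstract masks regions of the input and records how much the model's output changes. A toy sketch of that idea, with a hypothetical score function standing in for the authors' network (the image and scores are made up):

```python
def occlusion_sensitivity(image, score_fn, patch=2, baseline=0.0):
    """Slide a baseline-valued patch over the image; the heatmap records how
    much the model's score drops when each region is hidden. Regions whose
    occlusion causes large drops are the ones the model relies on."""
    h, w = len(image), len(image[0])
    base = score_fn(image)
    heat = [[0.0] * w for _ in range(h)]
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = [row[:] for row in image]  # copy, then mask one patch
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    occluded[di][dj] = baseline
            drop = base - score_fn(occluded)
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    heat[di][dj] = drop
    return heat

# Toy "slide": one bright region at the top-left; the stand-in score is the pixel sum.
image = [[5, 0, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
heat = occlusion_sensitivity(image, lambda img: sum(map(sum, img)))
```

On whole-slide images the same principle applies at tile scale, which is how architectural features such as the invasive margin can surface as high-sensitivity regions.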
Collapse
Affiliation(s)
- Guang-Yu Ding
- Department of Liver Surgery and Transplantation, Liver Cancer Institute, Zhongshan Hospital, and Key Laboratory of Carcinogenesis and Cancer Invasion of Ministry of Education, Fudan University, No.180, Feng Lin Road, Shanghai, 200032, China
| | - Wei-Min Tan
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, No.2005, Song Hu Road, Shanghai, 200433, China
| | - You-Pei Lin
- Department of Liver Surgery and Transplantation, Liver Cancer Institute, Zhongshan Hospital, and Key Laboratory of Carcinogenesis and Cancer Invasion of Ministry of Education, Fudan University, No.180, Feng Lin Road, Shanghai, 200032, China
| | - Yu Ling
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, No.2005, Song Hu Road, Shanghai, 200433, China
| | - Wen Huang
- Department of Pathology, Zhongshan Hospital, Fudan University, Shanghai, 200032, China
| | - Shu Zhang
- Department of Liver Surgery and Transplantation, Liver Cancer Institute, Zhongshan Hospital, and Key Laboratory of Carcinogenesis and Cancer Invasion of Ministry of Education, Fudan University, No.180, Feng Lin Road, Shanghai, 200032, China
| | - Jie-Yi Shi
- Department of Liver Surgery and Transplantation, Liver Cancer Institute, Zhongshan Hospital, and Key Laboratory of Carcinogenesis and Cancer Invasion of Ministry of Education, Fudan University, No.180, Feng Lin Road, Shanghai, 200032, China
| | - Rong-Kui Luo
- Department of Pathology, Zhongshan Hospital, Fudan University, Shanghai, 200032, China
| | - Yuan Ji
- Department of Pathology, Zhongshan Hospital, Fudan University, Shanghai, 200032, China
| | - Xiao-Ying Wang
- Department of Liver Surgery and Transplantation, Liver Cancer Institute, Zhongshan Hospital, and Key Laboratory of Carcinogenesis and Cancer Invasion of Ministry of Education, Fudan University, No.180, Feng Lin Road, Shanghai, 200032, China
| | - Jian Zhou
- Department of Liver Surgery and Transplantation, Liver Cancer Institute, Zhongshan Hospital, and Key Laboratory of Carcinogenesis and Cancer Invasion of Ministry of Education, Fudan University, No.180, Feng Lin Road, Shanghai, 200032, China
- Institute of Biomedical Sciences, Fudan University, Shanghai, 200032, China
| | - Jia Fan
- Department of Liver Surgery and Transplantation, Liver Cancer Institute, Zhongshan Hospital, and Key Laboratory of Carcinogenesis and Cancer Invasion of Ministry of Education, Fudan University, No.180, Feng Lin Road, Shanghai, 200032, China
- Institute of Biomedical Sciences, Fudan University, Shanghai, 200032, China
| | - Mu-Yan Cai
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, No.651 Dongfeng Road East, Guangzhou, 510060, China.
| | - Bo Yan
- School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, No.2005, Song Hu Road, Shanghai, 200433, China.
| | - Qiang Gao
- Department of Liver Surgery and Transplantation, Liver Cancer Institute, Zhongshan Hospital, and Key Laboratory of Carcinogenesis and Cancer Invasion of Ministry of Education, Fudan University, No.180, Feng Lin Road, Shanghai, 200032, China.
- Institute of Biomedical Sciences, Fudan University, Shanghai, 200032, China.
- State Key Laboratory of Genetic Engineering, Fudan University, Shanghai, 200433, China.
| |
Collapse
|
27
|
Gryshchuk V, Singh D, Teipel S, Dyrba M. Contrastive Self-supervised Learning for Neurodegenerative Disorder Classification. MEDRXIV : THE PREPRINT SERVER FOR HEALTH SCIENCES 2024:2024.07.03.24309882. [PMID: 39006425 PMCID: PMC11245060 DOI: 10.1101/2024.07.03.24309882] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/16/2024]
Abstract
Neurodegenerative diseases such as Alzheimer's disease (AD) or frontotemporal lobar degeneration (FTLD) involve specific loss of brain volume, detectable in vivo using T1-weighted MRI scans. Supervised machine learning approaches to classifying neurodegenerative diseases require diagnostic labels for each sample. However, it can be difficult to obtain expert labels for a large amount of data. Self-supervised learning (SSL) offers an alternative for training machine learning models without data labels. We investigated whether SSL models can be applied to distinguish between different neurodegenerative disorders in an interpretable manner. Our method comprises a feature extractor and a downstream classification head. A deep convolutional neural network trained in a contrastive self-supervised way serves as the feature extractor, learning latent representations, while the classifier head is a single-layer perceptron. We used N=2694 T1-weighted MRI scans from four data cohorts: two ADNI datasets, AIBL, and FTLDNI, including cognitively normal controls (CN), cases with prodromal and clinical AD, as well as FTLD cases differentiated into its subtypes. Our results showed that the feature extractor trained in a self-supervised way provides generalizable and robust representations for downstream classification. For AD vs. CN, our model achieves 82% balanced accuracy on the test subset and 80% on an independent holdout dataset. Similarly, the behavioral variant of frontotemporal dementia (BV) vs. CN model attains an 88% balanced accuracy on the test subset. The average feature attribution heatmaps obtained by the Integrated Gradients method highlighted hallmark regions, i.e., temporal gray matter atrophy for AD and insular atrophy for BV. In conclusion, our models perform comparably to state-of-the-art supervised deep learning approaches. This suggests that the SSL methodology can successfully make use of unannotated neuroimaging datasets as training data while remaining robust and interpretable.
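Contrastive self-supervised training of the kind described above typically optimizes an InfoNCE-style objective: embeddings of two views of the same scan are pulled together, embeddings of other scans pushed apart. A minimal single-anchor sketch of that objective (an illustration of the general technique, not the authors' exact loss or embeddings):

```python
import math

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: negative log-softmax of the positive's
    cosine similarity against the positive plus all negatives."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)
    logits = [cos(anchor, positive) / temperature] + \
             [cos(anchor, n) / temperature for n in negatives]
    m = max(logits)  # log-sum-exp with max-shift for numerical stability
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_denom)

# The loss is near zero when the positive aligns with the anchor,
# and large when a negative aligns instead.
near = info_nce([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]])
far = info_nce([1.0, 0.0], [0.0, 1.0], [[1.0, 0.0]])
```

Because this objective needs no diagnostic labels, the feature extractor can be pre-trained on unannotated scans and only the small classification head requires labeled data.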
Collapse
|
28
|
Wee NK, Git KA, Lee WJ, Raval G, Pattokhov A, Ho ELM, Chuapetcharasopon C, Tomiyama N, Ng KH, Tan CH. Position Statements of the Emerging Trends Committee of the Asian Oceanian Society of Radiology on the Adoption and Implementation of Artificial Intelligence for Radiology. Korean J Radiol 2024; 25:603-612. [PMID: 38942454 PMCID: PMC11214917 DOI: 10.3348/kjr.2024.0419] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2024] [Revised: 05/12/2024] [Accepted: 05/14/2024] [Indexed: 06/30/2024] Open
Abstract
Artificial intelligence (AI) is rapidly gaining recognition in the radiology domain as a greater number of radiologists become AI-literate. However, the adoption and implementation of AI solutions in clinical settings have been slow, with points of contention. A working group was formed of AI users comprising mainly clinical radiologists across various Asian countries, including India, Japan, Malaysia, Singapore, Taiwan, Thailand, and Uzbekistan. This study aimed to draft position statements regarding the application and clinical deployment of AI in radiology. The primary aims are to raise awareness among the general public, promote professional interest and discussion, clarify ethical considerations when implementing AI technology, and engage the radiology profession in ever-changing clinical practice. These position statements highlight pertinent issues that need to be addressed between care providers and care recipients. More importantly, this will help legalize the use of non-human instruments in clinical deployment without compromising ethical considerations, decision-making precision, or clinical professional standards. We base our statements on four main principles of medical care: respect for patient autonomy, beneficence, non-maleficence, and justice.
Affiliation(s)
- Nicole Kessa Wee
  - Department of Diagnostic Radiology, Tan Tock Seng Hospital, National Healthcare Group, Singapore
- Kim-Ann Git
  - Department of Diagnostic Radiology, Pantai Hospital, Kuala Lumpur, Malaysia
- Wen-Jeng Lee
  - Department of Diagnostic Radiology, National Taiwan University Hospital, Taipei, Taiwan
- Gaurang Raval
  - Department of Diagnostic Radiology, Wockhardt Hospitals Limited, Mumbai, India
- Aziz Pattokhov
  - Faculty of Medicine, Tashkent State Dental Institute, Tashkent, Uzbekistan
- Evelyn Lai Ming Ho
  - Department of Diagnostic Radiology, ParkCity Medical Centre, Kuala Lumpur, Malaysia
- Noriyuki Tomiyama
  - Department of Diagnostic and Interventional Radiology, Osaka University Hospital, Suita, Osaka, Japan
- Kwan Hoong Ng
  - Department of Biomedical Imaging and University of Malaya Research Imaging Centre, University of Malaya, Kuala Lumpur, Malaysia
  - Faculty of Medicine and Health Sciences, UCSI University Springhill Campus, Port Dickson, Malaysia
- Cher Heng Tan
  - Department of Diagnostic Radiology, Tan Tock Seng Hospital, National Healthcare Group, Singapore
  - Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore.
29
Zhong J. Deep learning-based diagnostic models for bone lesions: is current research ready for clinical translation? Eur Radiol 2024; 34:4284-4286. [PMID: 38189983 PMCID: PMC11213795 DOI: 10.1007/s00330-023-10555-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2023] [Revised: 11/05/2023] [Accepted: 11/08/2023] [Indexed: 01/09/2024]
Affiliation(s)
- Jingyu Zhong
  - Department of Imaging, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200336, China.
30
Chen J, Chen A, Yang S, Liu J, Xie C, Jiang H. Accuracy of machine learning in preoperative identification of genetic mutation status in lung cancer: A systematic review and meta-analysis. Radiother Oncol 2024; 196:110325. [PMID: 38734145 DOI: 10.1016/j.radonc.2024.110325] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2023] [Revised: 04/12/2024] [Accepted: 04/26/2024] [Indexed: 05/13/2024]
Abstract
BACKGROUND AND PURPOSE We performed this systematic review and meta-analysis to investigate the performance of machine learning (ML) in detecting genetic mutation status in non-small cell lung cancer (NSCLC) patients. MATERIALS AND METHODS We conducted a systematic search of PubMed, Cochrane, Embase, and Web of Science up to July 2023. We discussed the genetic mutation status of EGFR, ALK, KRAS, and BRAF, as well as the mutation status at different sites of EGFR. RESULTS We included a total of 128 original studies, of which 114 constructed ML models based on radiomic features mainly extracted from CT, MRI, and PET-CT data. From a genetic mutation perspective, 121 studies focused on EGFR mutation status analysis. In the validation set, for the detection of EGFR mutation status, the aggregated c-index was 0.760 (95%CI: 0.706-0.814) for clinical feature-based models, 0.772 (95%CI: 0.753-0.791) for CT-based radiomics models, 0.816 (95%CI: 0.776-0.856) for MRI-based radiomics models, and 0.750 (95%CI: 0.712-0.789) for PET-CT-based radiomics models. When combined with clinical features, the aggregated c-index was 0.807 (95%CI: 0.781-0.832) for CT-based radiomics models, 0.806 (95%CI: 0.773-0.839) for MRI-based radiomics models, and 0.822 (95%CI: 0.789-0.854) for PET-CT-based radiomics models. In the validation set, the aggregated c-indexes for radiomics-based models to detect the mutation status of ALK and KRAS, as well as the mutation status at different sites of EGFR, were all greater than 0.7. CONCLUSION The use of radiomics-based methods for early discrimination of EGFR mutation status in NSCLC demonstrates relatively high accuracy. However, the influence of clinical variables cannot be overlooked in this process. In addition, future studies should also pay attention to the accuracy of radiomics in identifying the mutation status of genes other than EGFR.
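Aggregated c-indexes of this kind are typically produced by inverse-variance weighting of the per-study estimates, with each study's standard error recovered from its reported 95% confidence interval. A minimal fixed-effect pooling sketch; the study values below are illustrative, not taken from the review:

```python
import math

def pool_c_indices(estimates):
    """Fixed-effect inverse-variance pooling of c-indexes.

    estimates: list of (c_index, ci_low, ci_high) tuples, where the CI is
    a 95% interval, so SE is approximately (ci_high - ci_low) / (2 * 1.96).
    Returns (pooled_c, pooled_ci_low, pooled_ci_high).
    """
    weights, weighted = [], []
    for c, lo, hi in estimates:
        se = (hi - lo) / (2 * 1.96)   # recover SE from the 95% CI width
        w = 1.0 / se ** 2             # inverse-variance weight
        weights.append(w)
        weighted.append(w * c)
    pooled = sum(weighted) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

# Three hypothetical studies reporting c-index (95% CI)
studies = [(0.76, 0.71, 0.81), (0.77, 0.75, 0.79), (0.82, 0.78, 0.86)]
pooled, lo, hi = pool_c_indices(studies)
```

Narrower intervals get larger weights, so the second study dominates the pooled estimate here; random-effects pooling would additionally model between-study heterogeneity.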
Affiliation(s)
- Jinzhan Chen
  - Department of Pulmonary Medicine, Zhongshan Hospital (Xiamen), Fudan University, Xiamen, Fujian 361000, People's Republic of China
- Ayun Chen
  - Department of Endocrinology, The First Affiliated Hospital of Xiamen University, Xiamen, Fujian 361000, People's Republic of China
- Shuwen Yang
  - Department of Pulmonary Medicine, Zhongshan Hospital (Xiamen), Fudan University, Xiamen, Fujian 361000, People's Republic of China
- Jiaxin Liu
  - Department of Pulmonary Medicine, Zhongshan Hospital (Xiamen), Fudan University, Xiamen, Fujian 361000, People's Republic of China
- Congyi Xie
  - Department of Pulmonary Medicine, Zhongshan Hospital (Xiamen), Fudan University, Xiamen, Fujian 361000, People's Republic of China.
- Hongni Jiang
  - Department of Pulmonary Medicine, Zhongshan Hospital (Xiamen), Fudan University, Xiamen, Fujian 361000, People's Republic of China.
31
Champendal M, Ribeiro RST, Müller H, Prior JO, Sá Dos Reis C. Nuclear medicine technologists practice impacted by AI denoising applications in PET/CT images. Radiography (Lond) 2024; 30:1232-1239. [PMID: 38917681 DOI: 10.1016/j.radi.2024.06.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2024] [Revised: 05/24/2024] [Accepted: 06/11/2024] [Indexed: 06/27/2024]
Abstract
PURPOSE Artificial intelligence (AI) in positron emission tomography/computed tomography (PET/CT) can be used to improve image quality when the injected activity or the acquisition time needs to be reduced. Particular attention must be paid to ensure that users adopt this technological innovation when outcomes can be improved by its use. The aim of this study was to identify the aspects that need to be analysed and discussed to implement an AI denoising PET/CT algorithm in clinical practice, based on the representations of Nuclear Medicine Technologists (NMT) from Western Switzerland, highlighting the associated barriers and facilitators. METHODS Two focus groups were organised in June and September 2023, involving ten voluntary participants recruited from all types of medical imaging departments, forming a diverse sample of NMT. The interview guide followed the first stage of the revised Ottawa Model of Research Use. A content analysis was performed following the three-stage approach described by Wanlin. The study received ethics clearance. RESULTS Clinical practice, workload, knowledge and resources were the four themes identified as needing consideration before implementing an AI denoising PET/CT algorithm, according to the ten NMT participants (aged 31-60), who were not familiar with this AI tool. The main barriers to implementing this algorithm included workflow challenges, resistance from professionals and lack of education, while the main facilitators were clear explanations and the availability of support to ask questions, such as a "local champion". CONCLUSION To implement a denoising algorithm in PET/CT, several aspects of clinical practice need to be considered to reduce the barriers to its implementation, such as the procedures, the workload and the available resources. Participants also emphasised the importance of clear explanations, education, and support for successful implementation.
IMPLICATIONS FOR PRACTICE To facilitate the implementation of AI tools in clinical practice, it is important to identify the barriers and propose strategies that can mitigate them.
Affiliation(s)
- M Champendal
  - School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, Switzerland
  - Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland
- R S T Ribeiro
  - School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, Switzerland
- H Müller
  - Informatics Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland
  - Medical Faculty, University of Geneva, Geneva, Switzerland
- J O Prior
  - Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland
  - Nuclear Medicine and Molecular Imaging Department, Lausanne University Hospital (CHUV), Lausanne, Switzerland
- C Sá Dos Reis
  - School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, Switzerland
32
Boer OD, El Marroun H, Muetzel RL. Adolescent substance use initiation and long-term neurobiological outcomes: insights, challenges and opportunities. Mol Psychiatry 2024; 29:2211-2222. [PMID: 38409597 DOI: 10.1038/s41380-024-02471-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/30/2023] [Revised: 01/15/2024] [Accepted: 01/30/2024] [Indexed: 02/28/2024]
Abstract
The increased frequency of risk-taking behavior combined with marked neuromaturation has positioned adolescence as a focal point of research into the neural causes and consequences of substance use. However, little work has provided a summary of the links between adolescent-initiated substance use and longer-term brain outcomes. Here we review studies exploring the long-term effects of adolescent-initiated substance use with structural and microstructural neuroimaging. A quarter of all studies reviewed conducted repeated neuroimaging assessments. Long-term alcohol use and tobacco use were consistently associated with smaller frontal cortices and altered white matter microstructure. This association was mostly observed in the anterior cingulate cortex (ACC), insula and subcortical regions in alcohol users, and in the orbitofrontal cortex (OFC) in tobacco users. Long-term cannabis use was mostly related to altered frontal cortices and hippocampal volumes. Interestingly, cannabis users scanned more years after use initiation tended to show smaller measures of these regions, whereas those with fewer years since initiation showed larger measures. Long-term stimulant use tended to show a similar trend to cannabis use in terms of years since initiation, in measures of the putamen, insula and frontal cortex. Long-term opioid use was mostly associated with smaller subcortical and insular volumes. Of note, null findings were reported in all substance use categories, most often in cannabis use studies. In the context of the large variety in study designs, substance use assessments, methods, and sample characteristics, we provide recommendations on how to interpret these findings and considerations for future studies.
Affiliation(s)
- Olga D Boer
  - Department of Psychology, Education and Child Studies - Erasmus School of Social and Behavioral Sciences, Erasmus University Rotterdam, Rotterdam, The Netherlands
  - Department of Child and Adolescent Psychiatry/Psychology, Erasmus MC University Medical Center - Sophia Children's Hospital, Rotterdam, The Netherlands
- Hanan El Marroun
  - Department of Psychology, Education and Child Studies - Erasmus School of Social and Behavioral Sciences, Erasmus University Rotterdam, Rotterdam, The Netherlands
  - Department of Child and Adolescent Psychiatry/Psychology, Erasmus MC University Medical Center - Sophia Children's Hospital, Rotterdam, The Netherlands
- Ryan L Muetzel
  - Department of Child and Adolescent Psychiatry/Psychology, Erasmus MC University Medical Center - Sophia Children's Hospital, Rotterdam, The Netherlands.
  - Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Center, Rotterdam, The Netherlands.
33
Hermoza R, Nascimento JC, Carneiro G. Weakly-supervised preclinical tumor localization associated with survival prediction from lung cancer screening Chest X-ray images. Comput Med Imaging Graph 2024; 115:102395. [PMID: 38729092 DOI: 10.1016/j.compmedimag.2024.102395] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2023] [Revised: 04/13/2024] [Accepted: 04/30/2024] [Indexed: 05/12/2024]
Abstract
In this paper, we hypothesize that it is possible to localize image regions of preclinical tumors in a Chest X-ray (CXR) image by weakly-supervised training of a survival prediction model using a dataset containing CXR images of healthy patients and their time-to-death labels. These visual explanations can empower clinicians in early lung cancer detection and increase patient awareness of their susceptibility to the disease. To test this hypothesis, we train a censor-aware multi-class survival prediction deep learning classifier that is robust to imbalanced training, where classes represent quantized numbers of days for time-to-death prediction. Such a multi-class model allows us to use post-hoc interpretability methods, such as Grad-CAM, to localize image regions of preclinical tumors. For the experiments, we propose a new benchmark based on the National Lung Screening Trial (NLST) dataset to test weakly-supervised preclinical tumor localization and survival prediction models, and results suggest that our proposed method achieves state-of-the-art C-index survival prediction and weakly-supervised preclinical tumor localization results. To our knowledge, this constitutes a pioneering approach in the field, able to produce visual explanations of preclinical events associated with survival prediction results.
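Grad-CAM, the post-hoc interpretability method named in this abstract, weights a convolutional layer's feature maps by the spatially averaged gradients of the target class score and keeps only the positive evidence. A minimal NumPy sketch assuming the activations and gradients have already been extracted from a network (the arrays here are synthetic and the shapes illustrative):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from a conv layer's activations and the gradients
    of the target class score with respect to those activations.

    activations, gradients: arrays of shape (C, H, W).
    Returns an (H, W) heatmap scaled to [0, 1].
    """
    # Channel weights: global-average-pool the gradients (Grad-CAM's alpha_k)
    weights = gradients.mean(axis=(1, 2))                       # (C,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)
    if cam.max() > 0:
        cam = cam / cam.max()                                   # scale to [0, 1]
    return cam

rng = np.random.default_rng(0)
acts, grads = rng.random((8, 7, 7)), rng.random((8, 7, 7))      # synthetic
heatmap = grad_cam(acts, grads)
```

In practice the low-resolution heatmap is upsampled to the CXR image size and overlaid to indicate the regions driving the predicted time-to-death class.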
Affiliation(s)
- Renato Hermoza
  - Australian Institute for Machine Learning, The University of Adelaide, Australia.
- Jacinto C Nascimento
  - Institute for Systems and Robotics (ISR/IST), LARSyS, Instituto Superior Técnico, Universidade de Lisboa, Portugal.
- Gustavo Carneiro
  - Centre for Vision, Speech and Signal Processing (CVSSP), The University of Surrey, UK.
34
Mascagni P, Alapatt D, Sestini L, Yu T, Alfieri S, Morales-Conde S, Padoy N, Perretta S. Applications of artificial intelligence in surgery: clinical, technical, and governance considerations. Cir Esp 2024; 102 Suppl 1:S66-S71. [PMID: 38704146 DOI: 10.1016/j.cireng.2024.04.009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2024] [Accepted: 04/29/2024] [Indexed: 05/06/2024]
Abstract
Artificial intelligence (AI) will power many of the tools in the armamentarium of digital surgeons. AI methods and surgical proofs of concept flourish, but we have yet to witness clinical translation and value. Here we exemplify the potential of AI in the care pathway of colorectal cancer patients and discuss clinical, technical, and governance considerations of major importance for the safe translation of surgical AI for the benefit of our patients and practices.
Affiliation(s)
- Pietro Mascagni
  - IHU Strasbourg, Strasbourg, France
  - Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
  - Università Cattolica del Sacro Cuore, Rome, Italy
- Deepak Alapatt
  - University of Strasbourg, CNRS, INSERM, ICube, UMR7357, Strasbourg, France
- Luca Sestini
  - University of Strasbourg, CNRS, INSERM, ICube, UMR7357, Strasbourg, France
- Tong Yu
  - University of Strasbourg, CNRS, INSERM, ICube, UMR7357, Strasbourg, France
- Sergio Alfieri
  - Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
  - Università Cattolica del Sacro Cuore, Rome, Italy
- Nicolas Padoy
  - IHU Strasbourg, Strasbourg, France
  - University of Strasbourg, CNRS, INSERM, ICube, UMR7357, Strasbourg, France
- Silvana Perretta
  - IHU Strasbourg, Strasbourg, France
  - IRCAD, Research Institute Against Digestive Cancer, Strasbourg, France
  - Nouvel Hôpital Civil, Hôpitaux Universitaires de Strasbourg, Strasbourg, France
35
Armato SG, Katz SI, Frauenfelder T, Jayasekera G, Catino A, Blyth KG, Theodoro T, Rousset P, Nackaerts K, Opitz I. Imaging in pleural Mesothelioma: A review of the 16th International Conference of the International Mesothelioma Interest Group. Lung Cancer 2024; 193:107832. [PMID: 38875938 DOI: 10.1016/j.lungcan.2024.107832] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2024] [Revised: 05/21/2024] [Accepted: 05/27/2024] [Indexed: 06/16/2024]
Abstract
Imaging continues to gain a greater role in the assessment and clinical management of patients with mesothelioma. This communication summarizes the oral presentations from the imaging session at the 2023 International Conference of the International Mesothelioma Interest Group (iMig), which was held in Lille, France from June 26 to 28, 2023. Topics at this session included an overview of best practices for clinical imaging of mesothelioma as reported by an iMig consensus panel, emerging imaging techniques for surgical planning, radiologic assessment of malignant pleural effusion, a radiomics-based transfer learning model to predict patient response to treatment, automated assessment of early contrast enhancement, and tumor thickness for response assessment in peritoneal mesothelioma.
Affiliation(s)
- Samuel G Armato
  - Department of Radiology, The University of Chicago, Chicago, IL, USA.
- Sharyn I Katz
  - Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Thomas Frauenfelder
  - Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Zurich, Switzerland
- Geeshath Jayasekera
  - Glasgow Pleural Disease Unit, Queen Elizabeth University Hospital, Glasgow, UK
  - School of Cancer Sciences, University of Glasgow, UK
- Annamaria Catino
  - Medical Thoracic Oncology Unit, IRCCS Istituto Tumori "Giovanni Paolo II", Bari, Italy
- Kevin G Blyth
  - Cancer Research UK Scotland Centre, Glasgow, UK
  - Glasgow Pleural Disease Unit, Queen Elizabeth University Hospital, Glasgow, UK
  - School of Cancer Sciences, University of Glasgow, UK
- Taylla Theodoro
  - Institute of Computing, University of Campinas, Campinas, Brazil
  - Cancer Research UK Scotland Centre, Glasgow, UK
- Pascal Rousset
  - Department of Radiology, Lyon Sud University Hospital, Hospices Civils de Lyon, Lyon 1 University, Pierre-Bénite, France
- Kristiaan Nackaerts
  - Department of Pulmonology/Respiratory Oncology, KU Leuven, University Hospitals Leuven, Leuven, Belgium
- Isabelle Opitz
  - Department of Thoracic Surgery, University Hospital Zurich, Zurich, Switzerland
36
Vakli P, Weiss B, Rozmann D, Erőss G, Nárai Á, Hermann P, Vidnyánszky Z. The effect of head motion on brain age prediction using deep convolutional neural networks. Neuroimage 2024; 294:120646. [PMID: 38750907 DOI: 10.1016/j.neuroimage.2024.120646] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2024] [Revised: 05/10/2024] [Accepted: 05/12/2024] [Indexed: 05/23/2024] Open
Abstract
Deep learning can be used effectively to predict participants' age from brain magnetic resonance imaging (MRI) data, and a growing body of evidence suggests that the difference between predicted and chronological age, referred to as the brain-predicted age difference (brain-PAD), is related to various neurological and neuropsychiatric disease states. A crucial aspect of the applicability of brain-PAD as a biomarker of individual brain health is whether and how brain-predicted age is affected by MR image artifacts commonly encountered in clinical settings. To investigate this issue, we trained and validated two different 3D convolutional neural network (CNN) architectures from scratch and tested the models on a separate dataset consisting of motion-free and motion-corrupted T1-weighted MRI scans from the same participants, the quality of which was rated by neuroradiologists from a clinical diagnostic point of view. Our results revealed a systematic increase in brain-PAD with worsening image quality for both models. This effect was also observed for images that were deemed usable from a clinical perspective, with brains appearing older in medium- than in good-quality images. These findings were also supported by significant associations between brain-PAD and standard image quality metrics, indicating larger brain-PAD for lower-quality images. Our results demonstrate a spurious effect of advanced brain aging as a result of head motion and underline the importance of controlling for image quality when using brain-predicted age based on structural neuroimaging data as a proxy measure for brain health.
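The brain-PAD measure at the centre of this study is simply predicted minus chronological age, so the reported motion effect amounts to a within-subject shift in that difference between scan conditions. A toy sketch of the computation; all ages below are invented, not taken from the paper:

```python
import numpy as np

# Hypothetical predicted and chronological ages for the same five
# participants, scanned once motion-free and once motion-corrupted.
chron       = np.array([23.0, 34.0, 45.0, 56.0, 67.0])
pred_clean  = np.array([25.1, 33.2, 46.0, 54.8, 69.1])
pred_motion = np.array([28.3, 36.9, 49.2, 58.1, 72.4])

# brain-PAD = brain-predicted age minus chronological age
pad_clean  = pred_clean - chron
pad_motion = pred_motion - chron

# The systematic bias the paper reports would appear as a positive mean
# within-subject shift in brain-PAD on the motion-corrupted scans.
shift = (pad_motion - pad_clean).mean()
```

Because both scans share the same chronological age, the shift reduces to the mean change in predicted age, which is why head motion alone can make brains appear spuriously older.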
Affiliation(s)
- Pál Vakli
  - Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Budapest 1117, Hungary.
- Béla Weiss
  - Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Budapest 1117, Hungary
  - Biomatics and Applied Artificial Intelligence Institute, John von Neumann Faculty of Informatics, Óbuda University, Budapest 1034, Hungary.
- Dorina Rozmann
  - Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Budapest 1117, Hungary
- György Erőss
  - Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Budapest 1117, Hungary
- Ádám Nárai
  - Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Budapest 1117, Hungary
  - Doctoral School of Biology and Sportbiology, Institute of Biology, Faculty of Sciences, University of Pécs, Pécs 7624, Hungary
- Petra Hermann
  - Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Budapest 1117, Hungary
- Zoltán Vidnyánszky
  - Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Budapest 1117, Hungary.
37
Zhang J, Fang J, Xu Y, Si G. How AI and Robotics Will Advance Interventional Radiology: Narrative Review and Future Perspectives. Diagnostics (Basel) 2024; 14:1393. [PMID: 39001283 PMCID: PMC11241154 DOI: 10.3390/diagnostics14131393] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2024] [Revised: 06/20/2024] [Accepted: 06/26/2024] [Indexed: 07/16/2024] Open
Abstract
The rapid advancement of artificial intelligence (AI) and robotics has led to significant progress in various medical fields including interventional radiology (IR). This review focuses on the research progress and applications of AI and robotics in IR, including deep learning (DL), machine learning (ML), and convolutional neural networks (CNNs) across specialties such as oncology, neurology, and cardiology, aiming to explore potential directions in future interventional treatments. To ensure the breadth and depth of this review, we implemented a systematic literature search strategy, selecting research published within the last five years. We conducted searches in databases such as PubMed and Google Scholar to find relevant literature. Special emphasis was placed on selecting large-scale studies to ensure the comprehensiveness and reliability of the results. This review summarizes the latest research directions and developments, ultimately analyzing their corresponding potential and limitations. It furnishes essential information and insights for researchers, clinicians, and policymakers, potentially propelling advancements and innovations within the domains of AI and IR. Finally, our findings indicate that although AI and robotics technologies are not yet widely applied in clinical settings, they are evolving across multiple aspects and are expected to significantly improve the processes and efficacy of interventional treatments.
Affiliation(s)
- Jiaming Zhang
  - Department of Radiology, Clinical Medical College, Southwest Medical University, Luzhou 646699, China
- Jiayi Fang
  - Department of Radiology, Clinical Medical College, Southwest Medical University, Luzhou 646699, China
- Yanneng Xu
  - Department of Radiology, Affiliated Traditional Chinese Medicine Hospital, Southwest Medical University, Luzhou 646699, China
- Guangyan Si
  - Department of Radiology, Affiliated Traditional Chinese Medicine Hospital, Southwest Medical University, Luzhou 646699, China
38
Kanyal A, Mazumder B, Calhoun VD, Preda A, Turner J, Ford J, Ye DH. Multi-modal deep learning from imaging genomic data for schizophrenia classification. Front Psychiatry 2024; 15:1384842. [PMID: 39006822 PMCID: PMC11239396 DOI: 10.3389/fpsyt.2024.1384842] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/11/2024] [Accepted: 05/23/2024] [Indexed: 07/16/2024] Open
Abstract
Background Schizophrenia (SZ) is a psychiatric condition that adversely affects an individual's cognitive, emotional, and behavioral aspects. The etiology of SZ, although extensively studied, remains unclear, as multiple factors come together to contribute toward its development. There is a consistent body of evidence documenting the presence of structural and functional deviations in the brains of individuals with SZ. Moreover, the hereditary aspect of SZ is supported by the significant involvement of genomic markers. Hence the need arises to investigate SZ from a multi-modal perspective and to develop approaches for improved detection. Methods Our proposed method employed a deep learning framework combining features from structural magnetic resonance imaging (sMRI), functional magnetic resonance imaging (fMRI), and genetic markers such as single nucleotide polymorphisms (SNPs). For sMRI, we used a pre-trained DenseNet to extract the morphological features. To identify the most relevant functional connections in fMRI and SNPs linked to SZ, we applied a 1-dimensional convolutional neural network (CNN) followed by layerwise relevance propagation (LRP). Finally, we concatenated these features across modalities and fed them to the extreme gradient boosting (XGBoost) tree-based classifier to classify SZ from healthy controls (HC). Results Experimental evaluation on a clinical dataset demonstrated that, compared to the outcomes obtained from each modality individually, our proposed multi-modal approach performed classification of SZ individuals from HC with an improved accuracy of 79.01%. Conclusion We proposed a deep learning based framework that selects multi-modal (sMRI, fMRI and genetic) features efficiently and fuses them to obtain improved classification scores.
Additionally, by using Explainable AI (XAI), we were able to pinpoint and validate significant functional network connections and SNPs that contributed the most toward SZ classification, providing necessary interpretation behind our findings.
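The late-fusion step of this pipeline, concatenating per-modality feature vectors and feeding them to a boosted-tree classifier, can be sketched on synthetic data as follows. Everything below is illustrative: the feature dimensions and labels are made up, and scikit-learn's GradientBoostingClassifier stands in for XGBoost:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 200

# Stand-in feature blocks for the three modalities (synthetic; the paper
# extracts these with DenseNet for sMRI and a 1-D CNN + LRP for fMRI/SNPs).
smri = rng.standard_normal((n, 16))
fmri = rng.standard_normal((n, 32))
snp  = rng.standard_normal((n, 8))
# Synthetic SZ-vs-HC label driven by one feature per modality
y = (smri[:, 0] + fmri[:, 0] + snp[:, 0] > 0).astype(int)

# Late fusion: concatenate the modality features into one vector per subject,
# then train a boosted-tree classifier on the fused representation.
X = np.concatenate([smri, fmri, snp], axis=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Fusing at the feature level lets the tree ensemble exploit cross-modality interactions that per-modality classifiers cannot see, which is the effect the reported accuracy gain reflects.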
Affiliation(s)
- Ayush Kanyal
  - Department of Computer Science, Georgia State University, Atlanta, GA, United States
- Badhan Mazumder
  - Department of Computer Science, Georgia State University, Atlanta, GA, United States
- Vince D Calhoun
  - Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Atlanta, GA, United States
- Adrian Preda
  - Department of Psychiatry and Human Behavior, University of California, Irvine, Irvine, CA, United States
- Jessica Turner
  - Department of Psychiatry and Behavioral Health, The Ohio State University, Columbus, OH, United States
- Judith Ford
  - Department of Psychiatry, University of California, San Francisco, San Francisco, CA, United States
- Dong Hye Ye
  - Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Atlanta, GA, United States
39
Savvidou F, Tegos SA, Diamantoulakis PD, Karagiannidis GK. Passive Radar Sensing for Human Activity Recognition: A Survey. IEEE OPEN JOURNAL OF ENGINEERING IN MEDICINE AND BIOLOGY 2024; 5:700-706. [PMID: 39184964 PMCID: PMC11342921 DOI: 10.1109/ojemb.2024.3420747] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2023] [Revised: 04/19/2024] [Accepted: 06/25/2024] [Indexed: 08/27/2024] Open
Abstract
Continuous and unobtrusive monitoring of daily human activities in homes can potentially improve the quality of life and prolong independent living for the elderly and people with chronic diseases by recognizing normal daily activities and detecting gradual changes in their conditions. However, existing human activity recognition (HAR) solutions employ wearable and video-based sensors, which either require dedicated devices to be carried by the user or raise privacy concerns. Radar sensors enable non-intrusive long-term monitoring, while they can exploit existing communication systems, e.g., Wi-Fi, as illuminators of opportunity. This survey provides an overview of passive radar system architectures, signal processing techniques, feature extraction, and machine learning's role in HAR applications. Moreover, it points out challenges in wireless human activity sensing research like robustness, privacy, and multiple user activity sensing and suggests possible future directions, including the coexistence of sensing and communications and the construction of open datasets.
Affiliation(s)
- Foteini Savvidou
  - Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- Sotiris A. Tegos
  - Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- George K. Karagiannidis
  - Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
  - Artificial Intelligence & Cyber Systems Research Center, Lebanese American University, Beirut 03797751, Lebanon
40
Carrilero-Mardones M, Parras-Jurado M, Nogales A, Pérez-Martín J, Díez FJ. Deep Learning for Describing Breast Ultrasound Images with BI-RADS Terms. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2024:10.1007/s10278-024-01155-1. [PMID: 38926264 DOI: 10.1007/s10278-024-01155-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/16/2024] [Revised: 05/12/2024] [Accepted: 05/13/2024] [Indexed: 06/28/2024]
Abstract
Breast cancer is the most common cancer in women. Ultrasound is one of the most used techniques for diagnosis, but an expert in the field is necessary to interpret the test. Computer-aided diagnosis (CAD) systems aim to help physicians during this process. Experts use the Breast Imaging-Reporting and Data System (BI-RADS) to describe tumors according to several features (shape, margin, orientation...) and estimate their malignancy, with a common language. To aid in tumor diagnosis with BI-RADS explanations, this paper presents a deep neural network for tumor detection, description, and classification. An expert radiologist described with BI-RADS terms 749 nodules taken from public datasets. The YOLO detection algorithm is used to obtain Regions of Interest (ROIs), and then a model, based on a multi-class classification architecture, receives as input each ROI and outputs the BI-RADS descriptors, the BI-RADS classification (with 6 categories), and a Boolean classification of malignancy. Six hundred of the nodules were used for 10-fold cross-validation (CV) and 149 for testing. The accuracy of this model was compared with state-of-the-art CNNs for the same task. This model outperforms plain classifiers in the agreement with the expert (Cohen's kappa), with a mean over the descriptors of 0.58 in CV and 0.64 in testing, while the second best model yielded kappas of 0.55 and 0.59, respectively. Adding YOLO to the model significantly enhances the performance (0.16 in CV and 0.09 in testing). More importantly, training the model with BI-RADS descriptors enables the explainability of the Boolean malignancy classification without reducing accuracy.
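Cohen's kappa, the agreement metric used throughout this evaluation, corrects the observed agreement between two raters for the agreement expected by chance from their label distributions. A self-contained sketch for a single hypothetical BI-RADS descriptor; the ratings below are invented, not taken from the paper:

```python
import numpy as np

def cohen_kappa(a, b):
    """Cohen's kappa: agreement between two raters beyond chance."""
    a, b = np.asarray(a), np.asarray(b)
    labels = np.union1d(a, b)
    po = (a == b).mean()                           # observed agreement
    pe = sum((a == lab).mean() * (b == lab).mean()  # chance agreement
             for lab in labels)
    return (po - pe) / (1 - pe)

# Hypothetical categorical BI-RADS descriptor (e.g. 'shape'):
# expert radiologist vs. model on ten nodules
expert = [0, 1, 1, 2, 0, 1, 2, 2, 0, 1]
model  = [0, 1, 1, 2, 0, 1, 1, 2, 0, 2]
kappa = cohen_kappa(expert, model)
```

Here the raters agree on 8 of 10 nodules (po = 0.8) with chance agreement pe = 0.34, giving a kappa of about 0.70, in the same range as the per-descriptor agreements reported above.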
Affiliation(s)
- Mikel Carrilero-Mardones
  - Department of Artificial Intelligence, Universidad Nacional de Educacion a Distancia (UNED), Madrid, Spain.
- Alberto Nogales
  - CEIEC Research Institute, Universidad Francisco de Vitoria, Madrid, Spain
- Jorge Pérez-Martín
  - Department of Artificial Intelligence, Universidad Nacional de Educacion a Distancia (UNED), Madrid, Spain
- Francisco Javier Díez
  - Department of Artificial Intelligence, Universidad Nacional de Educacion a Distancia (UNED), Madrid, Spain
Collapse
|
41
|
Auzine MM, Heenaye-Mamode Khan M, Baichoo S, Gooda Sahib N, Bissoonauth-Daiboo P, Gao X, Heetun Z. Development of an ensemble CNN model with explainable AI for the classification of gastrointestinal cancer. PLoS One 2024; 19:e0305628. [PMID: 38917159 PMCID: PMC11198752 DOI: 10.1371/journal.pone.0305628] [Received: 01/24/2024] [Accepted: 06/02/2024] [Indexed: 06/27/2024]
Abstract
The implementation of AI-assisted cancer detection systems in clinical environments has faced numerous hurdles, mainly because of the restricted explainability of their underlying mechanisms, even though such detection systems have proven to be highly effective. Medical practitioners are skeptical about adopting AI-assisted diagnoses due to the latter's inability to be transparent about decision-making processes. In this respect, explainable artificial intelligence (XAI) has emerged to provide explanations for model predictions, thereby overcoming the computational black-box problem associated with AI systems. In this particular research, the focal point has been the exploration of the Shapley additive explanations (SHAP) and local interpretable model-agnostic explanations (LIME) approaches, which enable model prediction explanations. This study used an ensemble model consisting of three convolutional neural networks (CNNs): InceptionV3, InceptionResNetV2 and VGG16, which combined their respective predictions by averaging. These models were trained on the Kvasir dataset, which consists of pathological findings related to gastrointestinal cancer. An accuracy of 96.89% and an F1-score of 96.877% were attained by our ensemble model. Following the training of the ensemble model, we employed SHAP and LIME to analyze images from the three classes, aiming to provide explanations regarding the deterministic features influencing the model's predictions. The results obtained from this analysis demonstrated a positive and encouraging advancement in the exploration of XAI approaches, specifically in the context of gastrointestinal cancer detection within the healthcare domain.
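The ensemble described here averages the predictions of three CNNs (soft voting). A minimal sketch of that averaging step, with illustrative class names and probabilities rather than values from the paper:

```python
def average_ensemble(prob_lists):
    """Soft voting: average per-class probabilities from several models."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    return [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]

def predict(prob_lists, class_names):
    """Return the class with the highest averaged probability."""
    avg = average_ensemble(prob_lists)
    best = max(range(len(avg)), key=avg.__getitem__)
    return class_names[best], avg

# Hypothetical softmax outputs from three models for one image.
p_inception = [0.7, 0.2, 0.1]
p_resnet    = [0.4, 0.5, 0.1]
p_vgg       = [0.6, 0.3, 0.1]
label, avg = predict([p_inception, p_resnet, p_vgg],
                     ["class_a", "class_b", "class_c"])
```

Averaging calibrated probabilities (rather than hard-voting on labels) lets a confident minority model outweigh two weakly confident ones.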
Affiliation(s)
- Sunilduth Baichoo, Department of Software and Information Systems, University of Mauritius, Reduit, Mauritius
- Nuzhah Gooda Sahib, Department of Software and Information Systems, University of Mauritius, Reduit, Mauritius
- Xiaohong Gao, Department of Computer Science, Middlesex University London, London, United Kingdom
- Zaid Heetun, Center for Gastroenterology and Hepatology, Dr Abdool Gaffoor Jeetoo Hospital, Port Louis, Mauritius

42
Fu C, Zhou Z, Xin Y, Weibel R. Reasoning cartographic knowledge in deep learning-based map generalization with explainable AI. Int J Geogr Inf Sci 2024; 38:2061-2082. [PMID: 39318700 PMCID: PMC11418907 DOI: 10.1080/13658816.2024.2369535] [Received: 01/05/2024] [Revised: 06/14/2024] [Accepted: 06/14/2024] [Indexed: 09/26/2024]
Abstract
Cartographic map generalization involves complex rules, and full automation has still not been achieved, despite many efforts over the past few decades. Pioneering studies show that some map generalization tasks can be partially automated by deep neural networks (DNNs). However, DNNs have still been used as black-box models in previous studies. We argue that integrating explainable AI (XAI) into a DL-based map generalization process can give more insights for developing and refining the DNNs by understanding exactly what cartographic knowledge is learned. Following an XAI framework for an empirical case study, visual analytics and quantitative experiments were applied to explain the importance of input features regarding the prediction of a pre-trained ResU-Net model. This experimental case study finds that the XAI-based visualization results can easily be interpreted by human experts. With the proposed XAI workflow, we further find that the DNN pays more attention to the building boundaries than to the interior parts of the buildings. We thus suggest that boundary intersection over union is a better evaluation metric than the commonly used intersection over union for qualifying raster-based map generalization results. Overall, this study shows the necessity and feasibility of integrating XAI as part of future DL-based map generalization development frameworks.
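The suggested boundary intersection over union can be sketched for small binary rasters as follows. This is a toy implementation, not the authors' code; in particular, the 4-neighbour definition of a boundary pixel is an assumption:

```python
def iou(a, b):
    """Intersection over union of two same-sized 0/1 masks (lists of lists)."""
    inter = sum(x & y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    union = sum(x | y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return inter / union if union else 1.0

def boundary(mask):
    """Foreground pixels with at least one 4-neighbour outside the foreground."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if not mask[i][j]:
                continue
            nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            # Off-grid neighbours count as background, so image edges are boundary.
            if any(not (0 <= x < h and 0 <= y < w) or not mask[x][y]
                   for x, y in nbrs):
                out[i][j] = 1
    return out

def boundary_iou(a, b):
    """IoU restricted to the boundary pixels of each mask."""
    return iou(boundary(a), boundary(b))
```

For a full 5x5 building footprint versus the same footprint with one row cropped, plain IoU stays high (0.8) while boundary IoU drops noticeably, which is the behaviour the abstract argues makes it the better metric for generalized outlines.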
Affiliation(s)
- Cheng Fu, Department of Geography, University of Zurich, Zurich, Switzerland
- Zhiyong Zhou, Department of Geography, University of Zurich, Zurich, Switzerland
- Yanan Xin, Institute of Cartography and Geoinformation, ETH Zurich, Zurich, Switzerland
- Robert Weibel, Department of Geography, University of Zurich, Zurich, Switzerland

43
Fanizzi A, Comes MC, Bove S, Cavalera E, de Franco P, Di Rito A, Errico A, Lioce M, Pati F, Portaluri M, Saponaro C, Scognamillo G, Troiano I, Troiano M, Zito FA, Massafra R. Explainable prediction model for the human papillomavirus status in patients with oropharyngeal squamous cell carcinoma using CNN on CT images. Sci Rep 2024; 14:14276. [PMID: 38902523 PMCID: PMC11189928 DOI: 10.1038/s41598-024-65240-9] [Received: 11/23/2023] [Accepted: 06/18/2024] [Indexed: 06/22/2024]
Abstract
Several studies have emphasised that positive and negative human papillomavirus (HPV+ and HPV-, respectively) oropharyngeal squamous cell carcinomas (OPSCC) have distinct molecular profiles, tumor characteristics, and disease outcomes. Different radiomics-based prediction models have been proposed, also using innovative techniques such as Convolutional Neural Networks (CNNs). Although some of these models reached encouraging predictive performances, evidence explaining the role of radiomic features in achieving a specific outcome is scarce. In this paper, we propose some preliminary results related to an explainable CNN-based model to predict HPV status in OPSCC patients. We extracted the Gross Tumor Volume (GTV) of pre-treatment CT images related to 499 patients (356 HPV+ and 143 HPV-) included in the OPC-Radiomics public dataset to train an end-to-end Inception-V3 CNN architecture. We also collected a multicentric dataset consisting of 92 patients (43 HPV+, 49 HPV-), which was employed as an independent test set. Finally, we applied the Gradient-weighted Class Activation Mapping (Grad-CAM) technique to highlight the most informative areas with respect to the predicted outcome. The proposed model reached an AUC value of 73.50% on the independent test set. As a result of the Grad-CAM algorithm, the most informative areas related to the correctly classified HPV+ patients were located in the intratumoral area; conversely, for the correctly classified HPV- patients the most important areas referred to the tumor edges. Finally, since the proposed model supplements the classification accuracy with a visualization of the areas of greatest interest for predictive purposes for each case examined, it could contribute to increased confidence in using computer-based predictive models in actual clinical practice.
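Grad-CAM, as used here, weights each convolutional feature map by its spatially averaged gradient and applies a ReLU to the weighted sum. A framework-free sketch operating on precomputed activations and gradients (illustrative of the technique, not the authors' pipeline):

```python
def grad_cam(activations, gradients):
    """Grad-CAM heat map from one conv layer.

    activations, gradients: [C][H][W] nested lists, where gradients are
    d(class score)/d(activation). Returns an H x W map; high values mark
    regions that pushed the prediction toward the target class.
    """
    C, H, W = len(activations), len(activations[0]), len(activations[0][0])
    # alpha_k: global-average-pooled gradient for channel k (its importance).
    alphas = [sum(sum(row) for row in gradients[k]) / (H * W) for k in range(C)]
    # Weighted sum over channels, then ReLU to keep only positive evidence.
    cam = [[max(0.0, sum(alphas[k] * activations[k][i][j] for k in range(C)))
            for j in range(W)] for i in range(H)]
    return cam
```

In practice the map is then upsampled to the input resolution and overlaid on the CT slice, which is how the intratumoral versus tumor-edge patterns reported above are read off.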
Affiliation(s)
- Annarita Fanizzi, Laboratorio Biostatistica e Bioinformatica, I.R.C.C.S. Istituto Tumori 'Giovanni Paolo II', Bari, Italy
- Maria Colomba Comes, Laboratorio Biostatistica e Bioinformatica, I.R.C.C.S. Istituto Tumori 'Giovanni Paolo II', Bari, Italy
- Samantha Bove, Laboratorio Biostatistica e Bioinformatica, I.R.C.C.S. Istituto Tumori 'Giovanni Paolo II', Bari, Italy
- Elisa Cavalera, Radiation Oncology Unit, Dipartimento di Oncoematologia, Ospedale Vito Fazzi, Lecce, Italy
- Paola de Franco, Radiation Oncology Unit, Dipartimento di Oncoematologia, Ospedale Vito Fazzi, Lecce, Italy
- Angelo Errico, Ospedale Monsignor Raffaele Dimiccoli, Barletta, Italy
- Marco Lioce, Unità Operativa Complessa di Radioterapia, I.R.C.C.S. Istituto Tumori 'Giovanni Paolo II', Bari, Italy
- Concetta Saponaro, Unità Operativa Complessa di Anatomia Patologica, I.R.C.C.S. Istituto Tumori 'Giovanni Paolo II', Bari, Italy
- Giovanni Scognamillo, Unità Operativa Complessa di Radioterapia, I.R.C.C.S. Istituto Tumori 'Giovanni Paolo II', Bari, Italy
- Ippolito Troiano, Radiation Oncology Department, Fondazione IRCCS "Casa Sollievo della Sofferenza", San Giovanni Rotondo, Italy
- Michele Troiano, Radiation Oncology Department, Fondazione IRCCS "Casa Sollievo della Sofferenza", San Giovanni Rotondo, Italy
- Francesco Alfredo Zito, Unità Operativa Complessa di Anatomia Patologica, I.R.C.C.S. Istituto Tumori 'Giovanni Paolo II', Bari, Italy
- Raffaella Massafra, Laboratorio Biostatistica e Bioinformatica, I.R.C.C.S. Istituto Tumori 'Giovanni Paolo II', Bari, Italy

44
Yao J, Lim J, Lim GYS, Ong JCL, Ke Y, Tan TF, Tan TE, Vujosevic S, Ting DSW. Novel artificial intelligence algorithms for diabetic retinopathy and diabetic macular edema. Eye Vis (Lond) 2024; 11:23. [PMID: 38880890 PMCID: PMC11181581 DOI: 10.1186/s40662-024-00389-y] [Received: 01/03/2024] [Accepted: 05/09/2024] [Indexed: 06/18/2024]
Abstract
BACKGROUND Diabetic retinopathy (DR) and diabetic macular edema (DME) are major causes of visual impairment that challenge global vision health. New strategies are needed to tackle these growing global health problems, and the integration of artificial intelligence (AI) into ophthalmology has the potential to revolutionize DR and DME management to meet these challenges. MAIN TEXT This review discusses the latest AI-driven methodologies in the context of DR and DME in terms of disease identification, patient-specific disease profiling, and short-term and long-term management. This includes current screening and diagnostic systems and their real-world implementation, lesion detection and analysis, disease progression prediction, and treatment response models. It also highlights the technical advancements that have been made in these areas. Despite these advancements, there are obstacles to the widespread adoption of these technologies in clinical settings, including regulatory and privacy concerns, the need for extensive validation, and integration with existing healthcare systems. We also explore the disparity between the potential of AI models and their actual effectiveness in real-world applications. CONCLUSION AI has the potential to revolutionize the management of DR and DME, offering more efficient and precise tools for healthcare professionals. However, overcoming challenges in deployment, regulatory compliance, and patient privacy is essential for these technologies to realize their full potential. Future research should aim to bridge the gap between technological innovation and clinical application, ensuring AI tools integrate seamlessly into healthcare workflows to enhance patient outcomes.
Affiliation(s)
- Jie Yao, Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore; Duke-NUS Medical School, Singapore, Singapore
- Joshua Lim, Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Gilbert Yong San Lim, Duke-NUS Medical School, Singapore, Singapore; SingHealth AI Health Program, Singapore, Singapore
- Jasmine Chiat Ling Ong, Duke-NUS Medical School, Singapore, Singapore; Division of Pharmacy, Singapore General Hospital, Singapore, Singapore
- Yuhe Ke, Department of Anesthesiology and Perioperative Science, Singapore General Hospital, Singapore, Singapore
- Ting Fang Tan, Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore
- Tien-En Tan, Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore; Duke-NUS Medical School, Singapore, Singapore
- Stela Vujosevic, Department of Biomedical, Surgical and Dental Sciences, University of Milan, Milan, Italy; Eye Clinic, IRCCS MultiMedica, Milan, Italy
- Daniel Shu Wei Ting, Singapore National Eye Centre, Singapore Eye Research Institute, 11 Third Hospital Avenue, Singapore, 168751, Singapore; Duke-NUS Medical School, Singapore, Singapore; SingHealth AI Health Program, Singapore, Singapore

45
Yoo SW, Yang S, Kim JE, Huh KH, Lee SS, Heo MS, Yi WJ. CACSNet for automatic robust classification and segmentation of carotid artery calcification on panoramic radiographs using a cascaded deep learning network. Sci Rep 2024; 14:13894. [PMID: 38886356 PMCID: PMC11183138 DOI: 10.1038/s41598-024-64265-4] [Received: 12/05/2023] [Accepted: 06/06/2024] [Indexed: 06/20/2024]
Abstract
Stroke is one of the major causes of death worldwide, and is closely associated with atherosclerosis of the carotid artery. Panoramic radiographs (PRs) are routinely used in dental practice, and can be used to visualize carotid artery calcification (CAC). The purpose of this study was to automatically and robustly classify and segment CACs with large variations in size, shape, and location, including those overlapping with anatomical structures, based on deep learning analysis of PRs. We developed a cascaded deep learning network (CACSNet) consisting of classification and segmentation networks for CACs on PRs. This network was trained on ground truth data accurately determined with reference to CT images, using the Tversky loss function with weights optimized to balance precision and recall. CACSNet with EfficientNet-B4 achieved an AUC of 0.996, accuracy of 0.985, sensitivity of 0.980, and specificity of 0.988 in classifying PRs as normal or abnormal. Segmentation performances for CAC lesions were 0.595 for the Jaccard index, 0.722 for the Dice similarity coefficient, 0.749 for precision, and 0.756 for recall. Our network demonstrated superior classification performance to previous methods based on PRs, and had comparable segmentation performance to studies based on other imaging modalities. Therefore, CACSNet can be used for robust classification and segmentation of CAC lesions that are morphologically variable and overlap with surrounding structures over the entire posterior inferior region of the mandibular angle on PRs.
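The Tversky loss mentioned here generalizes the Dice loss by weighting false positives and false negatives separately, which is how the precision/recall trade-off is tuned. A minimal binary-mask version (the default weights below are illustrative, not the paper's values):

```python
def tversky_loss(pred, target, alpha=0.7, beta=0.3, eps=1e-7):
    """Tversky loss for flattened 0/1 masks.

    alpha weights false positives, beta weights false negatives;
    alpha = beta = 0.5 recovers the Dice loss exactly.
    """
    tp = sum(p * t for p, t in zip(pred, target))          # true positives
    fp = sum(p * (1 - t) for p, t in zip(pred, target))    # false positives
    fn = sum((1 - p) * t for p, t in zip(pred, target))    # false negatives
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky
```

With alpha > beta, over-segmentation (false positives) is penalized more than under-segmentation, pushing the network toward higher precision; swapping the weights pushes toward higher recall.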
Affiliation(s)
- Suh-Woo Yoo, Department of Oral and Maxillofacial Radiology, School of Dentistry, Seoul National University, Seoul, Korea
- Su Yang, Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Korea
- Jo-Eun Kim, Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Korea
- Kyung-Hoe Huh, Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Korea
- Sam-Sun Lee, Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Korea
- Min-Suk Heo, Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Korea
- Won-Jin Yi, Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Korea; Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Korea

46
Xu W, Liang X, Chen L, Hong W, Hu X. Biobanks in chronic disease management: A comprehensive review of strategies, challenges, and future directions. Heliyon 2024; 10:e32063. [PMID: 38868047 PMCID: PMC11168399 DOI: 10.1016/j.heliyon.2024.e32063] [Received: 12/22/2023] [Revised: 05/27/2024] [Accepted: 05/28/2024] [Indexed: 06/14/2024]
Abstract
Biobanks, through the collection and storage of patient blood, tissue, genomic, and other biological samples, provide unique and rich resources for the research and management of chronic diseases such as cardiovascular diseases, diabetes, and cancer. These samples contain valuable cellular and molecular level information that can be utilized to decipher the pathogenesis of diseases, guide the development of novel diagnostic technologies, treatment methods, and personalized medical strategies. This article first outlines the historical evolution of biobanks, their classification, and the impact of technological advancements. Subsequently, it elaborates on the significant role of biobanks in revealing molecular biomarkers of chronic diseases, promoting the translation of basic research to clinical applications, and achieving individualized treatment and management. Additionally, challenges such as standardization of sample processing, information privacy, and security are discussed. Finally, from the perspectives of policy support, regulatory improvement, and public participation, this article provides a forecast on the future development directions of biobanks and strategies to address challenges, aiming to safeguard and enhance their unique advantages in supporting chronic disease prevention and treatment.
Affiliation(s)
- Wanna Xu, Shenzhen Center for Chronic Disease Control, Shenzhen Institute of Dermatology, Shenzhen, 518020, China
- Xiongshun Liang, Shenzhen Center for Chronic Disease Control, Shenzhen Institute of Dermatology, Shenzhen, 518020, China
- Lin Chen, Shenzhen Center for Chronic Disease Control, Shenzhen Institute of Dermatology, Shenzhen, 518020, China
- Wenxu Hong, Shenzhen Center for Chronic Disease Control, Shenzhen Institute of Dermatology, Shenzhen, 518020, China
- Xuqiao Hu, Shenzhen Center for Chronic Disease Control, Shenzhen Institute of Dermatology, Shenzhen, 518020, China; Second Clinical Medical College of Jinan University, First Affiliated Hospital of Southern University of Science and Technology (Shenzhen People's Hospital), Shenzhen, China

47
Johnson H, Tipirneni-Sajja A. Explainable AI to Facilitate Understanding of Neural Network-Based Metabolite Profiling Using NMR Spectroscopy. Metabolites 2024; 14:332. [PMID: 38921467 PMCID: PMC11205398 DOI: 10.3390/metabo14060332] [Received: 05/21/2024] [Revised: 06/05/2024] [Accepted: 06/10/2024] [Indexed: 06/27/2024]
Abstract
Neural networks (NNs) are emerging as a rapid and scalable method for quantifying metabolites directly from nuclear magnetic resonance (NMR) spectra, but the nonlinear nature of NNs precludes understanding of how a model makes predictions. This study implements an explainable artificial intelligence algorithm called integrated gradients (IG) to elucidate which regions of input spectra are the most important for the quantification of specific analytes. The approach is first validated in simulated mixture spectra of eight aqueous metabolites and then investigated in experimentally acquired lipid spectra of a reference standard mixture and a murine hepatic extract. The IG method revealed that, like a human spectroscopist, NNs recognize and quantify analytes based on an analyte's respective resonance line-shapes, amplitudes, and frequencies. NNs can compensate for peak overlap and prioritize the specific resonances most important for concentration determination. Further, we show how modifying an NN training dataset can affect how a model makes decisions, and we provide examples of how this approach can be used to debug issues with model performance. Overall, the results show that the IG technique facilitates a visual and quantitative understanding of how model inputs relate to model outputs, potentially making NNs a more attractive option for targeted and automated NMR-based metabolomics.
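Integrated gradients attributes a prediction by accumulating gradients along a straight-line path from a baseline input to the actual input, so the attributions sum to the change in the model's output (the completeness property). A small numeric sketch with finite-difference gradients on an arbitrary function, not the authors' NMR models:

```python
def integrated_gradients(f, x, baseline, steps=200, h=1e-5):
    """Approximate IG attributions for a scalar function f of a list input.

    Uses a midpoint Riemann sum over the path baseline -> x and
    central-difference gradients, so it works for any smooth f.
    """
    def grad(point):
        g = []
        for i in range(len(point)):
            up, dn = list(point), list(point)
            up[i] += h
            dn[i] -= h
            g.append((f(up) - f(dn)) / (2 * h))
        return g

    dim = len(x)
    total = [0.0] * dim
    for s in range(1, steps + 1):
        t = (s - 0.5) / steps                      # midpoint of each interval
        pt = [baseline[i] + t * (x[i] - baseline[i]) for i in range(dim)]
        g = grad(pt)
        for i in range(dim):
            total[i] += g[i]
    # Attribution_i = (x_i - baseline_i) * average gradient along the path.
    return [(x[i] - baseline[i]) * total[i] / steps for i in range(dim)]
```

For spectra, `x` would be an input spectrum, `baseline` typically a zero spectrum, and the per-point attributions highlight which resonances drove a concentration estimate.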
Affiliation(s)
- Aaryani Tipirneni-Sajja, Magnetic Resonance Imaging and Spectroscopy Lab, Department of Biomedical Engineering, The University of Memphis, Memphis, TN 38152, USA

48
Ye DX, Yu JW, Li R, Hao YD, Wang TY, Yang H, Ding H. The Prediction of Recombination Hotspot Based on Automated Machine Learning. J Mol Biol 2024:168653. [PMID: 38871176 DOI: 10.1016/j.jmb.2024.168653] [Received: 02/10/2024] [Revised: 05/12/2024] [Accepted: 06/06/2024] [Indexed: 06/15/2024]
Abstract
Meiotic recombination plays a pivotal role in genetic evolution. Genetic variation induced by recombination is a crucial factor in generating biodiversity and a driving force for evolution. At present, the development of recombination hotspot prediction methods has encountered challenges related to insufficient feature extraction and limited generalization capabilities. This paper focuses on recombination hotspot prediction methods. We explored deep learning-based recombination hotspot prediction and scrutinized the shortcomings of prevalent models in addressing this prediction challenge. To address these deficiencies, an automated machine learning approach was utilized to construct a recombination hotspot prediction model. The model combined sequence information with physicochemical properties by employing TF-IDF-Kmer and DNA composition components to acquire more effective feature data. Experimental results validate the effectiveness of the feature extraction method and the automated machine learning technology used in this study. The final model was validated on three distinct datasets and yielded accuracy rates of 97.14%, 79.71%, and 98.73%, surpassing the current leading models by 2%, 2.56%, and 4%, respectively. In addition, we incorporated tools such as SHAP and AutoGluon to analyze the interpretability of black-box models, delved into the impact of individual features on the results, and investigated the reasons behind misclassified samples. Finally, a recombination hotspot prediction application was built to give researchers easy access to the necessary information and tools. The research outcomes of this paper underscore the enormous potential of automated machine learning methods in gene sequence prediction.
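TF-IDF over k-mers treats each k-mer as a "term" and each sequence as a "document", so k-mers common to most sequences are down-weighted. A minimal sketch of the general technique (k = 3 and the exact weighting formula are assumptions, not necessarily the paper's implementation):

```python
import math
from collections import Counter

def kmers(seq, k=3):
    """All overlapping k-mers of a sequence."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def tfidf_kmer(seqs, k=3):
    """Per-sequence TF-IDF weights over k-mer terms.

    tf  = k-mer count / total k-mers in the sequence
    idf = log(N / number of sequences containing the k-mer)
    """
    docs = [Counter(kmers(s, k)) for s in seqs]
    n = len(seqs)
    df = Counter()
    for d in docs:
        df.update(d.keys())                 # document frequency per k-mer
    out = []
    for d in docs:
        total = sum(d.values())
        out.append({t: (c / total) * math.log(n / df[t])
                    for t, c in d.items()})
    return out
```

A k-mer present in every sequence gets idf = log(1) = 0 and thus contributes nothing, which is exactly the discriminative-feature behaviour the abstract relies on.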
Affiliation(s)
- Dong-Xin Ye, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Jun-Wen Yu, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Rui Li, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Yu-Duo Hao, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Tian-Yu Wang, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Hui Yang, Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou, Zhejiang, China
- Hui Ding, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China

49
Liang B, Qin H, Nong X, Zhang X. Classification of Ameloblastoma, Periapical Cyst, and Chronic Suppurative Osteomyelitis with Semi-Supervised Learning: The WaveletFusion-ViT Model Approach. Bioengineering (Basel) 2024; 11:571. [PMID: 38927807 PMCID: PMC11200596 DOI: 10.3390/bioengineering11060571] [Received: 04/26/2024] [Revised: 05/31/2024] [Accepted: 06/03/2024] [Indexed: 06/28/2024]
Abstract
Ameloblastoma (AM), periapical cyst (PC), and chronic suppurative osteomyelitis (CSO) are prevalent maxillofacial diseases with similar imaging characteristics but different treatments, making preoperative differential diagnosis crucial. Existing deep learning methods for diagnosis often require manual delineation to tag the regions of interest (ROIs), which poses challenges in practical application. We propose a new model, Wavelet Extraction and Fusion Module with Vision Transformer (WaveletFusion-ViT), for automatic diagnosis using CBCT panoramic images. In this study, 539 samples containing healthy (n = 154), AM (n = 181), PC (n = 102), and CSO (n = 102) cases were acquired by CBCT for classification, with an additional 2000 healthy samples for pre-training the domain-adaptive network (DAN). The WaveletFusion-ViT model was initialized with pre-trained weights obtained from the DAN and further trained using semi-supervised learning (SSL) methods. After five-fold cross-validation, the model achieved average sensitivity, specificity, accuracy, and AUC scores of 79.60%, 94.48%, 91.47%, and 0.942, respectively. Remarkably, our method achieved 91.47% accuracy using less than 20% labeled samples, surpassing the fully supervised approach's accuracy of 89.05%. Despite these promising results, this study's limitations include a low number of CSO cases and a relatively lower accuracy for this condition, which should be addressed in future research. This research is regarded as an innovative approach, as it deviates from the fully supervised learning paradigm typically employed in previous studies. The WaveletFusion-ViT model combines SSL methods to diagnose three types of CBCT panoramic images effectively using only a small portion of labeled data.
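The wavelet extraction step can be illustrated with a single level of the 2-D Haar transform, the simplest wavelet decomposition; the paper's actual wavelet choice is not specified here, so Haar is an assumption:

```python
def haar_2d(img):
    """One level of the 2-D Haar transform.

    img: list of lists with even height and width. Returns the four
    sub-bands (LL, LH, HL, HH): local average plus horizontal, vertical,
    and diagonal detail, each at half resolution.
    """
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4   # local average
            LH[i // 2][j // 2] = (a - b + c - d) / 4   # horizontal detail
            HL[i // 2][j // 2] = (a + b - c - d) / 4   # vertical detail
            HH[i // 2][j // 2] = (a - b - c + d) / 4   # diagonal detail
    return LL, LH, HL, HH
```

Feeding such sub-bands to a ViT alongside the raw image is one way a "wavelet extraction and fusion" module can expose edge and texture information explicitly.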
Affiliation(s)
- Bohui Liang, School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China
- Hongna Qin, School of Information and Management, Guangxi Medical University, Nanning 530021, China
- Xiaolin Nong, College & Hospital of Stomatology, Guangxi Medical University, Nanning 530021, China
- Xuejun Zhang, School of Computer, Electronics and Information, Guangxi University, Nanning 530004, China

50
Robson B, Cooper R. Glass Box and Black Box Machine Learning Approaches to Exploit Compositional Descriptors of Molecules in Drug Discovery and Aid the Medicinal Chemist. ChemMedChem 2024:e202400169. [PMID: 38837320 DOI: 10.1002/cmdc.202400169] [Received: 03/03/2024] [Revised: 05/29/2024] [Accepted: 06/03/2024] [Indexed: 06/07/2024]
Abstract
The synthetic medicinal chemist plays a vital role in drug discovery. Today there are AI tools to guide next syntheses, but many are "Black Boxes" (BB): one learns little more than the prediction made. There are now also AI methods emphasizing visibility and "explainability" (thus explainable AI, or XAI) that could help when "compositional data" are used, but they often still start from seemingly arbitrary learned weights and lack familiar probabilistic measures based on observation and counting from the outset. If probabilistic methods were used in a complementary way with BB methods and demonstrated comparable predictive power, they would provide guidelines about which groups to include and avoid in next syntheses, and would quantify the relationships in probabilistic terms. These points are demonstrated by a blind-test comparison of two main types of BB method and a probabilistic "Glass Box" (GB) method that is new outside of medicine but appears well suited to the above. Because many probabilities can be involved, the emphasis is on the predictive power of its simplest explanatory models. There are usually more inactive compounds by orders of magnitude, which is often a problem for machine learning methods. However, the approaches used here appear to work well for such "real-world data".
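The "Glass Box" spirit of weights that are observed frequencies rather than opaque learned parameters can be illustrated with a count-based, naive-Bayes-style classifier over molecular groups. This sketch is not the authors' GB method, and the group names below are hypothetical:

```python
import math
from collections import defaultdict

def train_counts(examples):
    """Tally class and per-group counts from (groups, label) pairs.

    Every 'weight' the model later uses is a directly observable count,
    so each term in a score can be read as an interpretable frequency.
    """
    class_n = defaultdict(int)
    feat_n = defaultdict(lambda: defaultdict(int))
    for groups, label in examples:
        class_n[label] += 1
        for g in groups:
            feat_n[label][g] += 1
    return class_n, feat_n

def score(groups, class_n, feat_n):
    """Log-probability score per class; returns (best label, all scores)."""
    total = sum(class_n.values())
    scores = {}
    for label, n in class_n.items():
        s = math.log(n / total)                       # class prior
        for g in groups:
            # Add-one (Laplace) smoothing for groups unseen in a class.
            s += math.log((feat_n[label][g] + 1) / (n + 2))
        scores[label] = s
    return max(scores, key=scores.get), scores
```

Because each term is log(count ratio), the contribution of every group to a prediction can be reported to the chemist directly, which is the kind of guidance about groups to include or avoid that the abstract describes.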
Affiliation(s)
- Barry Robson, Ingine Inc., 2723 Rocklyn Road, Cleveland, OH-44122, USA; The Dirac Foundation, c/o The Academy Partnership Ltd., Windrush Park, Witney, OX2929, UK
- Richard Cooper, Oxford Drug Design, Oxford Centre for Innovation, New Rd, Oxford, OX1 3TA, UK; Department of Chemistry, 12 Mansfield Road, Oxford, OX1 1BY, UK