1
Warren SL, Khan DM, Moustafa AA. Assistive tools for classifying neurological disorders using fMRI and deep learning: A guide and example. Brain Behav 2024; 14:e3554. [PMID: 38841732] [PMCID: PMC11154821] [DOI: 10.1002/brb3.3554]
Abstract
BACKGROUND Deep-learning (DL) methods are rapidly changing the way researchers classify neurological disorders. For example, combining functional magnetic resonance imaging (fMRI) and DL has helped researchers identify functional biomarkers of neurological disorders (e.g., brain activation and connectivity) and pilot innovative diagnostic models. However, the knowledge required to perform DL analyses is often domain-specific and is not widely taught in the brain sciences (e.g., psychology, neuroscience, and cognitive science). Conversely, neurological diagnoses and neuroimaging training (e.g., fMRI) are largely restricted to the brain and medical sciences. In turn, these disciplinary knowledge barriers and distinct specializations can act as hurdles that prevent the combination of fMRI and DL pipelines. The complexity of fMRI and DL methods also hinders their clinical adoption and generalization to real-world diagnoses. For example, most current models are not designed for clinical settings or use by nonspecialized populations such as students, clinicians, and healthcare workers. Accordingly, there is a growing area of assistive tools (e.g., software and programming packages) that aim to streamline and increase the accessibility of fMRI and DL pipelines for the diagnoses of neurological disorders. OBJECTIVES AND METHODS In this study, we present an introductory guide to some popular DL and fMRI assistive tools. We also create an example autism spectrum disorder (ASD) classification model using assistive tools (e.g., Optuna, GIFT, and the ABIDE preprocessed repository), fMRI, and a convolutional neural network. RESULTS In turn, we provide researchers with a guide to assistive tools and give an example of a streamlined fMRI and DL pipeline. CONCLUSIONS We are confident that this study can help more researchers enter the field and create accessible fMRI and deep-learning diagnostic models for neurological disorders.
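For readers who want a concrete feel for how an assistive tool such as Optuna streamlines hyperparameter search, the sketch below tunes the learning rate and dropout of a toy CNN classifier on placeholder tensors. The network, data shapes, and search ranges are illustrative assumptions and do not reproduce the authors' ASD pipeline.

```python
# Hypothetical sketch: Optuna tuning a small CNN classifier (not the authors' model).
import optuna
import torch
import torch.nn as nn

X = torch.randn(64, 1, 32, 32)            # placeholder inputs standing in for fMRI-derived maps
y = torch.randint(0, 2, (64,))            # placeholder binary labels (e.g., ASD vs. control)

def objective(trial):
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    dropout = trial.suggest_float("dropout", 0.1, 0.5)
    model = nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Dropout(dropout), nn.Linear(8 * 16 * 16, 2),
    )
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(20):                    # tiny training loop, purely for illustration
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()                     # Optuna minimizes the returned training loss

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=10)
print(study.best_params)
```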
Affiliation(s)
- Samuel L. Warren: Faculty of Society and Design, School of Psychology, Bond University, Gold Coast, Queensland, Australia
- Danish M. Khan: Department of Electronic Engineering, NED University of Engineering & Technology, Karachi, Sindh, Pakistan
- Ahmed A. Moustafa: Faculty of Society and Design, School of Psychology, Bond University, Gold Coast, Queensland, Australia; The Faculty of Health Sciences, Department of Human Anatomy and Physiology, University of Johannesburg, Auckland Park, South Africa
2
Artemenko NV, Genaev MA, Epifanov RUI, Komyshev EG, Kruchinina YV, Koval VS, Goncharov NP, Afonnikov DA. Image-based classification of wheat spikes by glume pubescence using convolutional neural networks. Front Plant Sci 2024; 14:1336192. [PMID: 38283969] [PMCID: PMC10811101] [DOI: 10.3389/fpls.2023.1336192]
Abstract
Introduction Pubescence is an important phenotypic trait observed in both vegetative and generative plant organs. Pubescent plants demonstrate increased resistance to various environmental stresses such as drought, low temperatures, and pests. It serves as a significant morphological marker and aids in selecting stress-resistant cultivars, particularly in wheat. In wheat, pubescence is visible on leaves, leaf sheath, glumes and nodes. Regarding glumes, the presence of pubescence plays a pivotal role in its classification. It supplements other spike characteristics, aiding in distinguishing between different varieties within the wheat species. The determination of pubescence typically involves visual analysis by an expert. However, methods without the use of a binocular loupe tend to be subjective, while employing additional equipment is labor-intensive. This paper proposes an integrated approach to determine glume pubescence presence in spike images captured under laboratory conditions using a digital camera and convolutional neural networks. Methods Initially, image segmentation is conducted to extract the contour of the spike body, followed by cropping of the spike images to an equal size. These images are then classified based on glume pubescence (pubescent/glabrous) using various convolutional neural network architectures (ResNet-18, EfficientNet-B0, and EfficientNet-B1). The networks were trained and tested on a dataset comprising 9,719 spike images. Results For segmentation, the U-Net model with an EfficientNet-B1 encoder was chosen, achieving a segmentation accuracy of IoU = 0.947 for the spike body and 0.777 for awns. The classification model for glume pubescence with the highest performance utilized the EfficientNet-B1 architecture. On the test sample, the model exhibited prediction accuracy parameters of F1 = 0.85 and AUC = 0.96, while on the holdout sample it showed F1 = 0.84 and AUC = 0.89. Additionally, the study investigated the relationship between image scale, artificial distortions, and model prediction performance, revealing that higher magnification and smaller distortions yielded a more accurate prediction of glume pubescence.
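As a rough sketch of the segmentation stage described above (a U-Net with an EfficientNet-B1 encoder), the snippet below builds such a model with the segmentation_models_pytorch package; the input size, channel count, and two-class output are assumptions for illustration rather than the authors' training configuration.

```python
# Minimal sketch of a U-Net with an EfficientNet-B1 encoder (shapes are illustrative).
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="efficientnet-b1",    # encoder architecture reported above
    encoder_weights="imagenet",        # start from ImageNet-pretrained weights
    in_channels=3,                     # RGB spike photographs
    classes=2,                         # e.g., one mask for the spike body, one for awns
)

images = torch.randn(4, 3, 256, 256)   # placeholder batch of cropped spike images
masks = model(images)                  # (4, 2, 256, 256) per-pixel logits
```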
Affiliation(s)
- Nikita V. Artemenko: Institute of Cytology and Genetics of the Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia; Department of Mathematics and Mechanics, Novosibirsk State University, Novosibirsk, Russia
- Mikhail A. Genaev: Institute of Cytology and Genetics of the Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia; Kurchatov Center for Genome Research, Institute of Cytology and Genetics of the Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia
- Rostislav UI. Epifanov: Department of Mathematics and Mechanics, Novosibirsk State University, Novosibirsk, Russia
- Evgeny G. Komyshev: Institute of Cytology and Genetics of the Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia
- Yulia V. Kruchinina: Institute of Cytology and Genetics of the Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia; Kurchatov Center for Genome Research, Institute of Cytology and Genetics of the Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia
- Vasiliy S. Koval: Institute of Cytology and Genetics of the Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia; Kurchatov Center for Genome Research, Institute of Cytology and Genetics of the Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia
- Nikolay P. Goncharov: Institute of Cytology and Genetics of the Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia
- Dmitry A. Afonnikov: Institute of Cytology and Genetics of the Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia; Department of Mathematics and Mechanics, Novosibirsk State University, Novosibirsk, Russia; Kurchatov Center for Genome Research, Institute of Cytology and Genetics of the Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia
3
Siddiqui F, Aslam D, Tanveer K, Soudy M. The Role of Artificial Intelligence and Machine Learning in Autoimmune Disorders. Studies in Computational Intelligence 2024:61-75. [DOI: 10.1007/978-981-99-9029-0_3]
4
Feng J, Yang K, Liu X, Song M, Zhan P, Zhang M, Chen J, Liu J. Machine learning: a powerful tool for identifying key microbial agents associated with specific cancer types. PeerJ 2023; 11:e16304. [PMID: 37901464] [PMCID: PMC10601900] [DOI: 10.7717/peerj.16304]
Abstract
Machine learning (ML) includes a broad class of computer programs that improve with experience and shows unique strengths in performing tasks such as clustering, classification, and regression. Over the past decade, microbial communities have been implicated in influencing the onset, progression, metastasis, and therapeutic response of multiple cancers. Host-microbe interaction may be a physiological pathway contributing to cancer development. With the accumulation of a large amount of high-throughput data, ML has been successfully applied to the study of human cancer microbiomics in an attempt to reveal the complex mechanisms behind cancer. In this review, we begin with a brief overview of the data sources included in cancer microbiomics studies. We then briefly introduce the characteristics of ML algorithms. Next, we review the progress of ML applications in cancer microbiomics. Finally, we highlight the challenges and future prospects facing ML in cancer microbiomics. On this basis, we conclude that the development of cancer microbiomics cannot be achieved without ML, and that ML can be used to develop tumor-targeting microbial therapies, ultimately contributing to personalized and precision medicine.
Affiliation(s)
- Jia Feng: Department of Laboratory Medicine, The Affiliated Hospital of Southwest Medical University, Sichuan Province Engineering Technology Research Center of Molecular Diagnosis of Clinical Diseases, Molecular Diagnosis of Clinical Diseases Key Laboratory of Luzhou, Sichuan, China
- Kailan Yang: Department of Laboratory Medicine, The Affiliated Hospital of Southwest Medical University, Sichuan Province Engineering Technology Research Center of Molecular Diagnosis of Clinical Diseases, Molecular Diagnosis of Clinical Diseases Key Laboratory of Luzhou, Sichuan, China
- Xuexue Liu: Department of Laboratory Medicine, The Affiliated Hospital of Southwest Medical University, Sichuan Province Engineering Technology Research Center of Molecular Diagnosis of Clinical Diseases, Molecular Diagnosis of Clinical Diseases Key Laboratory of Luzhou, Sichuan, China
- Min Song: Department of Laboratory Medicine, The Affiliated Hospital of Southwest Medical University, Sichuan Province Engineering Technology Research Center of Molecular Diagnosis of Clinical Diseases, Molecular Diagnosis of Clinical Diseases Key Laboratory of Luzhou, Sichuan, China
- Ping Zhan: Department of Obstetrics, The Affiliated Hospital of Southwest Medical University, Luzhou, Sichuan, China
- Mi Zhang: Department of Laboratory Medicine, The Affiliated Hospital of Southwest Medical University, Sichuan Province Engineering Technology Research Center of Molecular Diagnosis of Clinical Diseases, Molecular Diagnosis of Clinical Diseases Key Laboratory of Luzhou, Sichuan, China
- Jinsong Chen: Department of Laboratory Medicine, The Affiliated Hospital of Southwest Medical University, Sichuan Province Engineering Technology Research Center of Molecular Diagnosis of Clinical Diseases, Molecular Diagnosis of Clinical Diseases Key Laboratory of Luzhou, Sichuan, China
- Jinbo Liu: Department of Laboratory Medicine, The Affiliated Hospital of Southwest Medical University, Sichuan Province Engineering Technology Research Center of Molecular Diagnosis of Clinical Diseases, Molecular Diagnosis of Clinical Diseases Key Laboratory of Luzhou, Sichuan, China
5
Liu Y, Mazumdar S, Bath PA. An unsupervised learning approach to diagnosing Alzheimer's disease using brain magnetic resonance imaging scans. Int J Med Inform 2023; 173:105027. [PMID: 36921480] [DOI: 10.1016/j.ijmedinf.2023.105027]
Abstract
BACKGROUND Alzheimer's disease (AD) is the most common cause of dementia, characterised by behavioural and cognitive impairment. Due to the lack of effectiveness of manual diagnosis by doctors, machine learning is now being applied to diagnose AD in many recent studies. Most research developing machine learning algorithms to diagnose AD uses supervised learning to classify magnetic resonance imaging (MRI) scans. However, supervised learning requires a considerable volume of labelled data and MRI scans are difficult to label. OBJECTIVE This study applied a statistical method and unsupervised learning methods to discriminate between scans from cognitively normal (CN) people and people with AD using a limited number of labelled structural MRI scans. METHODS We used two-sample t-tests to detect the AD-relevant regions, and then employed an unsupervised learning neural network to extract features from the regions. Finally, a clustering algorithm was implemented to discriminate between CN and AD data based on the extracted features. The approach was tested on baseline brain structural MRI scans from 429 individuals from the Alzheimer's Disease Neuroimaging Initiative (ADNI), of which 231 were CN and 198 had AD. RESULTS The abnormal regions around the lower parts of the limbic system were indicated as AD-relevant regions based on the two-sample t-test (p < 0.001), and the proposed method yielded an accuracy of 0.84 for discriminating between CN and AD. CONCLUSION The study combined statistical and unsupervised learning methods to identify scans of people with AD. This method can detect AD-relevant regions and could be used to accurately diagnose AD; it does not require large numbers of labelled MRI scans. Our research could help in the automatic diagnosis of AD and provide a basis for diagnosing stable mild cognitive impairment (stable MCI) and progressive mild cognitive impairment (progressive MCI).
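A toy version of the screening-plus-clustering idea, assuming scans have already been flattened into voxel arrays: a voxel-wise two-sample t-test selects candidate AD-relevant features and k-means then separates the two groups without using the labels. The array shapes are placeholders, and the simple feature selection stands in for the autoencoder used in the study.

```python
# Toy sketch: t-test voxel screening followed by unsupervised clustering (not the authors' code).
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
cn = rng.normal(0.0, 1.0, size=(231, 5000))          # placeholder: 231 CN scans, 5000 voxels
ad = rng.normal(0.3, 1.0, size=(198, 5000))          # placeholder: 198 AD scans with a mean shift

t, p = stats.ttest_ind(cn, ad, axis=0)                # voxel-wise two-sample t-test
relevant = p < 0.001                                  # AD-relevant voxels (threshold from the paper)

X = np.vstack([cn, ad])[:, relevant]                  # keep only the relevant voxels
labels = np.array([0] * len(cn) + [1] * len(ad))
pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Cluster IDs are arbitrary, so score both assignments and keep the better one.
acc = max(accuracy_score(labels, pred), accuracy_score(labels, 1 - pred))
print(f"clustering accuracy: {acc:.2f}")
```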
Affiliation(s)
- Yuyang Liu: Information School, University of Sheffield, 211 Portobello, Sheffield S1 4DP, UK
- Suvodeep Mazumdar: Information School, University of Sheffield, 211 Portobello, Sheffield S1 4DP, UK
- Peter A Bath: Information School, University of Sheffield, 211 Portobello, Sheffield S1 4DP, UK
6
Georgiadou E, Bougias H, Leandrou S, Stogiannos N. Radiomics for Alzheimer's Disease: Fundamental Principles and Clinical Applications. Adv Exp Med Biol 2023; 1424:297-311. [PMID: 37486507] [DOI: 10.1007/978-3-031-31982-2_34]
Abstract
Alzheimer's disease is a neurodegenerative disease with a huge impact on people's quality of life, life expectancy, and morbidity. The ongoing prevalence of the disease, in conjunction with an increased financial burden to healthcare services, necessitates the development of new technologies to be employed in this field. Hence, advanced computational methods have been developed to facilitate early and accurate diagnosis of the disease and improve all health outcomes. Artificial intelligence is now deeply involved in the fight against this disease, with many clinical applications in the field of medical imaging. Deep learning approaches have been tested for use in this domain, while radiomics, an emerging quantitative method, is already being evaluated for use in various medical imaging modalities. This chapter aims to provide insight into the fundamental principles behind radiomics, discuss the most common techniques alongside their strengths and weaknesses, and suggest ways forward for future research standardization and reproducibility.
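For orientation only, a radiomic feature extraction call with the PyRadiomics package typically has the shape shown below; the file paths, enabled feature classes, and default settings are assumptions for illustration and are not taken from this chapter.

```python
# Hypothetical sketch of radiomic feature extraction with PyRadiomics (paths are placeholders).
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")   # intensity statistics
extractor.enableFeatureClassByName("glcm")         # gray-level co-occurrence texture features

# The image and mask would normally be co-registered volumes (e.g., NIfTI files).
features = extractor.execute("subject01_T1.nii.gz", "subject01_hippocampus_mask.nii.gz")
for name, value in features.items():
    if name.startswith("original_"):
        print(name, value)
```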
Affiliation(s)
- Eleni Georgiadou: Department of Radiology, Metaxa Anticancer Hospital, Piraeus, Greece
- Haralabos Bougias: Department of Clinical Radiology, University Hospital of Ioannina, Ioannina, Greece
- Stephanos Leandrou: Department of Health Sciences, School of Sciences, European University Cyprus, Engomi, Cyprus
- Nikolaos Stogiannos: Discipline of Medical Imaging and Radiation Therapy, University College Cork, Cork, Ireland; Division of Midwifery & Radiography, City, University of London, London, UK; Medical Imaging Department, Corfu General Hospital, Corfu, Greece
7
Guo J, Cao W, Nie B, Qin Q. Unsupervised Learning Composite Network to Reduce Training Cost of Deep Learning Model for Colorectal Cancer Diagnosis. IEEE J Transl Eng Health Med 2022; 11:54-59. [PMID: 36544891] [PMCID: PMC9762730] [DOI: 10.1109/jtehm.2022.3224021]
Abstract
Deep learning facilitates complex medical data analysis and is increasingly being explored in colorectal cancer diagnostics. However, the training cost of the deep learning model limits its real-world medical utility. In this study, we present a composite network that combines deep learning and an unsupervised K-means clustering algorithm (RK-net) for automatic processing of medical images. RK-net was more efficient in image refinement compared with manual screening and annotation. The training of a deep learning model for colorectal cancer diagnosis was accelerated twofold when RK-net-processed images were used, and training loss and accuracy also improved. RK-net could be useful for refining the ever-expanding quantity of medical images and assisting in the subsequent construction of artificial intelligence models.
Affiliation(s)
- Jirui Guo: Department of Colorectal Surgery, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou 510655, China
- Wuteng Cao: Department of Radiology, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou 510655, China
- Bairun Nie: School of Electrical, Computer and Telecommunications Engineering, University of Wollongong, Wollongong, NSW 2522, Australia
- Qiyuan Qin: Department of Colorectal Surgery, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou 510655, China
8
Mahbod A, Schaefer G, Dorffner G, Hatamikia S, Ecker R, Ellinger I. A dual decoder U-Net-based model for nuclei instance segmentation in hematoxylin and eosin-stained histological images. Front Med (Lausanne) 2022; 9:978146. [PMID: 36438040] [PMCID: PMC9691672] [DOI: 10.3389/fmed.2022.978146]
Abstract
Even in the era of precision medicine, with various molecular tests based on omics technologies available to improve the diagnosis process, microscopic analysis of images derived from stained tissue sections remains crucial for diagnostic and treatment decisions. Among other cellular features, both nuclei number and shape provide essential diagnostic information. With the advent of digital pathology and emerging computerized methods to analyze the digitized images, nuclei detection, their instance segmentation and classification can be performed automatically. These computerized methods support human experts and allow for faster and more objective image analysis. While methods ranging from conventional image processing techniques to machine learning-based algorithms have been proposed, supervised convolutional neural network (CNN)-based techniques have delivered the best results. In this paper, we propose a CNN-based dual decoder U-Net-based model to perform nuclei instance segmentation in hematoxylin and eosin (H&E)-stained histological images. While the encoder path of the model is developed to perform standard feature extraction, the two decoder heads are designed to predict the foreground and distance maps of all nuclei. The outputs of the two decoder branches are then merged through a watershed algorithm, followed by post-processing refinements to generate the final instance segmentation results. Moreover, to additionally perform nuclei classification, we develop an independent U-Net-based model to classify the nuclei predicted by the dual decoder model. When applied to three publicly available datasets, our method achieves excellent segmentation performance, leading to average panoptic quality values of 50.8%, 51.3%, and 62.1% for the CryoNuSeg, NuInsSeg, and MoNuSAC datasets, respectively. Moreover, our model is the top-ranked method in the MoNuSAC post-challenge leaderboard.
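The merging step described above, combining a predicted foreground map and a predicted distance map through a watershed transform, can be approximated as in the sketch below; the random inputs, threshold, and seed-detection settings are placeholders rather than the paper's post-processing parameters.

```python
# Rough sketch of watershed-based merging of two decoder outputs (inputs are placeholders).
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

rng = np.random.default_rng(0)
foreground_prob = rng.random((256, 256))    # stand-in for the foreground decoder output
distance_map = rng.random((256, 256))       # stand-in for the distance decoder output

foreground = foreground_prob > 0.5          # binarize the foreground (illustrative threshold)

# Seeds: local maxima of the predicted distance map, restricted to the foreground.
peaks = peak_local_max(distance_map, labels=foreground.astype(int), min_distance=5)
markers = np.zeros(distance_map.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

# Flood from the seeds over the inverted distance map to split touching nuclei.
instances = watershed(-distance_map, markers, mask=foreground)
print("nuclei instances:", instances.max())
```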
Affiliation(s)
- Amirreza Mahbod: Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, Austria; Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, Austria
- Gerald Schaefer: Department of Computer Science, Loughborough University, Loughborough, United Kingdom
- Georg Dorffner: Institute of Artificial Intelligence, Medical University of Vienna, Vienna, Austria
- Sepideh Hatamikia: Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, Austria; Austrian Center for Medical Innovation and Technology, Wiener Neustadt, Austria
- Rupert Ecker: Department of Research and Development, TissueGnostics GmbH, Vienna, Austria
- Isabella Ellinger: Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, Austria
9
An automated unsupervised deep learning-based approach for diabetic retinopathy detection. Med Biol Eng Comput 2022; 60:3635-3654. [DOI: 10.1007/s11517-022-02688-9]
10
Paderno A, Gennarini F, Sordi A, Montenegro C, Lancini D, Villani FP, Moccia S, Piazza C. Artificial intelligence in clinical endoscopy: Insights in the field of videomics. Front Surg 2022; 9:933297. [PMID: 36171813] [PMCID: PMC9510389] [DOI: 10.3389/fsurg.2022.933297]
Abstract
Artificial intelligence is being increasingly seen as a useful tool in medicine. Specifically, these technologies have the objective to extract insights from complex datasets that cannot easily be analyzed by conventional statistical methods. While promising results have been obtained for various -omics datasets, radiological images, and histopathologic slides, analysis of videoendoscopic frames still represents a major challenge. In this context, videomics represents a burgeoning field wherein several methods of computer vision are systematically used to organize unstructured data from frames obtained during diagnostic videoendoscopy. Recent studies have focused on five broad tasks with increasing complexity: quality assessment of endoscopic images, classification of pathologic and nonpathologic frames, detection of lesions inside frames, segmentation of pathologic lesions, and in-depth characterization of neoplastic lesions. Herein, we present a broad overview of the field, with a focus on conceptual key points and future perspectives.
Affiliation(s)
- Alberto Paderno (correspondence): Unit of Otorhinolaryngology—Head and Neck Surgery, ASST Spedali Civili of Brescia, Brescia, Italy; Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, School of Medicine, University of Brescia, Brescia, Italy
- Francesca Gennarini: Unit of Otorhinolaryngology—Head and Neck Surgery, ASST Spedali Civili of Brescia, Brescia, Italy; Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, School of Medicine, University of Brescia, Brescia, Italy
- Alessandra Sordi: Unit of Otorhinolaryngology—Head and Neck Surgery, ASST Spedali Civili of Brescia, Brescia, Italy; Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, School of Medicine, University of Brescia, Brescia, Italy
- Claudia Montenegro: Unit of Otorhinolaryngology—Head and Neck Surgery, ASST Spedali Civili of Brescia, Brescia, Italy; Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, School of Medicine, University of Brescia, Brescia, Italy
- Davide Lancini: Unit of Otorhinolaryngology—Head and Neck Surgery, ASST Spedali Civili of Brescia, Brescia, Italy
- Francesca Pia Villani: The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy; Department of Excellence in Robotics and AI, Scuola Superiore Sant’Anna, Pisa, Italy
- Sara Moccia: The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy; Department of Excellence in Robotics and AI, Scuola Superiore Sant’Anna, Pisa, Italy
- Cesare Piazza: Unit of Otorhinolaryngology—Head and Neck Surgery, ASST Spedali Civili of Brescia, Brescia, Italy; Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, School of Medicine, University of Brescia, Brescia, Italy
11
Park S, Kim G, Oh Y, Seo JB, Lee SM, Kim JH, Moon S, Lim JK, Park CM, Ye JC. Self-evolving vision transformer for chest X-ray diagnosis through knowledge distillation. Nat Commun 2022; 13:3848. [PMID: 35789159] [PMCID: PMC9252561] [DOI: 10.1038/s41467-022-31514-x]
Abstract
Although deep learning-based computer-aided diagnosis systems have recently achieved expert-level performance, developing a robust model requires large, high-quality data with annotations that are expensive to obtain. This situation poses a conundrum in that annually collected chest X-rays cannot be utilized due to the absence of labels, especially in deprived areas. In this study, we present a framework named distillation for self-supervision and self-train learning (DISTL) inspired by the learning process of the radiologists, which can improve the performance of vision transformer simultaneously with self-supervision and self-training through knowledge distillation. In external validation from three hospitals for diagnosis of tuberculosis, pneumothorax, and COVID-19, DISTL offers gradually improved performance as the amount of unlabeled data increases, even better than the fully supervised model with the same amount of labeled data. We additionally show that the model obtained with DISTL is robust to various real-world nuisances, offering better applicability in clinical settings.
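To make the distillation idea concrete, the fragment below applies a generic teacher-to-student soft-label loss on unlabeled inputs; the linear stand-in models, temperature, and random data are assumptions and do not reproduce the DISTL training schedule.

```python
# Generic knowledge-distillation step on unlabeled data (models and temperature are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Linear(128, 3)   # stand-in for the teacher network
student = nn.Linear(128, 3)   # stand-in for the vision transformer being trained
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)
T = 2.0                       # softening temperature

x = torch.randn(16, 128)      # placeholder features from unlabeled chest X-rays
with torch.no_grad():
    teacher_probs = F.softmax(teacher(x) / T, dim=-1)   # soft pseudo-targets

student_log_probs = F.log_softmax(student(x) / T, dim=-1)
loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * T * T

optimizer.zero_grad()
loss.backward()
optimizer.step()
```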
Affiliation(s)
- Sangjoon Park: Department of Bio and Brain Engineering, KAIST, Daejeon, Korea
- Gwanghyun Kim: Department of Bio and Brain Engineering, KAIST, Daejeon, Korea
- Yujin Oh: Department of Bio and Brain Engineering, KAIST, Daejeon, Korea
- Joon Beom Seo: Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Sang Min Lee: Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Jin Hwan Kim: College of Medicine, Chungnam National University, Daejeon, South Korea
- Sungjun Moon: College of Medicine, Yeungnam University, Daegu, South Korea
- Jae-Kwang Lim: School of Medicine, Kyungpook National University, Daegu, South Korea
- Chang Min Park: College of Medicine, Seoul National University, Seoul, South Korea
- Jong Chul Ye: Department of Bio and Brain Engineering, KAIST, Daejeon, Korea; Kim Jaechul Graduate School of AI, KAIST, Daejeon, Korea
12
Barragán-Montero A, Bibal A, Dastarac MH, Draguet C, Valdés G, Nguyen D, Willems S, Vandewinckele L, Holmström M, Löfman F, Souris K, Sterpin E, Lee JA. Towards a safe and efficient clinical implementation of machine learning in radiation oncology by exploring model interpretability, explainability and data-model dependency. Phys Med Biol 2022; 67:10.1088/1361-6560/ac678a. [PMID: 35421855] [PMCID: PMC9870296] [DOI: 10.1088/1361-6560/ac678a]
Abstract
The interest in machine learning (ML) has grown tremendously in recent years, partly due to the performance leap that occurred with new techniques of deep learning, convolutional neural networks for images, increased computational power, and wider availability of large datasets. Most fields of medicine follow that popular trend and, notably, radiation oncology is one of those that are at the forefront, with already a long tradition in using digital images and fully computerized workflows. ML models are driven by data, and in contrast with many statistical or physical models, they can be very large and complex, with countless generic parameters. This inevitably raises two questions, namely, the tight dependence between the models and the datasets that feed them, and the interpretability of the models, which scales with their complexity. Any problems in the data used to train the model will later be reflected in its performance. This, together with the low interpretability of ML models, makes their implementation into the clinical workflow particularly difficult. Building tools for risk assessment and quality assurance of ML models must then involve two main points: interpretability and data-model dependency. After a joint introduction of both radiation oncology and ML, this paper reviews the main risks and current solutions when applying the latter to workflows in the former. Risks associated with data and models, as well as their interaction, are detailed. Next, the core concepts of interpretability, explainability, and data-model dependency are formally defined and illustrated with examples. Afterwards, a broad discussion goes through key applications of ML in workflows of radiation oncology as well as vendors' perspectives for the clinical implementation of ML.
Affiliation(s)
- Ana Barragán-Montero: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Adrien Bibal: PReCISE, NaDI Institute, Faculty of Computer Science, UNamur and CENTAL, ILC, UCLouvain, Belgium
- Margerie Huet Dastarac: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Camille Draguet: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium; Department of Oncology, Laboratory of Experimental Radiotherapy, KU Leuven, Belgium
- Gilmer Valdés: Department of Radiation Oncology, Department of Epidemiology and Biostatistics, University of California, San Francisco, United States of America
- Dan Nguyen: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, United States of America
- Siri Willems: ESAT/PSI, KU Leuven Belgium & MIRC, UZ Leuven, Belgium
- Kevin Souris: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Edmond Sterpin: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium; Department of Oncology, Laboratory of Experimental Radiotherapy, KU Leuven, Belgium
- John A Lee: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
13
Chopannejad S, Sadoughi F, Bagherzadeh R, Shekarchi S. Predicting major adverse cardiovascular events in acute coronary syndrome: A scoping review of machine learning approaches. Appl Clin Inform 2022; 13:720-740. [PMID: 35617971] [PMCID: PMC9329142] [DOI: 10.1055/a-1863-1589]
Abstract
BACKGROUND Acute coronary syndrome is a leading cause of death worldwide; therefore, it is necessary to predict major adverse cardiovascular events and cardiovascular deaths in patients with acute coronary syndrome to make correct and timely clinical decisions. OBJECTIVE The current review aimed to highlight algorithms and important predictor variables through examining those studies which used machine learning algorithms for predicting major adverse cardiovascular events in patients with acute coronary syndrome. METHODS To identify studies predicting major adverse cardiovascular events in patients with acute coronary syndrome, the Preferred Reporting Items for Scoping Reviews guidelines were followed. PubMed, Embase, Web of Science, Scopus, Springer, and IEEE Xplore databases were searched for articles published between 2005 and 2021. The findings of the studies are presented in the form of a narrative synthesis of evidence. RESULTS According to the results, 14 (63.64%) studies did not perform external validation and only used registry data. The algorithms used in these studies included, inter alia, Logistic Regression, Random Forest, Boosting Ensemble, Non-Boosting Ensemble, Decision Trees, and Naive Bayes. Multiple studies (N=20) achieved a high Area under the ROC Curve between 0.8 and 0.99 in predicting mortality and major adverse cardiovascular events. The predictor variables used in these studies were divided into demographic, clinical, and therapeutic features. However, no study reported the integration of a machine learning model into clinical practice. CONCLUSION Machine learning algorithms rendered acceptable results to predict major adverse cardiovascular events and mortality outcomes in patients with acute coronary syndrome. However, these approaches have never been integrated into clinical practice. Further research is required to develop feasible and effective machine learning prediction models to measure their potentially important implications for optimizing the quality of care in patients with acute coronary syndrome.
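As a minimal illustration of the kind of model and metric these studies report, the snippet below fits a random forest on synthetic tabular predictors and computes the area under the ROC curve; the data and feature counts are invented for demonstration only.

```python
# Illustrative only: random-forest classifier on synthetic tabular data with ROC AUC evaluation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for demographic, clinical, and therapeutic predictors.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.85], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"AUC: {auc:.2f}")   # the reviewed studies reported AUCs of roughly 0.8-0.99
```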
Affiliation(s)
- Sara Chopannejad: Student Research Committee, School of Health Management and Information Sciences, Iran University of Medical Sciences, Tehran, Iran (the Islamic Republic of)
- Farahnaz Sadoughi: School of Health Management and Information Sciences, Iran University of Medical Sciences, Tehran, Iran (the Islamic Republic of)
- Rafat Bagherzadeh: English Language Department, School of Health Management and Information Sciences, Iran University of Medical Sciences, Tehran, Iran (the Islamic Republic of)
- Sakineh Shekarchi: School of Health Management and Information Sciences, Iran University of Medical Sciences, Tehran, Iran (the Islamic Republic of)
15
Ren Z, Zhang Y, Wang S. A Hybrid Framework for Lung Cancer Classification. Electronics 2022; 11:1614. [PMID: 36568860] [PMCID: PMC7613986] [DOI: 10.3390/electronics1010000]
Abstract
Cancer is the second leading cause of death worldwide, and the death rate of lung cancer is much higher than other types of cancers. In recent years, numerous novel computer-aided diagnostic techniques with deep learning have been designed to detect lung cancer in early stages. However, deep learning models are easy to overfit, and the overfitting problem always causes lower performance. To solve this problem of lung cancer classification tasks, we proposed a hybrid framework called LCGANT. Specifically, our framework contains two main parts. The first part is a lung cancer deep convolutional GAN (LCGAN) to generate synthetic lung cancer images. The second part is a regularization enhanced transfer learning model called VGG-DF to classify lung cancer images into three classes. Our framework achieves a result of 99.84% ± 0.156% (accuracy), 99.84% ± 0.153% (precision), 99.84% ± 0.156% (sensitivity), and 99.84% ± 0.156% (F1-score). The result reaches the highest performance of the dataset for the lung cancer classification task. The proposed framework resolves the overfitting problem for lung cancer classification tasks, and it achieves better performance than other state-of-the-art methods.
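A compressed sketch of the transfer-learning half of such a framework: load a pretrained VGG backbone from recent torchvision, freeze its convolutional features, and attach a new three-class head with dropout for regularization. The layer sizes are assumptions and the GAN-based augmentation stage is omitted.

```python
# Sketch of a regularized VGG-based transfer-learning classifier (three lung-tissue classes).
import torch
import torch.nn as nn
from torchvision import models

backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in backbone.features.parameters():
    p.requires_grad = False               # freeze the convolutional feature extractor

backbone.classifier = nn.Sequential(       # new, regularized classification head
    nn.Linear(25088, 512), nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(512, 3),                     # three lung cancer image classes
)

images = torch.randn(2, 3, 224, 224)       # placeholder histopathology patches
logits = backbone(images)
```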
16
Nolde JM, Carnagarin R, Lugo-Gavidia LM, Azzam O, Kiuchi MG, Robinson S, Mian A, Schlaich MP. Autoencoded deep features for semi-automatic, weakly supervised physiological signal labelling. Comput Biol Med 2022; 143:105294. [PMID: 35203038] [DOI: 10.1016/j.compbiomed.2022.105294]
Abstract
BACKGROUND AND AIMS Machine Learning is transforming data processing in medical research and clinical practice. Missing data labels are a common limitation to training Machine Learning models. To overcome missing labels in a large dataset of microneurography recordings, a novel autoencoder-based, semi-supervised, iterative group-labelling methodology was developed. METHODS Autoencoders were systematically optimised to extract features from a dataset of 478,621 signal excerpts from human microneurography recordings. Selected features were clustered with k-means, and randomly selected representations of the corresponding original signals were labelled as valid or non-valid muscle sympathetic nerve activity (MSNA) bursts by an expert rater in an iterative, purifying procedure. A deep neural network was trained based on the fully labelled dataset. RESULTS Three autoencoders, two based on fully connected neural networks and one based on a convolutional neural network, were chosen for feature learning. Iterative clustering followed by labelling of complete clusters resulted in all 478,621 signal peak excerpts being labelled as valid or non-valid within 13 iterations. Neural networks trained with the labelled dataset achieved, in a cross-validation step with a testing dataset not included in training, on average 93.13% accuracy and 91% area under the receiver operating characteristic curve (AUC ROC). DISCUSSION The described labelling procedure enabled efficient labelling of a large dataset of physiological signals based on expert ratings. The procedure based on autoencoders may be broadly applicable to a wide range of datasets without labels that require expert input and may be utilised for Machine Learning applications if weak labels are available.
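The core loop, prelearn a compressed representation with an autoencoder, cluster it with k-means, and let a rater label only the pure clusters, might look roughly as follows; the tiny autoencoder, signal length, and cluster count are placeholders for the procedure described in the paper, and the expert decision is left as a comment.

```python
# Rough sketch of autoencoder feature learning followed by k-means cluster labelling.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

signals = torch.randn(2048, 64)                 # placeholder signal excerpts (length 64)

encoder = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 4))
decoder = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 64))
optimizer = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

for _ in range(200):                            # train the autoencoder on reconstruction loss
    optimizer.zero_grad()
    recon = decoder(encoder(signals))
    loss = nn.functional.mse_loss(recon, signals)
    loss.backward()
    optimizer.step()

features = encoder(signals).detach().numpy()
clusters = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(features)

# In the real procedure an expert inspects a few members per cluster and labels clusters
# that look purely valid or purely non-valid; mixed clusters are re-clustered next iteration.
for c in np.unique(clusters):
    members = np.where(clusters == c)[0]
    sample = np.random.choice(members, size=min(5, len(members)), replace=False)
    # -> show `sample` signals to the rater and record "valid" / "non-valid" / "mixed"
```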
Affiliation(s)
- Janis M Nolde: Dobney Hypertension Centre, Medical School - Royal Perth Hospital Unit, Royal Perth Hospital Research Foundation, The University of Western Australia, Perth, Australia
- Revathy Carnagarin: Dobney Hypertension Centre, Medical School - Royal Perth Hospital Unit, Royal Perth Hospital Research Foundation, The University of Western Australia, Perth, Australia
- Leslie Marisol Lugo-Gavidia: Dobney Hypertension Centre, Medical School - Royal Perth Hospital Unit, Royal Perth Hospital Research Foundation, The University of Western Australia, Perth, Australia
- Omar Azzam: Dobney Hypertension Centre, Medical School - Royal Perth Hospital Unit, Royal Perth Hospital Research Foundation, The University of Western Australia, Perth, Australia
- Márcio Galindo Kiuchi: Dobney Hypertension Centre, Medical School - Royal Perth Hospital Unit, Royal Perth Hospital Research Foundation, The University of Western Australia, Perth, Australia
- Sandi Robinson: Dobney Hypertension Centre, Medical School - Royal Perth Hospital Unit, Royal Perth Hospital Research Foundation, The University of Western Australia, Perth, Australia
- Ajmal Mian: School of Computer Science and Software Engineering, The University of Western Australia, Perth, Australia
- Markus P Schlaich: Dobney Hypertension Centre, Medical School - Royal Perth Hospital Unit, Royal Perth Hospital Research Foundation, The University of Western Australia, Perth, Australia; Departments of Cardiology and Nephrology, Royal Perth Hospital, Perth, Australia; Neurovascular Hypertension & Kidney Disease Laboratory, Baker Heart and Diabetes Institute, Melbourne, Australia
17
Dynamic image clustering from projected coordinates of deep similarity learning. Neural Netw 2022; 152:1-16. [DOI: 10.1016/j.neunet.2022.03.030]
18
Saravi B, Hassel F, Ülkümen S, Zink A, Shavlokhova V, Couillard-Despres S, Boeker M, Obid P, Lang GM. Artificial Intelligence-Driven Prediction Modeling and Decision Making in Spine Surgery Using Hybrid Machine Learning Models. J Pers Med 2022; 12:jpm12040509. [PMID: 35455625] [PMCID: PMC9029065] [DOI: 10.3390/jpm12040509]
Abstract
Healthcare systems worldwide generate vast amounts of data from many different sources. Although of high complexity for a human being, it is essential to determine the patterns and minor variations in the genomic, radiological, laboratory, or clinical data that reliably differentiate phenotypes or allow high predictive accuracy in health-related tasks. Convolutional neural networks (CNN) are increasingly applied to image data for various tasks. Their use for non-imaging data becomes feasible through different modern machine learning techniques, converting non-imaging data into images before inputting them into the CNN model. Considering also that healthcare providers do not solely use one data modality for their decisions, this approach opens the door for multi-input/mixed data models which use a combination of patient information, such as genomic, radiological, and clinical data, to train a hybrid deep learning model. Thus, this reflects the main characteristic of artificial intelligence: simulating natural human behavior. The present review focuses on key advances in machine and deep learning, allowing for multi-perspective pattern recognition across the entire information set of patients in spine surgery. This is the first review of artificial intelligence focusing on hybrid models for deep learning applications in spine surgery, to the best of our knowledge. This is especially interesting as future tools are unlikely to use solely one data modality. The techniques discussed could become important in establishing a new approach to decision-making in spine surgery based on three fundamental pillars: (1) patient-specific, (2) artificial intelligence-driven, (3) integrating multimodal data. The findings reveal promising research that has already taken place to develop multi-input mixed-data hybrid decision-supporting models. Their implementation in spine surgery may hence be only a matter of time.
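The multi-input idea sketched above, one branch for imaging and one for tabular clinical or genomic variables, fused before a shared prediction head, can be written compactly in PyTorch; all dimensions and the toy inputs below are assumptions for illustration.

```python
# Minimal sketch of a mixed-data (image + tabular) hybrid model (dimensions are illustrative).
import torch
import torch.nn as nn

class HybridModel(nn.Module):
    def __init__(self, n_tabular: int, n_classes: int = 2):
        super().__init__()
        self.image_branch = nn.Sequential(            # small CNN for radiological input
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.tabular_branch = nn.Sequential(           # MLP for clinical/genomic variables
            nn.Linear(n_tabular, 16), nn.ReLU(),
        )
        self.head = nn.Linear(8 + 16, n_classes)       # fused prediction head

    def forward(self, image, tabular):
        fused = torch.cat([self.image_branch(image), self.tabular_branch(tabular)], dim=1)
        return self.head(fused)

model = HybridModel(n_tabular=10)
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 10))   # placeholder batch
```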
Affiliation(s)
- Babak Saravi (correspondence): Department of Orthopedics and Trauma Surgery, Medical Center-University of Freiburg, Faculty of Medicine, University of Freiburg, 79108 Freiburg, Germany; Department of Spine Surgery, Loretto Hospital, 79100 Freiburg, Germany; Institute of Experimental Neuroregeneration, Spinal Cord Injury and Tissue Regeneration Center Salzburg (SCI-TReCS), Paracelsus Medical University, 5020 Salzburg, Austria
- Frank Hassel: Department of Spine Surgery, Loretto Hospital, 79100 Freiburg, Germany
- Sara Ülkümen: Department of Orthopedics and Trauma Surgery, Medical Center-University of Freiburg, Faculty of Medicine, University of Freiburg, 79108 Freiburg, Germany; Department of Spine Surgery, Loretto Hospital, 79100 Freiburg, Germany
- Alisia Zink: Department of Spine Surgery, Loretto Hospital, 79100 Freiburg, Germany
- Veronika Shavlokhova: Department of Oral and Maxillofacial Surgery, University Hospital Heidelberg, 69120 Heidelberg, Germany
- Sebastien Couillard-Despres: Institute of Experimental Neuroregeneration, Spinal Cord Injury and Tissue Regeneration Center Salzburg (SCI-TReCS), Paracelsus Medical University, 5020 Salzburg, Austria; Austrian Cluster for Tissue Regeneration, 1200 Vienna, Austria
- Martin Boeker: Intelligence and Informatics in Medicine, Medical Center Rechts der Isar, School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Peter Obid: Department of Orthopedics and Trauma Surgery, Medical Center-University of Freiburg, Faculty of Medicine, University of Freiburg, 79108 Freiburg, Germany
- Gernot Michael Lang: Department of Orthopedics and Trauma Surgery, Medical Center-University of Freiburg, Faculty of Medicine, University of Freiburg, 79108 Freiburg, Germany
20
Nolde JM, Lugo-Gavidia LM, Carnagarin R, Azzam O, Kiuchi MG, Mian A, Schlaich MP. K-means panning - Developing a new standard in automated MSNA signal recognition with a weakly supervised learning approach. Comput Biol Med 2022; 140:105087. [PMID: 34864300] [DOI: 10.1016/j.compbiomed.2021.105087]
Abstract
BACKGROUND Accessibility of labelled datasets is often a key limitation for the application of Machine Learning in clinical research. A novel semi-automated weak-labelling approach based on unsupervised clustering was developed to classify a large dataset of microneurography signals and subsequently used to train a Neural Network to reproduce the labelling process. METHODS Clusters of microneurography signals were created with k-means and then labelled in terms of the validity of the signals contained in each cluster. Only purely positive or negative clusters were labelled, whereas clusters with mixed content were passed on to the next iteration of the algorithm to undergo another cycle of unsupervised clustering and labelling of the clusters. After several iterations of this process, only pure labelled clusters remained, which were used to train a Deep Neural Network. RESULTS Overall, 334,548 individual signal peaks from the integrated data were extracted and more than 99.99% of the data was labelled in six iterations of this novel application of weak labelling with the help of a domain expert. A Deep Neural Network trained based on this dataset achieved consistent accuracies above 95%. DISCUSSION Data extraction and the novel iterative approach of labelling unsupervised clusters enabled creation of a large, labelled dataset combining unsupervised learning and expert ratings of signal peaks on a cluster basis in a time-effective manner. Further research is needed to validate the methodology and employ it on other types of physiologic data for which it may enable efficient generation of large labelled datasets.
Affiliation(s)
- Janis M Nolde: Dobney Hypertension Centre, Medical School - Royal Perth Hospital Unit / Royal Perth Hospital Medical Research Foundation, University of Western Australia, Perth, Australia
- Leslie Marisol Lugo-Gavidia: Dobney Hypertension Centre, Medical School - Royal Perth Hospital Unit / Royal Perth Hospital Medical Research Foundation, University of Western Australia, Perth, Australia
- Revathy Carnagarin: Dobney Hypertension Centre, Medical School - Royal Perth Hospital Unit / Royal Perth Hospital Medical Research Foundation, University of Western Australia, Perth, Australia
- Omar Azzam: Dobney Hypertension Centre, Medical School - Royal Perth Hospital Unit / Royal Perth Hospital Medical Research Foundation, University of Western Australia, Perth, Australia
- Márcio Galindo Kiuchi: Dobney Hypertension Centre, Medical School - Royal Perth Hospital Unit / Royal Perth Hospital Medical Research Foundation, University of Western Australia, Perth, Australia
- Ajmal Mian: School of Computer Science and Software Engineering, The University of Western Australia, Perth, Australia
- Markus P Schlaich: Dobney Hypertension Centre, Medical School - Royal Perth Hospital Unit / Royal Perth Hospital Medical Research Foundation, University of Western Australia, Perth, Australia; Department of Cardiology and Nephrology, Royal Perth Hospital, Perth, Australia; Department of Nephrology, Royal Perth Hospital, Perth, Australia; Neurovascular Hypertension & Kidney Disease Laboratory, Baker Heart and Diabetes Institute, Melbourne, Australia
21
Siddiqui MF, Mouna A, Nicolas G, Rahat SAA, Mitalipova A, Emmanuel N, Tashmatova N. Computational Intelligence: A Step Forward in Cancer Biomarker Discovery and Therapeutic Target Prediction. Computational Intelligence in Oncology 2022:233-250. [DOI: 10.1007/978-981-16-9221-5_14]
22
Javaid A, Shahab O, Adorno W, Fernandes P, May E, Syed S. Machine Learning Predictive Outcomes Modeling in Inflammatory Bowel Diseases. Inflamm Bowel Dis 2021; 28:819-829. [PMID: 34417815] [PMCID: PMC9165557] [DOI: 10.1093/ibd/izab187]
Abstract
There is a rising interest in use of big data approaches to personalize treatment of inflammatory bowel diseases (IBDs) and to predict and prevent outcomes such as disease flares and therapeutic nonresponse. Machine learning (ML) provides an avenue to identify and quantify features across vast quantities of data to produce novel insights in disease management. In this review, we cover current approaches in ML-driven predictive outcomes modeling for IBD and relate how advances in other fields of medicine may be applied to improve future IBD predictive models. Numerous studies have incorporated clinical, laboratory, or omics data to predict significant outcomes in IBD, including hospitalizations, outpatient corticosteroid use, biologic response, and refractory disease after colectomy, among others, with considerable health care dollars saved as a result. Encouraging results in other fields of medicine support efforts to use ML image analysis-including analysis of histopathology, endoscopy, and radiology-to further advance outcome predictions in IBD. Though obstacles to clinical implementation include technical barriers, bias within data sets, and incongruence between limited data sets preventing model validation in larger cohorts, ML-predictive analytics have the potential to transform the clinical management of IBD. Future directions include the development of models that synthesize all aforementioned approaches to produce more robust predictive metrics.
Affiliation(s)
- Aamir Javaid: Division of Pediatric Gastroenterology and Hepatology, Department of Pediatrics, University of Virginia, Charlottesville, VA, USA
- Omer Shahab: Division of Gastroenterology and Hepatology, Department of Medicine, Virginia Commonwealth University, Richmond, VA, USA
- William Adorno: School of Data Science, University of Virginia, Charlottesville, VA, USA
- Philip Fernandes: Division of Pediatric Gastroenterology and Hepatology, Department of Pediatrics, University of Virginia, Charlottesville, VA, USA
- Eve May: Division of Gastroenterology and Hepatology, Department of Pediatrics, Children’s National Hospital, Washington, DC, USA
- Sana Syed: Division of Pediatric Gastroenterology and Hepatology, Department of Pediatrics, University of Virginia, Charlottesville, VA, USA; School of Data Science, University of Virginia, Charlottesville, VA, USA; Correspondence: Sana Syed, MD, MSCR, MSDS, Division of Pediatric Gastroenterology and Hepatology, Department of Pediatrics, University of Virginia, 409 Lane Rd, Room 2035B, Charlottesville, VA 22908, USA
23
Mani M, Magnotta VA, Jacob M. qModeL: A plug-and-play model-based reconstruction for highly accelerated multi-shot diffusion MRI using learned priors. Magn Reson Med 2021; 86:835-851. [PMID: 33759240] [PMCID: PMC8076086] [DOI: 10.1002/mrm.28756]
Abstract
PURPOSE To introduce a joint reconstruction method for highly undersampled multi-shot diffusion-weighted (msDW) scans. METHODS Multi-shot EPI methods enable higher spatial resolution for diffusion MRI, but at the expense of long scan times. Highly accelerated msDW scans are needed to enable their utilization in advanced microstructure studies, which require high q-space coverage. Previously, joint k-q undersampling methods coupled with compressed sensing were shown to enable very high acceleration factors. However, the reconstruction of this data using sparsity priors is challenging and is not suited for multi-shell data. We propose a new reconstruction that recovers images from the combined k-q data jointly. The proposed qModeL reconstruction brings together the advantages of model-based iterative reconstruction and machine learning, extending the idea of plug-and-play algorithms. Specifically, qModeL works by prelearning the signal manifold corresponding to the diffusion measurement space using deep learning. The prelearned manifold prior is incorporated into a model-based reconstruction to provide a voxel-wise regularization along the q-dimension during the joint recovery. Notably, the learning does not require in vivo training data and is derived exclusively from biophysical modeling. Additionally, a plug-and-play total variation denoising provides regularization along the spatial dimension. The proposed framework is tested on k-q undersampled single-shell and multi-shell msDW acquisitions at various acceleration factors. RESULTS The qModeL joint reconstruction is shown to recover DWIs from 8-fold accelerated msDW acquisitions with error less than 5% for both single-shell and multi-shell data. Advanced microstructural analysis performed using the undersampled reconstruction also reports reasonable accuracy. CONCLUSION qModeL enables the joint recovery of highly accelerated multi-shot dMRI utilizing learning-based priors. The biophysically driven approach enables the use of accelerated multi-shot imaging for multi-shell sampling and advanced microstructure studies.
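The plug-and-play structure, alternating a data-consistency gradient step with an off-the-shelf denoiser acting as the prior, can be caricatured in a few lines; the forward model here is a random sampling mask on a synthetic image rather than a multi-shot diffusion acquisition, and the step size and denoiser weight are arbitrary.

```python
# Caricature of a plug-and-play reconstruction loop with a TV denoiser as the prior.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
truth = np.zeros((64, 64)); truth[16:48, 16:48] = 1.0      # synthetic "image"
mask = rng.random(truth.shape) < 0.3                        # random undersampling pattern
measurements = mask * truth                                 # forward model: masked samples

x = np.zeros_like(truth)
step = 1.0
for _ in range(50):
    grad = mask * (mask * x - measurements)                 # gradient of the data-consistency term
    x = x - step * grad                                     # step toward the measurements
    x = denoise_tv_chambolle(x, weight=0.05)                # "plug in" a denoiser as the prior

print("relative error:", np.linalg.norm(x - truth) / np.linalg.norm(truth))
```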
Affiliation(s)
- Merry Mani: Department of Radiology, University of Iowa, Iowa City, Iowa
- Mathews Jacob: Department of Electrical and Computer Engineering, University of Iowa, Iowa City, Iowa