1
Pilehvari S, Morgan Y, Peng W. An analytical review on the use of artificial intelligence and machine learning in diagnosis, prediction, and risk factor analysis of multiple sclerosis. Mult Scler Relat Disord 2024; 89:105761. [PMID: 39018642 DOI: 10.1016/j.msard.2024.105761]
Abstract
Medical research offers potential for predicting diseases such as Multiple Sclerosis (MS). This neurological disorder damages nerve cell sheaths, and treatments focus on symptom relief. Manual MS detection is time-consuming and error-prone. Though MS lesion detection has been studied, limited attention has been paid to clinical analysis and computational risk factor prediction. Artificial intelligence (AI) techniques and Machine Learning (ML) methods offer accurate and effective alternatives for mapping MS progression. However, there are challenges in accessing clinical data and interdisciplinary collaboration. By analyzing 103 papers, we identify the trends, strengths, and weaknesses of AI, ML, and statistical methods applied to MS diagnosis. AI/ML-based approaches are suggested to identify MS risk factors, select significant MS features, and improve diagnostic accuracy, such as Rule-based Fuzzy Logic (RBFL), Adaptive Neuro-Fuzzy Inference Systems (ANFIS), Artificial Neural Networks (ANN), Support Vector Machines (SVM), and Bayesian Networks (BNs). Meanwhile, applications of the Expanded Disability Status Scale (EDSS) and Magnetic Resonance Imaging (MRI) can enhance MS diagnostic accuracy. Some studies tackled the issue of disease progression by examining established risk factors such as obesity, smoking, and education. The performance metrics varied across different aspects of MS studies. Diagnosis: sensitivity ranged from 60% to 98%, specificity from 60% to 98%, and accuracy from 61% to 97%. Prediction: sensitivity ranged from 76% to 98%, specificity from 65% to 98%, and accuracy from 62% to 99%. Segmentation: accuracy reached up to 96.7%. Classification: sensitivity ranged from 78% to 97.34%, specificity from 65% to 99.32%, and accuracy from 71% to 97.94%. Furthermore, the literature shows that combining techniques can improve efficiency, exploiting their strengths for better overall performance.
Affiliation(s)
- Shima Pilehvari
- University of Regina, 3737 Wascana Parkway, Regina, SK, S4S 0A2, Canada
- Yasser Morgan
- University of Regina, 3737 Wascana Parkway, Regina, SK, S4S 0A2, Canada
- Wei Peng
- University of Regina, 3737 Wascana Parkway, Regina, SK, S4S 0A2, Canada
2
Li S, Li Z, Xue K, Zhou X, Ding C, Shao Y, Zhang S, Ruan T, Zheng M, Sun J. GC-CDSS: Personalized gastric cancer treatment recommendations system based on knowledge graph. Int J Med Inform 2024; 185:105402. [PMID: 38467099 DOI: 10.1016/j.ijmedinf.2024.105402]
Abstract
BACKGROUND Gastric cancer (GC) is one of the most common malignant tumors in the world, posing a serious threat to human health. Current gastric cancer treatment strategies emphasize a multidisciplinary team (MDT) consultation approach; however, there are numerous treatment guidelines and insights from clinical trials to weigh. The application of AI-based Clinical Decision Support Systems (CDSS) in tumor diagnosis and screening is increasing rapidly. OBJECTIVE The purpose of this study is to (1) summarize the treatment decision process for GC according to the treatment guidelines in China and create a knowledge graph (KG) for GC, and (2) build a CDSS on top of this KG and conduct an initial feasibility evaluation of the system. METHODS First, we summarized the decision-making process for treatment of GC. Then, we extracted the relevant decision nodes and relationships and used Neo4j to create the KG. After obtaining the initial node features for building the graph embedding model, graph embedding algorithms such as Node2Vec and GraphSAGE were used to construct the GC-CDSS. Finally, a retrospective cohort study was used to compare the consistency of treatment decisions between the GC-CDSS and the MDT. RESULTS We introduce a GC-CDSS constructed on a knowledge graph of Chinese GC treatment guidelines. The KG defines four types of nodes and four types of relationships and comprises 207 nodes and 300 relationships in total. The GC-CDSS is capable of providing dynamic, personalized diagnostic and treatment recommendations based on the patient's condition. In the retrospective cohort study comparing GC-CDSS recommendations with those of the MDT group, the overall consistency rate of treatment recommendations between the decision support system and the MDT team was 92.96%. CONCLUSIONS We construct a KG-based GC treatment support system, GC-CDSS. The GC-CDSS may help oncologists make treatment decisions more efficiently and promote standardization in primary healthcare settings.
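As a rough illustration of the guideline-as-graph idea described above, the sketch below encodes a few hypothetical decision nodes and treatment edges with networkx and walks the edges whose conditions match a patient record. The node names, staging rule, and recommendations are invented placeholders, not the published GC-CDSS schema, which is stored in Neo4j and combined with graph embeddings such as Node2Vec and GraphSAGE.

```python
import networkx as nx

# Toy directed knowledge graph: a disease node, a decision node, and treatment nodes.
# All names and conditions are illustrative, not the actual GC-CDSS content.
kg = nx.DiGraph()
kg.add_edge("GastricCancer", "StageCheck", relation="requires")
kg.add_edge("StageCheck", "EndoscopicResection", relation="recommends",
            condition=lambda p: p["stage"] == "I")
kg.add_edge("StageCheck", "Surgery+AdjuvantChemo", relation="recommends",
            condition=lambda p: p["stage"] in ("II", "III"))
kg.add_edge("StageCheck", "SystemicTherapy", relation="recommends",
            condition=lambda p: p["stage"] == "IV")

def recommend(patient):
    """Return treatments whose edge condition matches the patient record."""
    hits = []
    for _, treatment, data in kg.out_edges("StageCheck", data=True):
        cond = data.get("condition")
        if cond is not None and cond(patient):
            hits.append(treatment)
    return hits

print(recommend({"stage": "II"}))  # ['Surgery+AdjuvantChemo']
```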
Affiliation(s)
- Shuchun Li
- Department of General Surgery, Ruijin Hospital, Shanghai Jiao Tong University, School of Medicine, Shanghai 200025, China; Shanghai Minimally Invasive Surgery Center, Ruijin Hospital, Shanghai Jiao Tong University, School of Medicine, Shanghai 200025, China
- Zhiang Li
- Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Kui Xue
- Shanghai Artificial Intelligence Laboratory, Shanghai 200232, China
- Xueliang Zhou
- Department of General Surgery, Ruijin Hospital, Shanghai Jiao Tong University, School of Medicine, Shanghai 200025, China; Shanghai Minimally Invasive Surgery Center, Ruijin Hospital, Shanghai Jiao Tong University, School of Medicine, Shanghai 200025, China
- Chengsheng Ding
- Department of General Surgery, Ruijin Hospital, Shanghai Jiao Tong University, School of Medicine, Shanghai 200025, China; Shanghai Minimally Invasive Surgery Center, Ruijin Hospital, Shanghai Jiao Tong University, School of Medicine, Shanghai 200025, China
- Yanfei Shao
- Department of General Surgery, Ruijin Hospital, Shanghai Jiao Tong University, School of Medicine, Shanghai 200025, China; Shanghai Minimally Invasive Surgery Center, Ruijin Hospital, Shanghai Jiao Tong University, School of Medicine, Shanghai 200025, China
- Sen Zhang
- Department of General Surgery, Ruijin Hospital, Shanghai Jiao Tong University, School of Medicine, Shanghai 200025, China; Shanghai Minimally Invasive Surgery Center, Ruijin Hospital, Shanghai Jiao Tong University, School of Medicine, Shanghai 200025, China
- Tong Ruan
- Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Minhua Zheng
- Department of General Surgery, Ruijin Hospital, Shanghai Jiao Tong University, School of Medicine, Shanghai 200025, China; Shanghai Minimally Invasive Surgery Center, Ruijin Hospital, Shanghai Jiao Tong University, School of Medicine, Shanghai 200025, China
- Jing Sun
- Department of General Surgery, Ruijin Hospital, Shanghai Jiao Tong University, School of Medicine, Shanghai 200025, China; Shanghai Minimally Invasive Surgery Center, Ruijin Hospital, Shanghai Jiao Tong University, School of Medicine, Shanghai 200025, China
3
Campion JR, O'Connor DB, Lahiff C. Human-artificial intelligence interaction in gastrointestinal endoscopy. World J Gastrointest Endosc 2024; 16:126-135. [PMID: 38577646 PMCID: PMC10989254 DOI: 10.4253/wjge.v16.i3.126]
Abstract
The number and variety of applications of artificial intelligence (AI) in gastrointestinal (GI) endoscopy is growing rapidly. New technologies based on machine learning (ML) and convolutional neural networks (CNNs) are at various stages of development and deployment to assist patients and endoscopists in preparing for endoscopic procedures, in detection, diagnosis and classification of pathology during endoscopy and in confirmation of key performance indicators. Platforms based on ML and CNNs require regulatory approval as medical devices. Interactions between humans and the technologies we use are complex and are influenced by design, behavioural and psychological elements. Due to the substantial differences between AI and prior technologies, important differences may be expected in how we interact with advice from AI technologies. Human–AI interaction (HAII) may be optimised by developing AI algorithms to minimise false positives and designing platform interfaces to maximise usability. Human factors influencing HAII may include automation bias, alarm fatigue, algorithm aversion, learning effect and deskilling. Each of these areas merits further study in the specific setting of AI applications in GI endoscopy and professional societies should engage to ensure that sufficient emphasis is placed on human-centred design in development of new AI technologies.
Affiliation(s)
- John R Campion
- Department of Gastroenterology, Mater Misericordiae University Hospital, Dublin D07 AX57, Ireland
- School of Medicine, University College Dublin, Dublin D04 C7X2, Ireland
- Donal B O'Connor
- Department of Surgery, Trinity College Dublin, Dublin D02 R590, Ireland
- Conor Lahiff
- Department of Gastroenterology, Mater Misericordiae University Hospital, Dublin D07 AX57, Ireland
- School of Medicine, University College Dublin, Dublin D04 C7X2, Ireland
4
Kabir MM, Mridha M, Rahman A, Hamid MA, Monowar MM. Detection of COVID-19, pneumonia, and tuberculosis from radiographs using AI-driven knowledge distillation. Heliyon 2024; 10:e26801. [PMID: 38444490 PMCID: PMC10912466 DOI: 10.1016/j.heliyon.2024.e26801]
Abstract
Chest radiography is an essential diagnostic tool for respiratory diseases such as COVID-19, pneumonia, and tuberculosis because it accurately depicts the structures of the chest. However, accurate detection of these diseases from radiographs is a complex task that requires the availability of medical imaging equipment and trained personnel. Conventional deep learning models offer a viable automated solution for this task. However, the high complexity of these models often poses a significant obstacle to their practical deployment within automated medical applications, including mobile apps, web apps, and cloud-based platforms. This study addresses and resolves this dilemma by reducing the complexity of neural networks using knowledge distillation techniques (KDT). The proposed technique trains a neural network on an extensive collection of chest X-ray images and propagates the knowledge to a smaller network capable of real-time detection. To create a comprehensive dataset, we have integrated three popular chest radiograph datasets with chest radiographs for COVID-19, pneumonia, and tuberculosis. Our experiments show that this knowledge distillation approach outperforms conventional deep learning methods in terms of computational complexity and performance for real-time respiratory disease detection. Specifically, our system achieves an impressive average accuracy of 0.97, precision of 0.94, and recall of 0.97.
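For readers unfamiliar with knowledge distillation, the minimal PyTorch sketch below shows the standard soft-target loss that transfers a large teacher's predictions to a smaller student network. The temperature, loss weighting, and three-class setup are illustrative assumptions, not the exact configuration used in the study.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Soft-target distillation: KL(student || teacher) at temperature T,
    blended with ordinary cross-entropy on the hard labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Random tensors stand in for a batch of chest radiograph logits
# (3 classes: COVID-19, pneumonia, tuberculosis).
student_logits = torch.randn(8, 3, requires_grad=True)
teacher_logits = torch.randn(8, 3)
labels = torch.randint(0, 3, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```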
Affiliation(s)
- Md Mohsin Kabir
- Department of Computer Science & Engineering, Bangladesh University of Business & Technology, Dhaka-1216, Bangladesh
- M.F. Mridha
- Department of Computer Science, American International University-Bangladesh, Dhaka-1229, Bangladesh
- Ashifur Rahman
- Department of Computer Science & Engineering, Bangladesh University of Business & Technology, Dhaka-1216, Bangladesh
- Md. Abdul Hamid
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah-21589, Kingdom of Saudi Arabia
- Muhammad Mostafa Monowar
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah-21589, Kingdom of Saudi Arabia
5
Yang Y, Cai Z, Qiu S, Xu P. Vision transformer with masked autoencoders for referable diabetic retinopathy classification based on large-size retina image. PLoS One 2024; 19:e0299265. [PMID: 38446810 PMCID: PMC10917269 DOI: 10.1371/journal.pone.0299265]
Abstract
Computer-aided diagnosis systems based on deep learning algorithms have shown potential applications in rapid diagnosis of diabetic retinopathy (DR). Given the superior performance of Transformers over convolutional neural networks (CNN) on natural images, we attempted to develop a new model to classify referable DR from a limited number of large-size retinal images using a Transformer. A Vision Transformer (ViT) with Masked Autoencoders (MAE) was applied in this study to improve the classification performance for referable DR. We collected over 100,000 publicly available fundus retinal images larger than 224×224 and pre-trained ViT on these retinal images using MAE. The pre-trained ViT was then applied to classify referable DR, and its performance was compared with that of ViT pre-trained on ImageNet. Pre-training with over 100,000 retinal images using MAE yields a larger improvement in classification performance than pre-training with ImageNet. The accuracy, area under the curve (AUC), highest sensitivity, and highest specificity of the present model are 93.42%, 0.9853, 0.973, and 0.9539, respectively. This study shows that MAE provides more flexibility with respect to the input image and substantially reduces the number of images required. Moreover, the pre-training dataset in this study is much smaller than ImageNet, and pre-trained weights from ImageNet are not required.
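The core of MAE pre-training is masking a large fraction of image patches and reconstructing them from the visible ones. A minimal PyTorch sketch of the random patch masking step is shown below; the patch count, embedding size, and 75% mask ratio are illustrative defaults, not necessarily the settings used by the authors.

```python
import torch

def random_masking(patch_tokens, mask_ratio=0.75):
    """MAE-style masking: keep a random subset of patch tokens per image.
    patch_tokens: (batch, num_patches, dim). Returns the visible tokens and the
    per-image shuffle indices needed to restore patch order for the decoder."""
    b, n, d = patch_tokens.shape
    n_keep = int(n * (1 - mask_ratio))
    noise = torch.rand(b, n)                     # one random score per patch
    ids_shuffle = torch.argsort(noise, dim=1)    # random permutation per image
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(patch_tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))
    return visible, ids_shuffle

tokens = torch.randn(2, 196, 768)   # e.g. 14x14 patches from a 224x224 image
visible, ids = random_masking(tokens)
print(visible.shape)                # torch.Size([2, 49, 768])
```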
Affiliation(s)
- Yaoming Yang
- College of Science, China Jiliang University, Hangzhou, Zhejiang, China
- Zhili Cai
- College of Science, China Jiliang University, Hangzhou, Zhejiang, China
- Shuxia Qiu
- College of Science, China Jiliang University, Hangzhou, Zhejiang, China
- Key Laboratory of Intelligent Manufacturing Quality Big Data Tracing and Analysis of Zhejiang Province, Hangzhou, Zhejiang, China
- Peng Xu
- College of Science, China Jiliang University, Hangzhou, Zhejiang, China
- Key Laboratory of Intelligent Manufacturing Quality Big Data Tracing and Analysis of Zhejiang Province, Hangzhou, Zhejiang, China
6
Bu X, Zhang M, Zhang Z, Zhang Q. Computer-Aided Diagnoses for Sore Throat Based on Dynamic Uncertain Causality Graph. Diagnostics (Basel) 2023; 13:1219. [PMID: 37046437 PMCID: PMC10093466 DOI: 10.3390/diagnostics13071219]
Abstract
The causes of sore throat are complex. It can be caused by diseases of the pharynx, adjacent organs of the pharynx, or even systemic diseases. Therefore, a lack of medical knowledge and experience may lead to misdiagnoses or missed diagnoses of sore throat, especially for general practitioners in primary hospitals. This study aims to develop a computer-aided diagnostic system to assist clinicians in the differential diagnosis of sore throat. The system is developed based on the Dynamic Uncertain Causality Graph (DUCG) theory. We cooperated with medical specialists to establish a sore throat DUCG model as the diagnostic knowledge base. The construction of the model integrates epidemiological data, domain knowledge, and the clinical experience of medical specialists. The chain reasoning algorithm of the DUCG is used for the differential diagnosis of sore throat. The system can diagnose 27 sore throat-related diseases. The model builders initially tested it with 81 cases, all of which were correctly diagnosed. The system was then verified by a third-party hospital, and the diagnostic accuracy was 98%. The system has now been applied in hundreds of primary hospitals in Jiaozhou City, China, and doctors agree with its diagnostic results in more than 99.9% of cases. It is feasible to use the DUCG for the differential diagnosis of sore throat; it can assist primary-care doctors in clinical diagnosis, and its diagnostic results are acceptable to clinicians.
Affiliation(s)
- Xusong Bu
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Mingxia Zhang
- Otorhinolaryngology Head & Neck Surgery, Xuan Wu Hospital of the Capital Medical University, Beijing 100053, China
- Zhan Zhang
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Qin Zhang
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Institute of Nuclear and New Energy Technology, Tsinghua University, Beijing 100084, China
7
Oltu B, Karaca BK, Erdem H, Özgür A. A systematic review of transfer learning-based approaches for diabetic retinopathy detection. Gazi University Journal of Science 2022. [DOI: 10.35378/gujs.1081546]
Abstract
Cases of diabetes and related diabetic retinopathy (DR) have been increasing at an alarming rate in modern times. Early detection of DR is an important problem, since it may cause permanent blindness in its late stages. In the last two decades, many different approaches have been applied to DR detection. A review of the academic literature shows that deep neural networks (DNNs) have become the most preferred approach for DR detection. Among these DNN approaches, Convolutional Neural Network (CNN) models are the most widely used in the field of medical image classification. Designing a new CNN architecture is a tedious and time-consuming task, and training an enormous number of parameters is also difficult. For this reason, instead of training CNNs from scratch, the use of pre-trained models has been suggested in recent years as a transfer learning approach. Accordingly, the present review focuses on DNN- and transfer learning-based applications of DR detection, considering 43 publications from 2015 to 2021. The reviewed papers are summarized in 3 figures and 10 tables, giving information about 29 pre-trained CNN models, 13 DR data sets, and standard performance metrics.
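A typical transfer learning setup of the kind surveyed here replaces the classification head of an ImageNet-pretrained CNN and fine-tunes it on fundus images. Below is a minimal PyTorch/torchvision sketch (assuming torchvision >= 0.13 for the weights API) with an illustrative two-class referable/non-referable output; it is not a specific pipeline from the reviewed papers.

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone, freeze its feature extractor, and attach
# a new 2-class head (referable DR vs. no referable DR). Weights are downloaded
# on first use; unfreeze later layers to fine-tune more aggressively.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for param in model.parameters():
    param.requires_grad = False                    # feature extraction only
model.fc = nn.Linear(model.fc.in_features, 2)      # new head is trainable by default
```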
Affiliation(s)
- Burcu Oltu
- Başkent University, Faculty of Engineering
8
Information Extraction from the Text Data on Traditional Chinese Medicine: A Review on Tasks, Challenges, and Methods from 2010 to 2021. Evid Based Complement Alternat Med 2022; 2022:1679589. [PMID: 35600940 PMCID: PMC9122692 DOI: 10.1155/2022/1679589]
Abstract
Background The practice of traditional Chinese medicine (TCM) began several thousand years ago, and the knowledge of practitioners is recorded in paper and electronic versions of case notes, manuscripts, and books in multiple languages. Developing a method of information extraction (IE) from these sources to generate a cohesive data set would be a great contribution to the medical field. The goal of this study was to perform a systematic review of the status of IE from TCM sources over the last 10 years. Methods We conducted a search of four literature databases for articles published from 2010 to 2021 that focused on the use of natural language processing (NLP) methods to extract information from unstructured TCM text data. Two reviewers and one adjudicator contributed to article search, article selection, data extraction, and synthesis processes. Results We retrieved 1234 records, 49 of which met our inclusion criteria. We used the articles to (i) assess the key tasks of IE in the TCM domain, (ii) summarize the challenges to extracting information from TCM text data, and (iii) identify effective frameworks, models, and key findings of TCM IE through classification. Conclusions Our analysis showed that IE from TCM text data has improved over the past decade. However, the extraction of TCM text still faces some challenges involving the lack of gold standard corpora, nonstandardized expressions, and multiple types of relations. In the future, IE work should be promoted by extracting more existing entities and relations, constructing gold standard data sets, and exploring IE methods based on a small amount of labeled data. Furthermore, fine-grained and interpretable IE technologies are necessary for further exploration.
9
Fuhrman JD, Gorre N, Hu Q, Li H, El Naqa I, Giger ML. A review of explainable and interpretable AI with applications in COVID-19 imaging. Med Phys 2022; 49:1-14. [PMID: 34796530 PMCID: PMC8646613 DOI: 10.1002/mp.15359]
Abstract
The development of medical imaging artificial intelligence (AI) systems for evaluating COVID-19 patients has demonstrated potential for improving clinical decision making and assessing patient outcomes during the recent COVID-19 pandemic. These systems have been applied to many medical imaging tasks, including disease diagnosis and patient prognosis, and have augmented other clinical measurements to better inform treatment decisions. Because these systems are used in life-or-death decisions, clinical implementation relies on user trust in the AI output. This has led many developers to utilize explainability techniques in an attempt to help a user understand when an AI algorithm is likely to succeed and which cases may be problematic for automatic assessment, thus increasing the potential for rapid clinical translation. AI application to COVID-19 has recently been marred by controversy. This review discusses several aspects of explainable and interpretable AI as they pertain to the evaluation of COVID-19 disease and how such techniques can restore trust in AI applications to this disease. This includes the identification of common tasks that are relevant to explainable medical imaging AI, an overview of several modern approaches for producing explainable output as appropriate for a given imaging scenario, a discussion of how to evaluate explainable AI, and recommendations for best practices in explainable/interpretable AI implementation. This review will allow developers of AI systems for COVID-19 to quickly understand the basics of several explainable AI techniques and assist in the selection of an approach that is both appropriate and effective for a given scenario.
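Saliency methods such as Grad-CAM are among the explainability techniques commonly applied to imaging models like those reviewed here. The PyTorch sketch below computes a Grad-CAM heat map from the last convolutional block of a ResNet using a forward hook and autograd; the model, layer choice, and random input are illustrative assumptions, not a pipeline from the review.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Minimal Grad-CAM: weight the last convolutional feature maps by the gradient of
# the predicted class score, sum over channels, and upsample to image size.
model = models.resnet18(weights=None).eval()   # untrained stand-in for a diagnostic CNN
feats = {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(maps=o))

x = torch.randn(1, 3, 224, 224)                # placeholder for a chest image
logits = model(x)
score = logits[0, logits.argmax()]             # score of the predicted class
grads = torch.autograd.grad(score, feats["maps"])[0]      # d(score)/d(feature maps)

weights = grads.mean(dim=(2, 3), keepdim=True)             # pooled gradients per channel
cam = F.relu((weights * feats["maps"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]
```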
Affiliation(s)
- Jordan D. Fuhrman
- Medical Imaging and Data Resource Center (MIDRC), The University of Chicago, Chicago, Illinois, USA
- Department of Radiology, The University of Chicago, Chicago, Illinois, USA
- Naveena Gorre
- Medical Imaging and Data Resource Center (MIDRC), The University of Chicago, Chicago, Illinois, USA
- Department of Machine Learning, Moffitt Cancer Center, Tampa, Florida, USA
- Qiyuan Hu
- Medical Imaging and Data Resource Center (MIDRC), The University of Chicago, Chicago, Illinois, USA
- Department of Radiology, The University of Chicago, Chicago, Illinois, USA
- Hui Li
- Medical Imaging and Data Resource Center (MIDRC), The University of Chicago, Chicago, Illinois, USA
- Department of Radiology, The University of Chicago, Chicago, Illinois, USA
- Issam El Naqa
- Medical Imaging and Data Resource Center (MIDRC), The University of Chicago, Chicago, Illinois, USA
- Department of Machine Learning, Moffitt Cancer Center, Tampa, Florida, USA
- Maryellen L. Giger
- Medical Imaging and Data Resource Center (MIDRC), The University of Chicago, Chicago, Illinois, USA
- Department of Radiology, The University of Chicago, Chicago, Illinois, USA
10
Hybrid algorithm for the classification of prostate cancer patients of the MCC-Spain study based on support vector machines and genetic algorithms. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2019.08.113]
11
Intelligent Disease Prediagnosis Only Based on Symptoms. J Healthc Eng 2021; 2021:9963576. [PMID: 34381587 PMCID: PMC8352683 DOI: 10.1155/2021/9963576]
Abstract
People are often concerned with the relationships between symptoms and diseases when seeking medical advice. In this paper, medical data are first divided into three sets: records related to main disease categories, records related to subclass disease types, and records of specific diseases. Two disease recognition methods based only on symptoms are then given for identifying the main disease category, the subclass disease type, and the specific disease. The methods adopt a neural network and a support vector machine (SVM), respectively. For validation, the accuracy of the two diagnostic methods is tested and compared. Results show that automatic disease prediction based only on symptoms is possible for intelligent medical triage and common disease diagnosis.
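As a toy illustration of symptom-only classification with an SVM, the scikit-learn sketch below vectorizes binary symptom indicators and predicts a coarse disease category. The symptoms, labels, and tiny training set are invented for illustration and do not reflect the paper's data or feature design.

```python
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction import DictVectorizer

# Each record is a bag of binary symptom indicators mapped to a coarse category.
records = [
    ({"fever": 1, "cough": 1, "dyspnea": 1}, "respiratory"),
    ({"cough": 1, "sore_throat": 1}, "respiratory"),
    ({"abdominal_pain": 1, "nausea": 1}, "digestive"),
    ({"headache": 1, "dizziness": 1}, "neurological"),
]
X = [symptoms for symptoms, _ in records]
y = [label for _, label in records]

# DictVectorizer turns sparse symptom dictionaries into a numeric feature matrix.
clf = make_pipeline(DictVectorizer(sparse=False), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict([{"fever": 1, "cough": 1}]))   # -> ['respiratory']
```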
12
Gong B, Shi J, Han X, Zhang H, Huang Y, Hu L, Wang J, Du J, Shi J. Diagnosis of Infantile Hip Dysplasia with B-mode Ultrasound via Two-stage Meta-learning Based Deep Exclusivity Regularized Machine. IEEE J Biomed Health Inform 2021; 26:334-344. [PMID: 34191735 DOI: 10.1109/jbhi.2021.3093649]
Abstract
B-mode ultrasound (BUS)-based computer-aided diagnosis (CAD) has shown its effectiveness for developmental dysplasia of the hip (DDH) in infants. In this work, a two-stage meta-learning based deep exclusivity regularized machine (TML-DERM) is proposed for BUS-based CAD of DDH. TML-DERM integrates a deep neural network (DNN) and an exclusivity regularized machine into a unified framework to simultaneously improve feature representation and classification performance. The first-stage meta-learning is mainly conducted on the DNN module to alleviate the overfitting caused by the significantly increased number of parameters in the DNN, and a random sampling strategy is adopted to self-generate the meta-tasks; the second-stage meta-learning mainly learns the combination of multiple weak classifiers through a weight vector to improve classification performance, and also optimizes the unified framework again. Experimental results on a DDH ultrasound dataset show that the proposed TML-DERM achieves superior classification performance, with a mean accuracy of 85.89%, sensitivity of 86.54%, and specificity of 85.23%.
13
Stecker IR, Freeman MS, Sitaraman S, Hall CS, Niedbalski PJ, Hendricks AJ, Martin EP, Weaver TE, Cleveland ZI. Preclinical MRI to Quantify Pulmonary Disease Severity and Trajectories in Poorly Characterized Mouse Models: A Pedagogical Example Using Data from Novel Transgenic Models of Lung Fibrosis. J Magn Reson Open 2021; 6-7:100013. [PMID: 34414381 PMCID: PMC8372031 DOI: 10.1016/j.jmro.2021.100013]
Abstract
Structural remodeling in lung disease is progressive and heterogeneous, making temporally and spatially explicit information necessary to understand disease initiation and progression. While mouse models are essential to elucidate mechanistic pathways underlying disease, the experimental tools commonly available to quantify lung disease burden are typically invasive (e.g., histology). This necessitates large cross-sectional studies with terminal endpoints, which increases experimental complexity and expense. Alternatively, magnetic resonance imaging (MRI) provides information noninvasively, thus permitting robust, repeated-measures statistics. Although lung MRI is challenging due to low tissue density and rapid apparent transverse relaxation (T2* <1 ms), various imaging methods have been proposed to quantify disease burden. However, there are no widely accepted strategies for preclinical lung MRI. As such, it can be difficult for researchers who lack lung imaging expertise to design experimental protocols-particularly for novel mouse models. Here, we build upon prior work from several research groups to describe a widely applicable acquisition and analysis pipeline that can be implemented without prior preclinical pulmonary MRI experience. Our approach utilizes 3D radial ultrashort echo time (UTE) MRI with retrospective gating and lung segmentation is facilitated with a deep-learning algorithm. This pipeline was deployed to assess disease dynamics over 255 days in novel, transgenic mouse models of lung fibrosis based on disease-associated, loss-of-function mutations in Surfactant Protein-C. Previously identified imaging biomarkers (tidal volume, signal coefficient of variation, etc.) were calculated semi-automatically from these data, with an objectively-defined high signal volume identified as the most robust metric. Beyond quantifying disease dynamics, we discuss common pitfalls encountered in preclinical lung MRI and present systematic approaches to identify and mitigate these challenges. While the experimental results and specific pedagogical examples are confined to lung fibrosis, the tools and approaches presented should be broadly useful to quantify structural lung disease in a wide range of mouse models.
Affiliation(s)
- Ian R Stecker
- Department of Biomedical Engineering, University of Cincinnati, Cincinnati, OH 45221
- Center for Pulmonary Imaging Research, Division of Pulmonary Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, OH 45229
- Matthew S Freeman
- Center for Pulmonary Imaging Research, Division of Pulmonary Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, OH 45229
- Sneha Sitaraman
- Division of Neonatology and Pulmonary Biology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH 45229
- Chase S Hall
- Division of Pulmonary and Critical Care, University of Kansas Medical Center, Kansas City, KS 66160
- Peter J Niedbalski
- Center for Pulmonary Imaging Research, Division of Pulmonary Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, OH 45229
- Division of Pulmonary and Critical Care, University of Kansas Medical Center, Kansas City, KS 66160
- Alexandra J Hendricks
- Department of Biomedical Engineering, University of Cincinnati, Cincinnati, OH 45221
- Center for Pulmonary Imaging Research, Division of Pulmonary Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, OH 45229
- Emily P Martin
- Division of Neonatology and Pulmonary Biology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH 45229
- Timothy E Weaver
- Department of Pediatrics, University of Cincinnati, Cincinnati, OH 45221
- Division of Neonatology and Pulmonary Biology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH 45229
- Zackary I Cleveland
- Department of Biomedical Engineering, University of Cincinnati, Cincinnati, OH 45221
- Center for Pulmonary Imaging Research, Division of Pulmonary Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, OH 45229
- Department of Pediatrics, University of Cincinnati, Cincinnati, OH 45221
14
Shen YT, Chen L, Yue WW, Xu HX. Artificial intelligence in ultrasound. Eur J Radiol 2021; 139:109717. [PMID: 33962110 DOI: 10.1016/j.ejrad.2021.109717]
Abstract
Ultrasound (US), a flexible green imaging modality, is expanding globally as a first-line imaging technique in various clinical fields, following the continual emergence of advanced ultrasonic technologies and the well-established US-based digital health system. In US practice, qualified physicians must manually acquire and visually evaluate images for the detection, identification, and monitoring of diseases. Diagnostic performance is inevitably reduced by the intrinsically high operator dependence of US. In contrast, artificial intelligence (AI) excels at automatically recognizing complex patterns and providing quantitative assessment of imaging data, showing high potential to assist physicians in acquiring more accurate and reproducible results. In this article, we provide a general understanding of AI, machine learning (ML), and deep learning (DL) technologies. We then review the rapidly growing applications of AI, and especially DL, in the field of US across the following anatomical regions: thyroid, breast, abdomen and pelvis, obstetrics, heart and blood vessels, the musculoskeletal system, and other organs, covering image quality control, anatomy localization, object detection, lesion segmentation, and computer-aided diagnosis and prognosis evaluation. Finally, we offer our perspective on the challenges and opportunities for the clinical practice of biomedical AI systems in US.
Affiliation(s)
- Yu-Ting Shen
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China
- Liang Chen
- Department of Gastroenterology, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, 200072, PR China
- Wen-Wen Yue
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China
- Hui-Xiong Xu
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China
15
Zhao Y, Rhee DJ, Cardenas C, Court LE, Yang J. Training deep-learning segmentation models from severely limited data. Med Phys 2021; 48:1697-1706. [PMID: 33474727 PMCID: PMC8058262 DOI: 10.1002/mp.14728]
Abstract
PURPOSE To enable generation of high-quality deep learning segmentation models from severely limited contoured cases (e.g., ~10 cases). METHODS Thirty head and neck computed tomography (CT) scans with well-defined contours were deformably registered to 200 CT scans of the same anatomic site without contours. Acquired deformation vector fields were used to train a principal component analysis (PCA) model for each of the 30 contoured CT scans by capturing the mean deformation and most prominent variations. Each PCA model can produce an infinite number of synthetic CT scans and corresponding contours by applying random deformations. We used 300, 600, 1000, and 2000 synthetic CT scans and contours generated from one PCA model to train V-Net, a 3D convolutional neural network architecture, to segment parotid and submandibular glands. We repeated the training using same numbers of training cases generated from 7, 10, 20, and 30 PCA models, with the data distributed evenly between each PCA model. Performance of the segmentation models was evaluated with Dice similarity coefficients between auto-generated contours and physician-drawn contours on 162 test CT scans for parotid glands and another 21 test CT scans for submandibular glands. RESULTS Dice values varied with the number of synthetic CT scans and the number of PCA models used to train the network. By using 2000 synthetic CT scans generated from 10 PCA models, we achieved Dice values of 82.8% ± 6.8% for right parotid, 82.0% ± 6.9% for left parotid, and 74.2% ± 6.8% for submandibular glands. These results are comparable with those obtained from state-of-the-art auto-contouring approaches, including a deep learning network trained from more than 1000 contoured patients and a multi-atlas algorithm from 12 well-contoured atlases. Improvement was marginal when >10 PCA models or >2000 synthetic CT scans were used. CONCLUSIONS We demonstrated an effective data augmentation approach to train high-quality deep learning segmentation models from a limited number of well-contoured patient cases.
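A minimal sketch of the PCA-based augmentation idea is given below: fit PCA to flattened deformation vector fields (DVFs) registered to one contoured case, then synthesize new deformations by perturbing the mean with randomly scaled principal modes. Array sizes and the Gaussian sampling rule are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder DVFs: one flattened (dx, dy, dz) field per deformable registration.
n_registrations = 200
grid = (32, 32, 16)
dvfs = np.random.randn(n_registrations, np.prod(grid) * 3)

pca = PCA(n_components=10)
coeffs = pca.fit_transform(dvfs)       # per-registration coefficients in mode space
std = coeffs.std(axis=0)               # spread of each principal mode

def sample_deformation():
    """Synthesize a new DVF: mean deformation plus randomly scaled principal modes."""
    z = np.random.randn(pca.n_components_) * std
    return pca.mean_ + z @ pca.components_      # back-project to voxel space

synthetic_dvf = sample_deformation().reshape(*grid, 3)
# Applying synthetic_dvf to the contoured CT and its contours would yield one
# synthetic training case; repeating the draw yields arbitrarily many.
```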
Affiliation(s)
- Yao Zhao
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX
- The University of Texas MD Anderson Graduate School of Biomedical Science, Houston, TX
- Dong Joo Rhee
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX
- The University of Texas MD Anderson Graduate School of Biomedical Science, Houston, TX
- Carlos Cardenas
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX
- Laurence E. Court
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX
- Jinzhong Yang
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX
16
Ahmadi SA, Vivar G, Navab N, Möhwald K, Maier A, Hadzhikolev H, Brandt T, Grill E, Dieterich M, Jahn K, Zwergal A. Modern machine-learning can support diagnostic differentiation of central and peripheral acute vestibular disorders. J Neurol 2020; 267:143-152. [PMID: 32529578 PMCID: PMC7718180 DOI: 10.1007/s00415-020-09931-z]
Abstract
BACKGROUND Diagnostic classification of central vs. peripheral etiologies in acute vestibular disorders remains a challenge in the emergency setting. Novel machine-learning methods may help to support diagnostic decisions. In the current study, we tested the performance of standard and machine-learning approaches in the classification of consecutive patients with acute central or peripheral vestibular disorders. METHODS 40 Patients with vestibular stroke (19 with and 21 without acute vestibular syndrome (AVS), defined by the presence of spontaneous nystagmus) and 68 patients with peripheral AVS due to vestibular neuritis were recruited in the emergency department, in the context of the prospective EMVERT trial (EMergency VERTigo). All patients received a standardized neuro-otological examination including videooculography and posturography in the acute symptomatic stage and an MRI within 7 days after symptom onset. Diagnostic performance of state-of-the-art scores, such as HINTS (Head Impulse, gaze-evoked Nystagmus, Test of Skew) and ABCD2 (Age, Blood, Clinical features, Duration, Diabetes), for the differentiation of vestibular stroke vs. peripheral AVS was compared to various machine-learning approaches: (i) linear logistic regression (LR), (ii) non-linear random forest (RF), (iii) artificial neural network, and (iv) geometric deep learning (Single/MultiGMC). A prospective classification was simulated by ten-fold cross-validation. We analyzed whether machine-estimated feature importances correlate with clinical experience. RESULTS Machine-learning methods (e.g., MultiGMC) outperform univariate scores, such as HINTS or ABCD2, for differentiation of all vestibular strokes vs. peripheral AVS (MultiGMC area-under-the-curve (AUC): 0.96 vs. HINTS/ABCD2 AUC: 0.71/0.58). HINTS performed similarly to MultiGMC for vestibular stroke with AVS (AUC: 0.86), but more poorly for vestibular stroke without AVS (AUC: 0.54). Machine-learning models learn to put different weights on particular features, each of which is relevant from a clinical viewpoint. Established non-linear machine-learning methods like RF and linear methods like LR are less powerful classification models (AUC: 0.89 vs. 0.62). CONCLUSIONS Established clinical scores (such as HINTS) provide a valuable baseline assessment for stroke detection in acute vestibular syndromes. In addition, machine-learning methods may have the potential to increase sensitivity and selectivity in the establishment of a correct diagnosis.
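The model comparison described above can be reproduced in outline with scikit-learn: ten-fold cross-validated ROC-AUC for a linear and a non-linear classifier on the same feature matrix. The sketch below uses random placeholder features rather than the EMVERT data and omits the neural-network and geometric deep-learning variants.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold

# Placeholder cohort: 108 patients with 20 neuro-otological features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(108, 20))
y = rng.integers(0, 2, size=108)        # 1 = vestibular stroke, 0 = peripheral AVS

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("random forest", RandomForestClassifier(n_estimators=300, random_state=0))]:
    auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.2f} +/- {auc.std():.2f}")
```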
Affiliation(s)
- Seyed-Ahmad Ahmadi
- German Center for Vertigo and Balance Disorders, Ludwig-Maximilians-University, Munich, Germany
- Computer Aided Medical Procedures, Technical University, Munich, Germany
- Gerome Vivar
- German Center for Vertigo and Balance Disorders, Ludwig-Maximilians-University, Munich, Germany
- Computer Aided Medical Procedures, Technical University, Munich, Germany
- Nassir Navab
- Computer Aided Medical Procedures, Technical University, Munich, Germany
- Ken Möhwald
- German Center for Vertigo and Balance Disorders, Ludwig-Maximilians-University, Munich, Germany
- Department of Neurology, Ludwig-Maximilians-University, Marchioninistrasse 15, 81377, Munich, Germany
- Andreas Maier
- German Center for Vertigo and Balance Disorders, Ludwig-Maximilians-University, Munich, Germany
- Department of Neurology, Ludwig-Maximilians-University, Marchioninistrasse 15, 81377, Munich, Germany
- Hristo Hadzhikolev
- German Center for Vertigo and Balance Disorders, Ludwig-Maximilians-University, Munich, Germany
- Department of Neurology, Ludwig-Maximilians-University, Marchioninistrasse 15, 81377, Munich, Germany
- Thomas Brandt
- German Center for Vertigo and Balance Disorders, Ludwig-Maximilians-University, Munich, Germany
- Clinical Neurosciences, Ludwig-Maximilians-University, Munich, Germany
- Eva Grill
- German Center for Vertigo and Balance Disorders, Ludwig-Maximilians-University, Munich, Germany
- Institute for Medical Information Processing, Biometry, and Epidemiology, Ludwig-Maximilians-University, Munich, Germany
- Marianne Dieterich
- German Center for Vertigo and Balance Disorders, Ludwig-Maximilians-University, Munich, Germany
- Department of Neurology, Ludwig-Maximilians-University, Marchioninistrasse 15, 81377, Munich, Germany
- Munich Cluster of Systems Neurology, SyNergy, Munich, Germany
- Klaus Jahn
- German Center for Vertigo and Balance Disorders, Ludwig-Maximilians-University, Munich, Germany
- Department of Neurology, Schön Klinik Bad Aibling, Munich, Germany
- Andreas Zwergal
- German Center for Vertigo and Balance Disorders, Ludwig-Maximilians-University, Munich, Germany
- Department of Neurology, Ludwig-Maximilians-University, Marchioninistrasse 15, 81377, Munich, Germany
17
Using Patent Technology Networks to Observe Neurocomputing Technology Hotspots and Development Trends. Sustainability 2020. [DOI: 10.3390/su12187696]
Abstract
In recent years, development in the fields of big data and artificial intelligence has given rise to interest among scholars in neurocomputing-related applications. Neurocomputing has relatively widespread applications because it is a critical technology in numerous fields. However, most studies on neurocomputing have focused on improving related algorithms or application fields; they have failed to highlight the main technology hotspots and development trends from a comprehensive viewpoint. To fill this research gap, this study adopts a new viewpoint and takes technological fields as its main subject. Neurocomputing patents are subjected to network analysis to construct a map of neurocomputing technology hotspots. The results reveal that the neurocomputing technology hotspots are algorithms, methods or devices for reading or recognizing printed or written characters or patterns, and digital storage characterized by the use of particular electric or magnetic storage elements. Furthermore, the technology hotspots are found not to be clustered around particular fields but, rather, to be multidisciplinary. Applications that combine neurocomputing with digital storage are currently undergoing the most extensive development. Finally, patentee analysis reveals that neurocomputing technology is mainly being developed by information technology corporations, indicating its market development potential. This study constructs a technology hotspot network model to elucidate the development trend of neurocomputing technology, and the findings may serve as a reference for industries planning to promote emerging technologies.
18
Radiomics-Based Prediction of Overall Survival in Lung Cancer Using Different Volumes-Of-Interest. Appl Sci (Basel) 2020. [DOI: 10.3390/app10186425]
Abstract
Lung cancer accounts for the largest number of deaths worldwide among oncological pathologies. To guarantee the most effective care for patients with such aggressive tumours, radiomics is growing as a novel and promising research field that aims at extracting knowledge from data in terms of quantitative measures computed from diagnostic images, with prognostic and predictive ends. This knowledge could be used to optimize current treatments and to maximize their efficacy. To this end, we study the use of such quantitative biomarkers computed from CT images of patients affected by Non-Small Cell Lung Cancer to predict Overall Survival. The main contributions of this work are twofold: first, we consider different volumes of interest for the same patient to find out whether the volume surrounding the visible lesions can provide useful information; second, we introduce 3D Local Binary Patterns, which are texture measures scarcely explored in radiomics. As further validation, we show that the proposed signature outperforms not only the features automatically computed by a deep learning-based approach but also another state-of-the-art signature based on other handcrafted features.
19
Martins SB, Telea AC, Falcão AX. Investigating the impact of supervoxel segmentation for unsupervised abnormal brain asymmetry detection. Comput Med Imaging Graph 2020; 85:101770. [PMID: 32854021 DOI: 10.1016/j.compmedimag.2020.101770]
Abstract
Several brain disorders are associated with abnormal brain asymmetries (asymmetric anomalies), and several computer-based methods aim to detect such anomalies automatically. Recent advances in this area use automatic, unsupervised techniques that extract pairs of symmetric supervoxels in the hemispheres, model normal brain asymmetries for each pair from healthy subjects, and treat outliers as anomalies. Yet, there is no deep understanding of the impact of supervoxel segmentation quality on abnormal asymmetry detection, especially for small anomalies, nor of the added value of using a specialized model for each supervoxel pair instead of a single global appearance model. We aim to answer these questions through a detailed evaluation of different scenarios for supervoxel segmentation and classification for detecting abnormal brain asymmetries. Experimental results on 3D MR-T1 brain images of stroke patients confirm the importance of high-quality supervoxels that fit the anomalies and of using a specific classifier for each supervoxel. We then present a refinement of the detection method that reduces the number of false-positive supervoxels, thereby making the detection method easier to use for visual inspection and analysis of the found anomalies.
Affiliation(s)
- Samuel B Martins
- Laboratory of Image Data Science (LIDS), Institute of Computing, University of Campinas, Brazil; Bernoulli Institute, University of Groningen, The Netherlands; Federal Institute of São Paulo, Campinas, Brazil
- Alexandru C Telea
- Department of Information and Computing Sciences, Utrecht University, The Netherlands
- Alexandre X Falcão
- Laboratory of Image Data Science (LIDS), Institute of Computing, University of Campinas, Brazil
20
A Survey of Deep-Learning Applications in Ultrasound: Artificial Intelligence-Powered Ultrasound for Improving Clinical Workflow. J Am Coll Radiol 2020; 16:1318-1328. [PMID: 31492410 DOI: 10.1016/j.jacr.2019.06.004]
Abstract
Ultrasound is the most commonly used imaging modality in clinical practice because it is a nonionizing, low-cost, and portable point-of-care imaging tool that provides real-time images. Artificial intelligence (AI)-powered ultrasound is becoming more mature and getting closer to routine clinical applications in recent times because of an increased need for efficient and objective acquisition and evaluation of ultrasound images. Because ultrasound images involve operator-, patient-, and scanner-dependent variations, the adaptation of classical machine learning methods to clinical applications becomes challenging. With their self-learning ability, deep-learning (DL) methods are able to harness exponentially growing graphics processing unit computing power to identify abstract and complex imaging features. This has given rise to tremendous opportunities such as providing robust and generalizable AI models for improving image acquisition, real-time assessment of image quality, objective diagnosis and detection of diseases, and optimizing ultrasound clinical workflow. In this report, the authors review current DL approaches and research directions in rapidly advancing ultrasound technology and present their outlook on future directions and trends for DL techniques to further improve diagnosis, reduce health care cost, and optimize ultrasound clinical workflow.
21
Wani IM, Arora S. Computer-aided diagnosis systems for osteoporosis detection: a comprehensive survey. Med Biol Eng Comput 2020; 58:1873-1917. [PMID: 32583141 DOI: 10.1007/s11517-020-02171-3]
Abstract
Computer-aided diagnosis (CAD) systems have revolutionized the field of medical diagnosis. They assist in improving treatment potential and increasing survival rates by diagnosing diseases early in an efficient, timely, and cost-effective way. Automatic segmentation has enabled radiologists to segment the region of interest far more efficiently than manual segmentation, improving the diagnosis of diseases from medical images. The aim of this paper is to survey vision-based CAD systems, focusing especially on segmentation techniques for the pathological bone disease known as osteoporosis. Osteoporosis is a condition in which bone mineral density decreases and the bones become porous, making them easily susceptible to fracture from a small injury or fall. The article covers the image acquisition techniques used for osteoporosis diagnosis. It also discusses the advanced machine learning paradigms employed in segmentation for osteoporosis, and briefly describes other image processing steps in osteoporosis analysis, such as feature extraction and classification. Finally, the paper gives future directions for improving osteoporosis diagnosis and presents a proposed architecture.
Affiliation(s)
- Insha Majeed Wani
- School of Computer Science and Engineering, SMVDU, Katra, J&K, India
- Sakshi Arora
- School of Computer Science and Engineering, SMVDU, Katra, J&K, India
22
Yamanakkanavar N, Choi JY, Lee B. MRI Segmentation and Classification of Human Brain Using Deep Learning for Diagnosis of Alzheimer's Disease: A Survey. Sensors (Basel) 2020; 20:3243. [PMID: 32517304 PMCID: PMC7313699 DOI: 10.3390/s20113243]
Abstract
Many neurological diseases and delineating pathological regions have been analyzed, and the anatomical structure of the brain researched with the aid of magnetic resonance imaging (MRI). It is important to identify patients with Alzheimer's disease (AD) early so that preventative measures can be taken. A detailed analysis of the tissue structures from segmented MRI leads to a more accurate classification of specific brain disorders. Several segmentation methods to diagnose AD have been proposed with varying complexity. Segmentation of the brain structure and classification of AD using deep learning approaches has gained attention as it can provide effective results over a large set of data. Hence, deep learning methods are now preferred over state-of-the-art machine learning methods. We aim to provide an outline of current deep learning-based segmentation approaches for the quantitative analysis of brain MRI for the diagnosis of AD. Here, we report how convolutional neural network architectures are used to analyze the anatomical brain structure and diagnose AD, discuss how brain MRI segmentation improves AD classification, describe the state-of-the-art approaches, and summarize their results using publicly available datasets. Finally, we provide insight into current issues and discuss possible future research directions in building a computer-aided diagnostic system for AD.
Affiliation(s)
- Nagaraj Yamanakkanavar: Department of Information and Communications Engineering, Chosun University, Gwangju 61452, Korea
- Jae Young Choi: Division of Computer & Electronic Systems Engineering, Hankuk University of Foreign Studies, Yongin 17035, Korea
- Bumshik Lee: Department of Information and Communications Engineering, Chosun University, Gwangju 61452, Korea
23
Gudigar A, Raghavendra U, Hegde A, Kalyani M, Ciaccio EJ, Rajendra Acharya U. Brain pathology identification using computer aided diagnostic tool: A systematic review. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 187:105205. [PMID: 31786457 DOI: 10.1016/j.cmpb.2019.105205] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/29/2019] [Revised: 11/12/2019] [Accepted: 11/12/2019] [Indexed: 05/28/2023]
Abstract
Computer-aided diagnosis (CAD) has become a significant tool for improving patient quality of life by reducing human error in diagnosis. CAD can automatically expedite decision-making on complex clinical data. Since brain diseases can be fatal, rapid identification of brain pathology to prolong patient life is an important research topic. Many algorithms have been proposed for efficient brain pathology identification (BPI) over the past decade, and the various image processing algorithms must be continually refined to improve performance on the automatic BPI task. In this paper, a systematic survey of contemporary BPI algorithms using brain magnetic resonance imaging (MRI) is presented. A summary of recent literature provides investigators with a helpful synopsis of the domain. Furthermore, future research directions for enhancing BPI performance are indicated.
Affiliation(s)
- Anjan Gudigar: Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- U Raghavendra: Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Ajay Hegde: Neurosurgery, Institute of Neurological Sciences, NHS Greater Glasgow and Clyde, Glasgow, United Kingdom
- M Kalyani: Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Edward J Ciaccio: Department of Medicine, Columbia University Medical Center, New York, United States
- U Rajendra Acharya: Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Clementi 599489, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Clementi 599491, Singapore; International Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto, Japan
24
Brunese L, Mercaldo F, Reginelli A, Santone A. An ensemble learning approach for brain cancer detection exploiting radiomic features. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 185:105134. [PMID: 31675644 DOI: 10.1016/j.cmpb.2019.105134] [Citation(s) in RCA: 47] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/09/2019] [Revised: 09/27/2019] [Accepted: 10/15/2019] [Indexed: 05/03/2023]
Abstract
BACKGROUND AND OBJECTIVE Brain cancer is one of the most aggressive tumours: 70% of patients diagnosed with this malignancy will not survive. Early detection of brain tumours can be fundamental to increasing survival rates. Brain cancers are classified into four grades (I, II, III and IV) according to how normal or abnormal the brain cells look. This work aims to recognize the different brain cancer grades by analysing brain magnetic resonance images. METHODS A method to identify the components of an ensemble learner is proposed. The ensemble learner discriminates between brain cancer grades using non-invasive radiomic features drawn from five groups: First Order, Shape, Gray Level Co-occurrence Matrix, Gray Level Run Length Matrix and Gray Level Size Zone Matrix. Feature effectiveness is evaluated through hypothesis testing, decision boundaries, performance analysis and calibration plots, and the best candidate classifiers are then selected for the ensemble learner. RESULTS We evaluate the proposed method on 111,205 brain magnetic resonance images from two datasets freely available for research purposes. The results are encouraging: accuracy reaches 99% for detection of benign grade I and malignant grade II, III and IV brain cancer. CONCLUSION The experimental results confirm that the ensemble learner designed with the proposed method outperforms current state-of-the-art approaches to brain cancer grade detection from magnetic resonance images.
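To make that pipeline concrete, here is a minimal sketch rather than the authors' implementation: Gray Level Co-occurrence Matrix descriptors are computed from tumour ROIs with scikit-image and fed to a soft-voting ensemble in scikit-learn. The three base classifiers and the four texture properties are illustrative assumptions standing in for the paper's hypothesis-testing-based selection of features and candidate classifiers.

```python
# Illustrative sketch: GLCM radiomic features + a soft-voting ensemble.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def glcm_features(roi_8bit):
    """Texture descriptors for one 8-bit tumour ROI (2-D uint8 array)."""
    glcm = graycomatrix(roi_8bit, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.array([graycoprops(glcm, p).mean() for p in props])

def grade_classifier(rois, grades):
    """rois: list of tumour ROIs, grades: 0 = grade I, 1 = grade II-IV (illustrative labels)."""
    X = np.vstack([glcm_features(r) for r in rois])
    ensemble = VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("rf", RandomForestClassifier(n_estimators=200)),
                    ("svm", SVC(probability=True))],
        voting="soft")
    print("cross-validated accuracy:", cross_val_score(ensemble, X, grades, cv=5).mean())
    return ensemble.fit(X, grades)
```

In the paper, the candidate classifiers are chosen via statistical testing and calibration analysis; the fixed trio above simply stands in for that selection step.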
Affiliation(s)
- Luca Brunese: Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy
- Francesco Mercaldo: Institute for Informatics and Telematics, National Research Council of Italy (CNR), Pisa, Italy; Department of Biosciences and Territory, University of Molise, Pesche (IS), Italy
- Alfonso Reginelli: Department of Precision Medicine, University of Campania "Luigi Vanvitelli", Napoli, Italy
- Antonella Santone: Department of Biosciences and Territory, University of Molise, Pesche (IS), Italy
25
Stetter BJ, Krafft FC, Ringhof S, Stein T, Sell S. A Machine Learning and Wearable Sensor Based Approach to Estimate External Knee Flexion and Adduction Moments During Various Locomotion Tasks. Front Bioeng Biotechnol 2020; 8:9. [PMID: 32039192 PMCID: PMC6993119 DOI: 10.3389/fbioe.2020.00009] [Citation(s) in RCA: 42] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2019] [Accepted: 01/07/2020] [Indexed: 11/13/2022] Open
Abstract
Joint moment measurements represent an objective biomechanical parameter of knee joint load in knee osteoarthritis (KOA). Wearable sensors combined with machine learning techniques may enable assistive devices for KOA patients that improve disease treatment and minimize the risk of non-functional overreaching (e.g., pain). The purpose of this study was to develop an artificial neural network (ANN) that estimates external knee flexion moments (KFM) and external knee adduction moments (KAM) during various locomotion tasks, based on data obtained from two wearable sensors. Thirteen participants were instrumented with two inertial measurement units (IMUs) located on the right thigh and shank. Participants performed six different locomotion tasks consisting of linear motions and motions with a change of direction, while IMU signals as well as full-body kinematics and ground reaction forces were synchronously recorded. KFM and KAM were determined using a full-body biomechanical model. An ANN was trained to estimate the KFM and KAM time series using the IMU signals as input, and was evaluated with leave-one-subject-out cross-validation. Concordance between the ANN-estimated KFM and the reference data was categorized as strong for five tasks (walking straight, 90° walking turn, moderate running, 90° running turn and 45° cutting maneuver; r ≥ 0.69, rRMSE ≤ 23.1%) and as moderate for fast running (r = 0.65 ± 0.43, rRMSE = 25.5 ± 7.0%). For all locomotion tasks, KAM yielded lower concordance than KFM, ranging from weak (r ≤ 0.21, rRMSE ≥ 33.8%) in cutting and fast running to strong (r = 0.71 ± 0.26, rRMSE = 22.3 ± 8.3%) for walking straight. The smallest mean difference among classical discrete load metrics was seen for the KFM impulse (10.6 ± 47.0%). The results demonstrate the feasibility of using only two IMUs to estimate KFM and KAM, albeit to a limited extent. This methodological step facilitates further work aimed at improving estimation accuracy to provide valuable biofeedback systems for KOA patients; greater accuracy could be achieved with participant- or task-specific ANN models.
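The evaluation idea, leave-one-subject-out cross-validation of a regressor that maps IMU-derived features to a knee moment value, can be sketched as follows. A scikit-learn MLPRegressor stands in for the paper's ANN, and the extraction of features from the raw thigh and shank IMU signals is assumed to have produced X already.

```python
# Sketch of leave-one-subject-out evaluation for IMU-based knee moment estimation.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def loso_concordance(X, y, subject_ids):
    """X: IMU-derived features per sample, y: KFM or KAM value,
    subject_ids: one id per sample so no subject appears in both train and test."""
    correlations = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subject_ids):
        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000))
        model.fit(X[train_idx], y[train_idx])
        pred = model.predict(X[test_idx])
        correlations.append(np.corrcoef(pred, y[test_idx])[0, 1])  # per-subject r
    return float(np.mean(correlations))
```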
Affiliation(s)
- Bernd J Stetter: Institute of Sports and Sports Science, Karlsruhe Institute of Technology, Karlsruhe, Germany
- Frieder C Krafft: Institute of Sports and Sports Science, Karlsruhe Institute of Technology, Karlsruhe, Germany
- Steffen Ringhof: Institute of Sports and Sports Science, Karlsruhe Institute of Technology, Karlsruhe, Germany; Department of Sport and Sport Science, University of Freiburg, Freiburg, Germany
- Thorsten Stein: Institute of Sports and Sports Science, Karlsruhe Institute of Technology, Karlsruhe, Germany
- Stefan Sell: Institute of Sports and Sports Science, Karlsruhe Institute of Technology, Karlsruhe, Germany; Joint Center Black Forest, Hospital Neuenbuerg, Neuenbuerg, Germany
26
Diabetic retinopathy detection through deep learning techniques: A review. INFORMATICS IN MEDICINE UNLOCKED 2020. [DOI: 10.1016/j.imu.2020.100377] [Citation(s) in RCA: 79] [Impact Index Per Article: 19.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
27
Svenson P, Haralabopoulos G, Torres Torres M. Sepsis Deterioration Prediction Using Channelled Long Short-Term Memory Networks. Artif Intell Med 2020. [DOI: 10.1007/978-3-030-59137-3_32] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
28
Yan E, Song J, Liu C, Luan J, Hong W. Comparison of support vector machine, back propagation neural network and extreme learning machine for syndrome element differentiation. Artif Intell Rev 2019. [DOI: 10.1007/s10462-019-09738-z] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
29
Automated Segmentation Methods of Drusen to Diagnose Age-Related Macular Degeneration Screening in Retinal Images. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2018; 2018:6084798. [PMID: 29721037 PMCID: PMC5867666 DOI: 10.1155/2018/6084798] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/10/2017] [Revised: 01/13/2018] [Accepted: 02/06/2018] [Indexed: 11/18/2022]
Abstract
Existing drusen measurement is difficult to use in the clinic because visual inspection requires considerable time and effort. To resolve this problem, we propose an automatic drusen detection method to support the clinical diagnosis of age-related macular degeneration. First, we converted the fundus image to the green channel and extracted the ROI of the macular area based on the optic disk. Next, we detected the candidate regions using the difference image of the median filter within the ROI. We also segmented vessels and removed them from the image. Finally, we detected the drusen using Renyi's entropy threshold algorithm. We compared and statistically analyzed the manual and automatic detection results for 30 cases to verify validity. The average sensitivity was 93.37% (80.95%~100%) and the average DSC was 0.73 (0.3~0.98). In addition, the ICC was 0.984 (CI: 0.967~0.993, p < 0.01), showing the high reliability of the proposed automatic method. We expect automatic drusen detection to help clinicians improve diagnostic performance in detecting drusen on fundus images.
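A minimal sketch of the first steps of this pipeline (green channel, median-filter background estimate, difference image, thresholding) follows; Otsu's method is used here as a stand-in for the paper's Renyi-entropy threshold, and the macular ROI extraction and vessel removal steps are omitted.

```python
# Sketch of drusen candidate detection on a colour fundus photograph.
import cv2

def drusen_candidates(fundus_bgr, kernel_size=31):
    green = fundus_bgr[:, :, 1]                       # drusen contrast is strongest in green
    background = cv2.medianBlur(green, kernel_size)   # large median filter ~ background estimate
    diff = cv2.subtract(green, background)            # bright local deviations = candidates
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

# usage sketch: mask = drusen_candidates(cv2.imread("fundus.png"))
```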
30
Vellido A, Ribas V, Morales C, Ruiz Sanmartín A, Ruiz Rodríguez JC. Machine learning in critical care: state-of-the-art and a sepsis case study. Biomed Eng Online 2018; 17:135. [PMID: 30458795 PMCID: PMC6245501 DOI: 10.1186/s12938-018-0569-2] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Like other scientific fields, such as cosmology, high-energy physics, or even the life sciences, medicine and healthcare face the challenge of an extremely rapid transformation into data-driven sciences. This challenge entails the daunting task of extracting usable knowledge from these data using algorithmic methods. In the medical context this may, for instance, be realized through the design of medical decision support systems for diagnosis, prognosis and patient management. The intensive care unit (ICU), and by extension the whole area of critical care, is becoming one of the most data-driven clinical environments. RESULTS The increasing availability of complex and heterogeneous data at the point of patient attention in critical care environments makes the development of fresh approaches to data analysis almost compulsory. Computational Intelligence (CI) and Machine Learning (ML) methods can provide such approaches and have already shown their usefulness in addressing problems in this context. The current study has a dual goal: it is first a review of the state of the art on the use and application of such methods in the field of critical care, presented from the viewpoint of the different subfields of critical care as well as from the viewpoint of the different available ML and CI techniques. The second goal is to present a collection of results that illustrate the breadth of possibilities opened by ML and CI methods using a single problem, the investigation of septic shock at the ICU. CONCLUSION We have presented a structured state of the art that illustrates the broad-ranging ways in which ML and CI methods can make a difference in problems affecting the manifold areas of critical care. The potential of ML and CI has been illustrated in detail through an example concerning the sepsis pathology. The new definitions of sepsis and the relevance of using the systemic inflammatory response syndrome (SIRS) in its diagnosis have been considered. Conditional independence models have been used to address this problem, showing that SIRS depends on both organ dysfunction, measured through the Sequential Organ Failure Assessment (SOFA) score, and the ICU outcome, thus concluding that SIRS should still be considered in the study of the pathophysiology of sepsis. Current assessment of the risk of death at the ICU lacks specificity. ML and CI techniques are shown to improve this assessment using both indicators already in place and other clinical variables that are routinely measured. Kernel methods in particular are shown to provide the best performance balance while being amenable to representation through graphical models, which increases their interpretability and, with it, their likelihood of being accepted in medical practice.
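As an illustration of the kind of kernel method the review singles out, the sketch below scores ICU mortality risk with an RBF-kernel SVM; the variables named in the docstring are illustrative assumptions, not the study's feature set.

```python
# Illustrative sketch: kernel SVM for ICU mortality risk from routine variables.
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def mortality_model(X, died):
    """X: rows of routinely measured variables (e.g. SOFA score, lactate, age),
    died: 0/1 ICU outcome."""
    model = make_pipeline(StandardScaler(),
                          SVC(kernel="rbf", class_weight="balanced", probability=True))
    auc = cross_val_score(model, X, died, cv=5, scoring="roc_auc").mean()
    print(f"cross-validated AUC: {auc:.2f}")
    return model.fit(X, died)
```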
Affiliation(s)
- Alfredo Vellido: Intelligent Data Science and Artificial Intelligence (IDEAI) Research Center, Universitat Politècnica de Catalunya, C. Jordi Girona, 1-3, 08034, Barcelona, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona, Spain
- Vicent Ribas: Data Analytics in Medicine, EureCat, Avinguda Diagonal, 177, 08018, Barcelona, Spain
- Carles Morales: Intelligent Data Science and Artificial Intelligence (IDEAI) Research Center, Universitat Politècnica de Catalunya, C. Jordi Girona, 1-3, 08034, Barcelona, Spain
- Adolfo Ruiz Sanmartín: Critical Care Department, Vall d'Hebron University Hospital, Shock, Organ Dysfunction and Resuscitation (SODIR) Research Group, Vall d'Hebron Research Institute (VHIR), Universitat Autònoma de Barcelona, 08035, Barcelona, Spain
- Juan Carlos Ruiz Rodríguez: Critical Care Department, Vall d'Hebron University Hospital, Shock, Organ Dysfunction and Resuscitation (SODIR) Research Group, Vall d'Hebron Research Institute (VHIR), Universitat Autònoma de Barcelona, 08035, Barcelona, Spain
31
Adaptive network based fuzzy inference system (ANFIS) training approaches: a comprehensive survey. Artif Intell Rev 2018. [DOI: 10.1007/s10462-017-9610-2] [Citation(s) in RCA: 118] [Impact Index Per Article: 19.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
32
Akkus Z, Galimzianova A, Hoogi A, Rubin DL, Erickson BJ. Deep Learning for Brain MRI Segmentation: State of the Art and Future Directions. J Digit Imaging 2017; 30:449-459. [PMID: 28577131 PMCID: PMC5537095 DOI: 10.1007/s10278-017-9983-4] [Citation(s) in RCA: 451] [Impact Index Per Article: 64.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
Abstract
Quantitative analysis of brain MRI is routine for many neurological diseases and conditions and relies on accurate segmentation of structures of interest. Deep learning-based segmentation approaches for brain MRI are gaining interest due to their self-learning and generalization ability over large amounts of data. As the deep learning architectures are becoming more mature, they gradually outperform previous state-of-the-art classical machine learning algorithms. This review aims to provide an overview of current deep learning-based segmentation approaches for quantitative brain MRI. First we review the current deep learning architectures used for segmentation of anatomical brain structures and brain lesions. Next, the performance, speed, and properties of deep learning approaches are summarized and discussed. Finally, we provide a critical assessment of the current state and identify likely future developments and trends.
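For orientation, the following is a minimal, hypothetical sketch of the fully convolutional encoder-decoder family this review surveys, producing a per-pixel label map; real brain MRI segmentation systems use far deeper U-Net-style networks, usually with skip connections and often in 3-D.

```python
# Hypothetical illustration only: a tiny encoder-decoder for per-pixel labelling.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, n_labels=4):                            # e.g. background/GM/WM/CSF
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),  # undo the pooling
            nn.Conv2d(32, n_labels, 1))                          # per-pixel class scores

    def forward(self, x):                                      # x: (batch, 1, H, W)
        return self.decode(self.encode(x))

# usage sketch: label_map = TinySegNet()(torch.randn(2, 1, 128, 128)).argmax(dim=1)
```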
Affiliation(s)
- Zeynettin Akkus: Radiology Informatics Lab, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
- Alfiia Galimzianova: Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Assaf Hoogi: Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Daniel L Rubin: Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Bradley J Erickson: Radiology Informatics Lab, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA