1
Interpretable Machine Learning Techniques in ECG-Based Heart Disease Classification: A Systematic Review. Diagnostics (Basel) 2022; 13:diagnostics13010111. [PMID: 36611403] [PMCID: PMC9818170] [DOI: 10.3390/diagnostics13010111]
Abstract
Heart disease is one of the leading causes of mortality worldwide. Among the different heart diagnosis techniques, the electrocardiogram (ECG) is the least expensive non-invasive procedure. However, several challenges remain: the scarcity of medical experts, the complexity of ECG interpretation, the similar manifestations of different heart diseases in ECG signals, and heart disease comorbidity. Machine learning algorithms are viable alternatives to the traditional diagnosis of heart disease from ECG signals. However, the black-box nature of complex machine learning algorithms and the difficulty of explaining a model's outcomes prevent medical practitioners from placing confidence in machine learning models. This observation paves the way for interpretable machine learning (IML) models as diagnostic tools that can build a physician's trust and provide evidence-based diagnoses. Therefore, in this systematic literature review, we studied and analyzed the research landscape of interpretable machine learning techniques, focusing on heart disease diagnosis from ECG signals. The contributions of our work are manifold: first, we present an elaborate discussion of interpretable machine learning techniques; in addition, we identify and characterize ECG recording datasets that are readily available for machine learning tasks; furthermore, we survey the progress that has been achieved in ECG signal interpretation using IML techniques; finally, we discuss the limitations and challenges of IML techniques in interpreting ECG signals.
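Interpretable models of the kind this review surveys can be as simple as a depth-limited decision tree whose learned rules are printed verbatim as if-then statements a clinician can audit. A minimal sketch on synthetic data; the feature names (`heart_rate`, `qrs_duration_ms`, `st_elevation_mv`) are illustrative assumptions, not from the review:

```python
# Interpretable classifier sketch: a shallow decision tree on tabular
# ECG-derived features, with its decision rules exported as plain text.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
feature_names = ["heart_rate", "qrs_duration_ms", "st_elevation_mv"]
X = rng.normal(size=(200, 3))
# Toy label: depends on a single threshold on the first feature.
y = (X[:, 0] > 0.5).astype(int)

# A depth-limited tree stays small enough to read as explicit rules.
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
rules = export_text(clf, feature_names=feature_names)
print(rules)
```

The printed rule set is the explanation: unlike a black-box model, every prediction can be traced to a handful of feature thresholds.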
2
Streeb D, Metz Y, Schlegel U, Schneider B, El-Assady M, Neth H, Chen M, Keim DA. Task-Based Visual Interactive Modeling: Decision Trees and Rule-Based Classifiers. IEEE Transactions on Visualization and Computer Graphics 2022; 28:3307-3323. [PMID: 33439846] [DOI: 10.1109/tvcg.2020.3045560]
Abstract
Visual analytics enables the coupling of machine learning models and humans in a tightly integrated workflow, addressing various analysis tasks. Each task poses distinct demands on analysts and decision-makers. In this survey, we focus on one canonical technique for rule-based classification, namely decision tree classifiers. We provide an overview of available visualizations for decision trees, focusing on how the visualizations differ with respect to 16 tasks. Further, we investigate the types of visual designs employed and the quality measures presented. We find that (i) interactive visual analytics systems for classifier development offer a variety of visual designs, (ii) utilization tasks are sparsely covered, (iii) beyond classifier development, node-link diagrams are omnipresent, and (iv) even systems designed for machine learning experts rarely feature visual representations of quality measures other than accuracy. In conclusion, we see potential in integrating algorithmic techniques, mathematical quality measures, and tailored interactive visualizations to enable human experts to utilize their knowledge more effectively.
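Finding (iv), that tools rarely surface quality measures beyond accuracy, is easy to illustrate with standard libraries: a node-link rendering of a decision tree can be paired with several complementary measures. A sketch using scikit-learn on the iris data, not any system from the survey:

```python
# Render a decision tree as a node-link diagram and report quality
# measures beyond plain accuracy.
import matplotlib
matplotlib.use("Agg")  # headless backend so no display is needed
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.metrics import accuracy_score, balanced_accuracy_score, f1_score

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
pred = clf.predict(X)

fig, ax = plt.subplots(figsize=(10, 6))
plot_tree(clf, filled=True, ax=ax)   # the classic node-link view
fig.savefig("tree.png")

print("accuracy:", accuracy_score(y, pred))
print("balanced accuracy:", balanced_accuracy_score(y, pred))
print("macro F1:", f1_score(y, pred, average="macro"))
```

Showing balanced accuracy and macro F1 alongside the diagram addresses exactly the gap the survey identifies in accuracy-only tooling.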
3
Ordonez A, Caicedo OM, Villota W, Rodriguez-Vivas A, da Fonseca NLS. Model-Based Reinforcement Learning with Automated Planning for Network Management. Sensors (Basel) 2022; 22:6301. [PMID: 36016062] [PMCID: PMC9416718] [DOI: 10.3390/s22166301]
Abstract
Reinforcement Learning (RL) comes with the promise of automating network management. However, due to its trial-and-error learning approach, model-free RL is not applicable in some network management scenarios; model-based RL (MBRL) can overcome this limitation. This paper explores the potential of using Automated Planning (AP) to realize MBRL in the functional areas of network management. In addition, we compare several strategies for integrating AP and RL, and describe an architecture that realizes a cognitive management control loop by combining them. Our experiments, evaluated in a simulated environment, evidence that the proposed combination improves on model-free RL but shows lower performance than Deep RL in terms of reward and convergence time. Nonetheless, AP-based MBRL is useful when the prediction model needs to be understood and when the high computational cost of Deep RL cannot be afforded.
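The paper's AP-based method is specific to network management, but the underlying model-based idea can be illustrated with Dyna-Q, a classic MBRL algorithm (an illustrative substitute, not the paper's approach): each real transition updates both a value function and a learned transition model, and the model then replays simulated transitions as extra planning updates. A toy sketch on an invented five-state chain environment:

```python
# Dyna-Q sketch: Q-learning augmented with planning updates drawn from a
# learned transition model, on a deterministic 5-state chain.
import random

N, GOAL = 5, 4                       # states 0..4; reward 1 on reaching state 4
def step(s, a):                      # a: 0 = left, 1 = right
    s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

random.seed(0)
Q = {(s, a): 0.0 for s in range(N) for a in (0, 1)}
model = {}                           # learned model: (s, a) -> (reward, next state)
alpha, gamma, eps, plan_steps = 0.5, 0.9, 0.5, 20

for _ in range(50):                  # episodes
    s = 0
    for _ in range(200):             # step budget per episode
        if s == GOAL:
            break
        a = random.choice((0, 1)) if random.random() < eps \
            else max((0, 1), key=lambda a: Q[(s, a)])
        s2, r = step(s, a)
        # Direct RL update from the real transition.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
        model[(s, a)] = (r, s2)      # record the observed transition
        for _ in range(plan_steps):  # planning: replay simulated transitions
            ps, pa = random.choice(list(model))
            pr, ps2 = model[(ps, pa)]
            Q[(ps, pa)] += alpha * (pr + gamma * max(Q[(ps2, 0)], Q[(ps2, 1)]) - Q[(ps, pa)])
        s = s2

greedy = [max((0, 1), key=lambda a: Q[(s, a)]) for s in range(N - 1)]
print(greedy)   # greedy action per non-goal state (1 = move toward the goal)
```

The explicit `model` dictionary is what makes the approach model-based, and, as the paper argues for AP, it is a structure a human can inspect and understand.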
Affiliation(s)
- William Villota
- Institute of Computing, University of Campinas, Campinas 13083-852, Brazil
4
Xu J, Meng Y, Qiu K, Topatana W, Li S, Wei C, Chen T, Chen M, Ding Z, Niu G. Applications of Artificial Intelligence Based on Medical Imaging in Glioma: Current State and Future Challenges. Front Oncol 2022; 12:892056. [PMID: 35965542] [PMCID: PMC9363668] [DOI: 10.3389/fonc.2022.892056]
Abstract
Glioma is one of the most lethal primary brain tumors and is well known for being difficult to diagnose and manage. Medical imaging techniques such as magnetic resonance imaging (MRI), positron emission tomography (PET), and spectral imaging can efficiently aid physicians in diagnosing, treating, and evaluating patients with glioma. With the growing volume of clinical records and digital images, the application of artificial intelligence (AI) based on medical imaging has further reduced the burden on physicians treating gliomas. This review classifies the AI technologies and procedures used in medical imaging analysis. Additionally, we discuss the applications of AI in glioma, including tumor segmentation and classification, prediction of genetic markers, and prediction of treatment response and prognosis, using MRI, PET, and spectral imaging. Despite the benefits of AI in clinical applications, several issues, such as data management, model incomprehensibility, safety, evaluation of clinical efficacy, and ethical or legal considerations, remain to be solved. In the future, doctors and researchers should collaborate to solve these issues, with a particular emphasis on interdisciplinary teamwork.
Affiliation(s)
- Jiaona Xu
- Hangzhou First People’s Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Yuting Meng
- Hangzhou First People’s Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Kefan Qiu
- Hangzhou First People’s Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Win Topatana
- Department of General Surgery, Sir Run-Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Shijie Li
- Department of General Surgery, Sir Run-Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Chao Wei
- Department of Neurology, Affiliated Ningbo First Hospital, Ningbo, China
- Tianwen Chen
- Department of Neurology, Affiliated Hangzhou First People’s Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Mingyu Chen
- Department of General Surgery, Sir Run-Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, China
- *Correspondence: Mingyu Chen; Zhongxiang Ding; Guozhong Niu
- Zhongxiang Ding
- Department of Radiology, Affiliated Hangzhou First People’s Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Guozhong Niu
- Department of Neurology, Affiliated Hangzhou First People’s Hospital, Zhejiang University School of Medicine, Hangzhou, China
5
Jia S, Li Z, Chen N, Zhang J. Towards Visual Explainable Active Learning for Zero-Shot Classification. IEEE Transactions on Visualization and Computer Graphics 2022; 28:791-801. [PMID: 34587036] [DOI: 10.1109/tvcg.2021.3114793]
Abstract
Zero-shot classification is a promising paradigm for problems in which the training classes and test classes are disjoint. Achieving it usually requires experts to externalize their domain knowledge by manually specifying a class-attribute matrix that defines which classes have which attributes. Designing a suitable class-attribute matrix is the key to the subsequent procedure, but this design process is tedious, trial-and-error, and unguided. This paper proposes a visual explainable active learning approach, designed and implemented as a system called semantic navigator, to address these problems. The approach promotes human-AI teaming with four actions (ask, explain, recommend, respond) in each interaction loop. The machine asks contrastive questions to guide humans in thinking through attributes. A novel visualization called the semantic map explains the current status of the machine, so analysts can better understand why the machine misclassifies objects. Moreover, the machine recommends labels of classes for each attribute to ease the labeling burden. Finally, humans can steer the model by modifying the labels interactively, and the machine adjusts its recommendations. The visual explainable active learning approach improves the efficiency of building zero-shot classification models interactively, compared with an unguided method. We justify our results with user studies on standard zero-shot classification benchmarks.
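The class-attribute matrix at the heart of this workflow can be sketched in a few lines: each unseen class is a row of attribute values, and a test sample is assigned to the class whose signature best matches its predicted attributes. The class names, attributes, and scores below are invented for illustration:

```python
# Attribute-based zero-shot classification sketch: match predicted
# attribute scores against a class-attribute matrix.
import numpy as np

# Rows: unseen classes; columns: binary attributes
# (has_stripes, has_mane, aquatic) -- all names are illustrative.
classes = ["zebra", "lion", "dolphin"]
M = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1]], dtype=float)

# In a full system these scores would come from attribute classifiers
# trained on *seen* classes; here we hand-craft a noisy prediction.
attr_scores = np.array([0.9, 0.2, 0.1])   # "mostly striped" test sample

# Predict the unseen class whose attribute signature is nearest.
pred = classes[int(np.argmin(np.linalg.norm(M - attr_scores, axis=1)))]
print(pred)  # → zebra
```

The matrix `M` is exactly the artifact the semantic navigator helps experts design: every entry is a human-auditable claim about which class has which attribute.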
6
Knittel J, Lalama A, Koch S, Ertl T. Visual Neural Decomposition to Explain Multivariate Data Sets. IEEE Transactions on Visualization and Computer Graphics 2021; 27:1374-1384. [PMID: 33048724] [DOI: 10.1109/tvcg.2020.3030420]
Abstract
Investigating relationships between variables in multi-dimensional data sets is a common task for data analysts and engineers. More specifically, it is often valuable to understand which ranges of which input variables lead to particular values of a given target variable. Unfortunately, with an increasing number of independent variables, this process may become cumbersome and time-consuming due to the many possible combinations that have to be explored. In this paper, we propose a novel approach to visualize correlations between input variables and a target output variable that scales to hundreds of variables. We developed a visual model based on neural networks that can be explored in a guided way to help analysts find and understand such correlations. First, we train a neural network to predict the target from the input variables. Then, we visualize the inner workings of the resulting model to help understand relations within the data set. We further introduce a new regularization term for the backpropagation algorithm that encourages the neural network to learn representations that are easier to interpret visually. We apply our method to artificial and real-world data sets to show its utility.
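The paper's regularization term is specific to its neural visual model, but the general idea, adding a penalty so the fitted model exposes which inputs drive the target, can be shown with a much simpler stand-in: an L1 (lasso) penalty on a linear model (an illustrative substitute, not the paper's method, on invented data):

```python
# Sparsity-encouraging regularization sketch: with an L1 penalty, the
# weights of irrelevant inputs collapse to zero, so "which inputs
# matter" is visible at a glance.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                   # ten candidate inputs
y = 3.0 * X[:, 2] + 0.1 * rng.normal(size=500)   # only input 2 matters

model = Lasso(alpha=0.1).fit(X, y)
relevant = np.flatnonzero(np.abs(model.coef_) > 0.05)
print(relevant)  # indices of the inputs the regularized model kept
```

The same principle scales up in the paper: a penalty during backpropagation steers the network toward representations that are easier to inspect visually, instead of a plain linear weight vector.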