1
Zehra A, Naik PA, Hasan A, Farman M, Nisar KS, Chaudhry F, Huang Z. Physiological and chaos effect on dynamics of neurological disorder with memory effect of fractional operator: A mathematical study. Comput Methods Programs Biomed 2024; 250:108190. [PMID: 38688140] [DOI: 10.1016/j.cmpb.2024.108190] [Received: 03/31/2024] [Revised: 04/16/2024] [Accepted: 04/18/2024] [Indexed: 05/02/2024]
Abstract
BACKGROUND AND OBJECTIVE To study a dynamical system, it is necessary to formulate a mathematical model that captures the dynamics of diseases spreading worldwide. The main objective of our work is to examine neurological disorders through early detection and treatment, taking asymptomatic cases into account. Multiple sclerosis (MS) is a prevalent neurological condition that affects the central nervous system (CNS) and can produce lesions that spread across time and space. It is widely acknowledged that MS is an unpredictable disease that can cause lifelong damage to the brain, spinal cord, and optic nerves. The use of integral operators and fractional-order (FO) derivatives in mathematical models has become popular in epidemiology. METHOD The model consists of compartments of healthy brain cells, infected brain cells, and brain cells damaged by immunological or viral effectors, formulated with a novel fractal-fractional operator involving the Mittag-Leffler function. Stability, positivity, boundedness, existence, and uniqueness are treated for the proposed model with the novel fractional operator. RESULTS Local and global stability of the model are verified with a Lyapunov function. Chaos control employs a linear feedback regulation approach to stabilize the system at its equilibrium points so that solutions remain bounded in the feasible domain. Existence and uniqueness of solutions to the suggested model are ensured via Banach's fixed point theorem and the Leray-Schauder nonlinear alternative. For numerical simulation, a two-step Lagrange interpolation method is applied at different fractional-order values, and the outcomes are compared with those obtained using the well-known FFM method.
CONCLUSION Overall, by offering a mathematical model that can be used to replicate and examine the behavior of disease models, this research advances our understanding of the course and recurrence of disease. Such an investigation is useful for studying the spread of disease and for developing control strategies from the outcomes presented.
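The Mittag-Leffler kernel underlying the model's fractional operator is not written out in the abstract; a common form in this literature (given here as an assumption, not necessarily the paper's exact operator) is the Atangana-Baleanu-Caputo derivative:

```latex
% One-parameter Mittag-Leffler function
E_{\alpha}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + 1)},
\qquad 0 < \alpha \le 1.

% Atangana-Baleanu-Caputo (ABC) derivative with Mittag-Leffler kernel,
% where AB(\alpha) is a normalization function with AB(0) = AB(1) = 1
{}^{ABC}_{\;\;0}D_{t}^{\alpha} f(t)
  = \frac{AB(\alpha)}{1 - \alpha}
    \int_{0}^{t} f'(\tau)\,
    E_{\alpha}\!\left[-\frac{\alpha\,(t - \tau)^{\alpha}}{1 - \alpha}\right]
    \mathrm{d}\tau .
```

The nonlocal kernel is what gives such models their "memory effect": the derivative at time t weighs the entire history of f.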
Affiliation(s)
- Anum Zehra
- Department of Mathematics, The Women University Multan, Multan, Pakistan
- Parvaiz Ahmad Naik
- Department of Mathematics and Computer Science, Youjiang Medical University for Nationalities, Baise 533000, Guangxi, China.
- Ali Hasan
- Department of Mathematics and Statistics, The University of Lahore, 54100 Lahore, Pakistan
- Muhammad Farman
- Faculty of Arts and Sciences, Department of Mathematics, Near East University, Northern Cyprus, Turkey; Department of Computer Science and Mathematics, Lebanese American University, 1102-2801, Beirut, Lebanon
- Kottakkaran Sooppy Nisar
- Department of Mathematics, College of Science and Humanities, Prince Sattam bin Abdulaziz University, Al Kharj 11942, Saudi Arabia
- Faryal Chaudhry
- Department of Mathematics and Statistics, The University of Lahore, 54100 Lahore, Pakistan
- Zhengxin Huang
- Department of Mathematics and Computer Science, Youjiang Medical University for Nationalities, Baise 533000, Guangxi, China
2
Pham MD, D’Angiulli A, Dehnavi MM, Chhabra R. From Brain Models to Robotic Embodied Cognition: How Does Biological Plausibility Inform Neuromorphic Systems? Brain Sci 2023; 13:1316. [PMID: 37759917] [PMCID: PMC10526461] [DOI: 10.3390/brainsci13091316] [Received: 07/05/2023] [Revised: 09/05/2023] [Accepted: 09/07/2023] [Indexed: 09/29/2023]
Abstract
We examine the challenging "marriage" between computational efficiency and biological plausibility, a crucial node in the domain of spiking neural networks at the intersection of neuroscience, artificial intelligence, and robotics. Through a transdisciplinary review, we retrace the historical and most recent constraining influences that these parallel fields have exerted on descriptive analysis of the brain, construction of predictive brain models, and ultimately, the embodiment of neural networks in an enacted robotic agent. We study models of Spiking Neural Networks (SNN) as the central means enabling autonomous and intelligent behaviors in biological systems. We then provide a critical comparison of the available hardware and software to emulate SNNs for investigating biological entities and their application on artificial systems. Neuromorphics is identified as a promising tool to embody SNNs in real physical systems, and different neuromorphic chips are compared. The concepts required for describing SNNs are dissected and contextualized in the new no man's land between cognitive neuroscience and artificial intelligence. Although there are recent reviews on the application of neuromorphic computing in various modules of the guidance, navigation, and control of robotic systems, the focus of this paper is more on closing the cognition loop in SNN-embodied robotics. We argue that biologically viable spiking neuronal models used for electroencephalogram signals are excellent candidates for furthering our knowledge of the explainability of SNNs. We complete our survey by reviewing different robotic modules that can benefit from neuromorphic hardware, e.g., perception (with a focus on vision), localization, and cognition.
We conclude that the tradeoff between symbolic computational power and biological plausibility of hardware can be best addressed by neuromorphics, whose presence in neurorobotics provides an accountable empirical testbench for investigating synthetic and natural embodied cognition. We argue this is where both theoretical and empirical future work should converge in multidisciplinary efforts involving neuroscience, artificial intelligence, and robotics.
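As a concrete illustration of the spiking neuron models this survey centers on, here is a minimal leaky integrate-and-fire (LIF) neuron in Python; the parameter values are illustrative assumptions, not taken from the review:

```python
import numpy as np

def lif_spikes(input_current, dt=1.0, tau=20.0, v_rest=0.0,
               v_reset=0.0, v_thresh=1.0):
    """Leaky integrate-and-fire neuron; returns indices of spike times.

    Forward-Euler integration of tau * dv/dt = -(v - v_rest) + I(t),
    with a hard reset to v_reset whenever v crosses v_thresh.
    """
    v = v_rest
    spikes = []
    for i, current in enumerate(input_current):
        v += dt / tau * (-(v - v_rest) + current)
        if v >= v_thresh:
            spikes.append(i)
            v = v_reset
    return spikes

# A constant supra-threshold drive produces regular, periodic spiking.
spikes = lif_spikes(np.full(200, 1.5))
```

Information is carried by the timing of these discrete events rather than by continuous activations, which is precisely the property neuromorphic chips exploit.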
Affiliation(s)
- Martin Do Pham
- Department of Computer Science, University of Toronto, Toronto, ON M5S 1A1, Canada; (M.D.P.); (M.M.D.)
- Amedeo D’Angiulli
- Department of Neuroscience, Carleton University, Ottawa, ON K1S 5B6, Canada
- Maryam Mehri Dehnavi
- Department of Computer Science, University of Toronto, Toronto, ON M5S 1A1, Canada; (M.D.P.); (M.M.D.)
- Robin Chhabra
- Department of Mechanical and Aerospace Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada
3
Fu Q. Motion perception based on ON/OFF channels: A survey. Neural Netw 2023; 165:1-18. [PMID: 37263088] [DOI: 10.1016/j.neunet.2023.05.031] [Received: 11/05/2022] [Revised: 04/02/2023] [Accepted: 05/17/2023] [Indexed: 06/03/2023]
Abstract
Motion perception is an essential ability for animals and artificially intelligent systems to interact effectively and safely with surrounding objects and environments. Biological visual systems, which have evolved over hundreds of millions of years, are efficient and robust at motion perception, whereas artificial vision systems are far from such capability. This paper argues that the gap can be significantly reduced by formulating ON/OFF channels in motion perception models that encode luminance increment (ON) and decrement (OFF) responses within the receptive field separately. Such a signal-bifurcating structure has been found in the neural systems of many animal species, indicating that early motion signals are split and processed in segregated pathways. However, the corresponding biological substrates and the necessity for artificial vision systems have never been elucidated together, leaving concerns about the uniqueness and advantages of ON/OFF channels for building dynamic vision systems that address real-world challenges. This paper highlights the importance of ON/OFF channels in motion perception by surveying current progress across neuroscience and computational modelling works with applications. Compared to related literature, this paper for the first time provides insights into implementing different selectivity to directional motion of looming, translating, and small-sized target movement based on ON/OFF channels, in keeping with the soundness and robustness of biological principles. Existing challenges and future trends of this bio-plausible computational structure for visual perception are finally discussed in connection with machine learning hotspots and advanced vision sensors such as event-driven cameras.
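The ON/OFF bifurcation described above reduces, in its simplest form, to half-wave rectification of the temporal luminance change; a minimal sketch (the toy signal is an assumption for illustration):

```python
import numpy as np

def on_off_split(luminance):
    """Split a luminance signal into ON (increment) and OFF (decrement)
    channels by half-wave rectifying the frame-to-frame change."""
    change = np.diff(luminance, axis=0)
    on = np.maximum(change, 0.0)     # responds only to brightening
    off = np.maximum(-change, 0.0)   # responds only to darkening
    return on, off

# Toy per-pixel luminance trace over four frames.
frames = np.array([0.2, 0.5, 0.4, 0.9])
on, off = on_off_split(frames)
```

Each channel carries a non-negative signal, so downstream motion detectors can process brightening and darkening edges in segregated pathways, mirroring the biological split.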
Affiliation(s)
- Qinbing Fu
- Machine Life and Intelligence Research Centre, School of Mathematics and Information Science, Guangzhou University, Guangzhou, 510006, China.
4
Ramdya P, Ijspeert AJ. The neuromechanics of animal locomotion: From biology to robotics and back. Sci Robot 2023; 8:eadg0279. [PMID: 37256966] [DOI: 10.1126/scirobotics.adg0279] [Received: 12/12/2022] [Accepted: 05/05/2023] [Indexed: 06/02/2023]
Abstract
Robotics and neuroscience are sister disciplines that both aim to understand how agile, efficient, and robust locomotion can be achieved in autonomous agents. Robotics has already benefitted from neuromechanical principles discovered by investigating animals. These include the use of high-level commands to control low-level central pattern generator-like controllers, which, in turn, are informed by sensory feedback. Reciprocally, neuroscience has benefited from tools and intuitions in robotics to reveal how embodiment, physical interactions with the environment, and sensory feedback help sculpt animal behavior. We illustrate and discuss exemplar studies of this dialog between robotics and neuroscience. We also reveal how the increasing biorealism of simulations and robots is driving these two disciplines together, forging an integrative science of autonomous behavioral control with many exciting future opportunities.
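The central pattern generator-like controllers mentioned above are often abstracted as chains of coupled phase oscillators; a minimal sketch (all parameter values are illustrative assumptions, not from this paper):

```python
import numpy as np

def cpg_phases(n_joints=4, steps=2000, dt=0.01, freq=1.0,
               coupling=5.0, phase_lag=np.pi / 2):
    """Chain of coupled phase oscillators, a common CPG abstraction.

    Each oscillator tracks its neighbours with a fixed phase lag,
    producing a travelling wave along the chain (e.g. a swimming gait).
    """
    rng = np.random.default_rng(0)
    phases = rng.uniform(0.0, 0.1, n_joints)  # small random initial offsets
    omega = 2.0 * np.pi * freq
    for _ in range(steps):
        dphi = np.full(n_joints, omega)
        for i in range(n_joints):
            if i > 0:
                dphi[i] += coupling * np.sin(phases[i - 1] - phases[i] - phase_lag)
            if i < n_joints - 1:
                dphi[i] += coupling * np.sin(phases[i + 1] - phases[i] + phase_lag)
        phases = phases + dt * dphi  # forward-Euler step
    return phases

phases = cpg_phases()
lags = np.diff(phases)  # settled phase differences between adjacent joints
```

From arbitrary initial conditions the chain self-organizes so that adjacent joints settle at the prescribed lag; high-level commands then only need to modulate frequency, amplitude, or lag, rather than dictating each joint trajectory.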
Affiliation(s)
- Pavan Ramdya
- Neuroengineering Laboratory, Brain Mind Institute and Institute of Bioengineering, EPFL, Lausanne, Switzerland
- Auke Jan Ijspeert
- Biorobotics Laboratory, Institute of Bioengineering, EPFL, Lausanne, Switzerland
5
Khan MS, Olds JL. When neuro-robots go wrong: A review. Front Neurorobot 2023; 17:1112839. [PMID: 36819005] [PMCID: PMC9935594] [DOI: 10.3389/fnbot.2023.1112839] [Received: 11/30/2022] [Accepted: 01/19/2023] [Indexed: 02/05/2023]
Abstract
Neuro-robots are a class of autonomous machines that, in their architecture, mimic aspects of the human brain and cognition. As such, they represent unique artifacts created by humans based on human understanding of healthy human brains. The European Union's Convention on Roboethics 2025 states that the design of all robots (including neuro-robots) must include provisions for the complete traceability of the robots' actions, analogous to an aircraft's flight data recorder. At the same time, one can anticipate rising instances of neuro-robotic failure, as these machines operate on imperfect data in real environments and the underlying AI has yet to achieve explainability. This paper reviews the trajectory of the technology used in neuro-robots and the accompanying failures, which demand explanation. Drawing on existing explainable AI research, we argue that the limits of explainability in AI likewise limit explainability in neuro-robots. To make such robots more explainable, we suggest potential pathways for future research.
6
Godin-Dubois K, Cussat-Blanc S, Duthen Y. Explaining the Neuroevolution of Fighting Creatures Through Virtual fMRI. Artif Life 2023; 29:66-93. [PMID: 36173656] [DOI: 10.1162/artl_a_00389] [Indexed: 06/16/2023]
Abstract
While interest in artificial neural networks (ANNs) has been renewed by the ubiquitous use of deep learning to solve high-dimensional problems, we are still far from general artificial intelligence. In this article, we address the problem of emergent cognitive capabilities and, more crucially, of their detection, by relying on co-evolving creatures with mutable morphology and neural structure. The former is implemented via both static and mobile structures whose shapes are controlled by cubic splines. The latter uses ESHyperNEAT to discover not only appropriate combinations of connections and weights but also to extrapolate hidden neuron distribution. The creatures integrate low-level perceptions (touch/pain proprioceptors, retina-based vision, frequency-based hearing) to inform their actions. By discovering a functional mapping between individual neurons and specific stimuli, we extract a high-level module-based abstraction of a creature's brain. This drastically simplifies the discovery of relationships between naturally occurring events and their neural implementation. Applying this methodology to creatures resulting from solitary and tag-team co-evolution showed remarkable dynamics such as range-finding and structured communication. Such discovery was made possible by the abstraction provided by the modular ANN which allowed groups of neurons to be viewed as functionally enclosed entities.
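The functional mapping between individual neurons and specific stimuli described above can be sketched as a simple correlation analysis; the threshold, the synthetic event traces, and the three-neuron "brain" below are illustrative assumptions, not the paper's actual procedure:

```python
import numpy as np

def stimulus_modules(activations, stimuli, threshold=0.8):
    """Assign each neuron to the stimulus its activity correlates with
    most strongly; neurons sharing a stimulus form one functional module.

    activations: array (timesteps, n_neurons); stimuli: {name: trace}.
    """
    modules = {name: [] for name in stimuli}
    for j in range(activations.shape[1]):
        best, best_r = None, threshold
        for name, trace in stimuli.items():
            r = abs(np.corrcoef(activations[:, j], trace)[0, 1])
            if r > best_r:
                best, best_r = name, r
        if best is not None:
            modules[best].append(j)
    return modules

rng = np.random.default_rng(1)
touch = (rng.random(500) < 0.1).astype(float)   # sparse touch events
sound = (rng.random(500) < 0.1).astype(float)   # sparse sound events
acts = np.column_stack([
    touch + 0.05 * rng.standard_normal(500),    # a "touch neuron"
    sound + 0.05 * rng.standard_normal(500),    # a "sound neuron"
    rng.standard_normal(500),                   # an unrelated neuron
])
mods = stimulus_modules(acts, {"touch": touch, "sound": sound})
```

Grouping neurons by their dominant stimulus yields the module-based abstraction: each module can then be treated as a functionally enclosed entity when interpreting behavior.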
Affiliation(s)
- Sylvain Cussat-Blanc
- CNRS
- University of Toulouse, IRIT
- Artificial and Natural Intelligence Toulouse Institute
7
Linking global top-down views to first-person views in the brain. Proc Natl Acad Sci U S A 2022; 119:e2202024119. [PMID: 36322732] [PMCID: PMC9659407] [DOI: 10.1073/pnas.2202024119] [Indexed: 01/25/2023]
Abstract
Humans and other animals have a remarkable capacity to translate their position from one spatial frame of reference to another. The ability to seamlessly move between top-down and first-person views is important for navigation, memory formation, and other cognitive tasks. Evidence suggests that the medial temporal lobe and other cortical regions contribute to this function. To understand how a neural system might carry out these computations, we used variational autoencoders (VAEs) to reconstruct the first-person view from the top-down view of a robot simulation, and vice versa. Many latent variables in the VAEs had similar responses to those seen in neuron recordings, including location-specific activity, head direction tuning, and encoding of distance to local objects. Place-specific responses were prominent when reconstructing a first-person view from a top-down view, but head direction-specific responses were prominent when reconstructing a top-down view from a first-person view. In both cases, the model could recover from perturbations without retraining, but rather through remapping. These results could advance our understanding of how brain regions support viewpoint linkages and transformations.
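The geometric core of the viewpoint translation the VAEs learn implicitly can be written in closed form; this sketch (the 2-D setting and function names are assumptions for illustration, not the paper's model) rotates top-down world coordinates into an egocentric frame:

```python
import numpy as np

def to_egocentric(landmarks, position, head_direction):
    """Map top-down (world-frame) 2-D points into a first-person frame.

    Translate the agent to the origin, then rotate by -head_direction so
    that the agent's facing direction becomes the egocentric +x axis.
    """
    c, s = np.cos(-head_direction), np.sin(-head_direction)
    rot = np.array([[c, -s], [s, c]])
    return (np.asarray(landmarks) - np.asarray(position)) @ rot.T

# Agent at (1, 1) facing +y (90 degrees): a landmark at (1, 2), one unit
# ahead, should land on the egocentric +x axis ("straight ahead").
ego = to_egocentric([[1.0, 2.0]], [1.0, 1.0], np.pi / 2)
```

The transform depends on both position and head direction, which is consistent with the paper's observation that place-like and head-direction-like latent variables dominate depending on the direction of the reconstruction.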
8
Abnormality Detection and Failure Prediction Using Explainable Bayesian Deep Learning: Methodology and Case Study with Industrial Data. Mathematics 2022. [DOI: 10.3390/math10040554] [Indexed: 11/16/2022]
Abstract
Mistrust, amplified by numerous artificial intelligence (AI) related incidents, is an issue that has caused the energy and industrial sectors to be amongst the slowest adopters of AI methods. Central to this issue is the black-box problem of AI, which impedes investments and is fast becoming a legal hazard for users. Explainable AI (XAI) is a recent paradigm to tackle this issue. Being the backbone of industry, the prognostics and health management (PHM) domain has recently been introduced to XAI. However, many deficiencies, particularly the lack of explanation assessment methods and uncertainty quantification, plague this young domain. In the present paper, we elaborate a framework for explainable anomaly detection and failure prognostics, employing a Bayesian deep learning model and Shapley additive explanations (SHAP) to generate local and global explanations for the PHM tasks. An uncertainty measure of the Bayesian model is utilized as a marker for anomalies and expands the prognostic explanation scope to include the model’s confidence. In addition, the global explanation is used to improve prognostic performance, an aspect neglected by the handful of studies on PHM-XAI. The quality of the explanation is examined using the local accuracy and consistency properties. The elaborated framework is tested on real-world gas turbine anomalies and synthetic turbofan failure prediction data. Seven out of eight of the tested anomalies were successfully identified. Additionally, the prognostic outcome showed a 19% improvement in statistical terms and achieved the highest prognostic score amongst the best published results on the topic.
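The paper's anomaly marker is the uncertainty of a Bayesian deep model; a minimal stand-in, using disagreement across an ensemble as the uncertainty measure (an assumption for illustration, not the authors' actual model), looks like this:

```python
import numpy as np

def anomaly_flags(ensemble_preds, k=3.0):
    """Flag samples whose predictive uncertainty (here, the standard
    deviation across ensemble members) sits far above the typical level."""
    uncertainty = ensemble_preds.std(axis=0)  # per-sample uncertainty
    cutoff = uncertainty.mean() + k * uncertainty.std()
    return uncertainty > cutoff

rng = np.random.default_rng(0)
# 20 "ensemble members" each predicting 100 samples; on sample 42 the
# members disagree strongly, mimicking an off-distribution input.
preds = rng.normal(0.0, 0.1, size=(20, 100))
preds[:, 42] += rng.normal(0.0, 2.0, size=20)
flags = anomaly_flags(preds)
```

Using disagreement rather than the prediction itself is what lets the same mechanism both flag anomalies and attach a confidence statement to each prognostic explanation.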
9
Overview of Explainable Artificial Intelligence for Prognostic and Health Management of Industrial Assets Based on Preferred Reporting Items for Systematic Reviews and Meta-Analyses. Sensors 2021; 21:s21238020. [PMID: 34884024] [PMCID: PMC8659640] [DOI: 10.3390/s21238020] [Received: 10/19/2021] [Revised: 11/19/2021] [Accepted: 11/23/2021] [Indexed: 12/25/2022]
Abstract
Surveys on explainable artificial intelligence (XAI) relate to biology, clinical trials, fintech management, medicine, neurorobotics, and psychology, among others. Prognostics and health management (PHM) is the discipline that links the study of failure mechanisms to system lifecycle management. A still-unmet need exists to produce an analytical compilation of PHM-XAI works. In this paper, we use preferred reporting items for systematic reviews and meta-analyses (PRISMA) to present the state of the art on XAI applied to PHM of industrial assets. This work provides an overview of the trend of XAI in PHM and answers the question of accuracy versus explainability, considering the extent of human involvement, explanation assessment, and uncertainty quantification in this topic. Research articles associated with the subject, from 2015 to 2021, were selected from five databases following the PRISMA methodology, several of them related to sensors. Data were extracted from the selected articles and examined, yielding diverse findings that are synthesized as follows. First, while the discipline is still young, the analysis indicates a growing acceptance of XAI in PHM. Second, XAI offers dual advantages: it is assimilated as a tool to execute PHM tasks and to explain diagnostic and anomaly detection activities, implying a real need for XAI in PHM. Third, the review shows that PHM-XAI papers produce interesting results, suggesting that PHM performance is unaffected by the XAI. Fourth, human role, evaluation metrics, and uncertainty management are areas requiring further attention by the PHM community; adequate assessment metrics that cater to PHM needs are requested.
Finally, most case studies featured in the considered articles are based on real industrial data, and some of them are related to sensors, showing that the available PHM-XAI blends solve real-world challenges, increasing the confidence in the artificial intelligence models' adoption in the industry.
10
Belle V, Papantonis I. Principles and Practice of Explainable Machine Learning. Front Big Data 2021; 4:688969. [PMID: 34278297] [PMCID: PMC8281957] [DOI: 10.3389/fdata.2021.688969] [Received: 04/07/2021] [Accepted: 05/26/2021] [Indexed: 12/05/2022]
Abstract
Artificial intelligence (AI) provides many opportunities to improve private and public life. Discovering patterns and structures in large troves of data in an automated manner is a core component of data science, and currently drives applications in diverse areas such as computational biology, law, and finance. However, such a highly positive impact is coupled with a significant challenge: how do we understand the decisions suggested by these systems so that we can trust them? In this report, we focus specifically on data-driven methods, machine learning (ML) and pattern recognition models in particular, so as to survey and distill the results and observations from the literature. The purpose of this report can be especially appreciated by noting that ML models are increasingly deployed in a wide range of businesses. However, with the increasing prevalence and complexity of methods, business stakeholders have, at the very least, a growing number of concerns about the drawbacks of models, data-specific biases, and so on. Analogously, data science practitioners are often unaware of approaches emerging from the academic literature, or may struggle to appreciate the differences between methods, so they end up using industry standards such as SHAP. Here, we have undertaken a survey to help industry practitioners (but also data scientists more broadly) understand the field of explainable machine learning better and apply the right tools. Our latter sections build a narrative around a putative data scientist, and discuss how she might go about explaining her models by asking the right questions. From an organization viewpoint, after motivating the area broadly, we discuss the main developments, including the principles that allow us to study transparent models vs. opaque models, as well as model-specific or model-agnostic post-hoc explainability approaches.
We also briefly reflect on deep learning models, and conclude with a discussion about future research directions.
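As one example of the model-agnostic post-hoc approaches this survey covers, permutation importance treats the model as a pure black box; the toy model and data below are hypothetical, chosen only to make the idea concrete:

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Model-agnostic importance: mean accuracy drop when one feature's
    column is shuffled, destroying its relationship with the target."""
    rng = np.random.default_rng(seed)
    base = np.mean(predict(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # permute feature j only
            drops.append(base - np.mean(predict(Xp) == y))
        importances[j] = np.mean(drops)
    return importances

# Toy black box: the label is the sign of feature 0; feature 1 is noise.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 2))
y = (X[:, 0] > 0).astype(int)
black_box = lambda data: (data[:, 0] > 0).astype(int)
imp = permutation_importance(black_box, X, y)
```

Because only the `predict` callable is needed, the same procedure applies equally to transparent and opaque models, which is exactly what makes it model-agnostic.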
Affiliation(s)
- Vaishak Belle
- School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
- Alan Turing Institute, London, United Kingdom
- Ioannis Papantonis
- School of Informatics, University of Edinburgh, Edinburgh, United Kingdom