1
Augenstein TE, Nagalla D, Mohacey A, Cubillos LH, Lee MH, Ranganathan R, Krishnan C. A novel virtual robotic platform for controlling six degrees of freedom assistive devices with body-machine interfaces. Comput Biol Med 2024; 178:108778. [PMID: 38925086] [DOI: 10.1016/j.compbiomed.2024.108778]
Abstract
Body-machine interfaces (BoMIs), systems that control assistive devices (e.g., a robotic manipulator) with a person's movements, offer a robust and non-invasive alternative to brain-machine interfaces for individuals with neurological injuries. However, commercially available assistive devices offer more degrees of freedom (DOFs) than can be efficiently controlled with a user's residual motor function. Therefore, BoMIs often rely on nonintuitive mappings between body and device movements. Learning these mappings requires considerable practice time in a lab/clinic, which can be challenging. Virtual environments can potentially address this challenge, but there are limited options for high-DOF assistive devices, and it is unclear whether learning with a virtual device is similar to learning with its physical counterpart. We developed a novel virtual robotic platform that replicated a commercially available 6-DOF robotic manipulator. Participants controlled the physical and virtual robots using four wireless inertial measurement units (IMUs) fixed to the upper torso. Forty-three neurologically unimpaired adults practiced a target-matching task using either the physical (n = 25) or virtual (n = 18) device, with pre-, mid-, and post-tests separated by four training blocks. We found that both groups made similar improvements from pre-test in movement time at mid-test (Δvirtual: 9.9 ± 9.5 s; Δphysical: 11.1 ± 9.9 s) and post-test (Δvirtual: 11.1 ± 9.1 s; Δphysical: 11.8 ± 10.5 s) and in path length at mid-test (Δvirtual: 6.1 ± 6.3 m/m; Δphysical: 3.3 ± 3.5 m/m) and post-test (Δvirtual: 6.6 ± 6.2 m/m; Δphysical: 3.5 ± 4.0 m/m). Our results indicate the feasibility of using virtual environments for learning to control assistive devices. Future work should determine how these findings generalize to clinical populations.
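The core of a BoMI like the one described above is a mapping from a handful of IMU-derived body signals to a higher-dimensional device command. The sketch below illustrates the idea with a fixed linear map; the matrix `W`, the signal dimensions, and the deadband are illustrative assumptions, not the study's actual mapping.

```python
import numpy as np

# Hypothetical linear body-to-device map. W, the signal dimensions, and the
# deadband threshold are illustrative only; the study's mapping may differ.
rng = np.random.default_rng(0)

n_body = 8   # e.g., pitch/roll from four torso-mounted IMUs
n_dof = 6    # a commercially available 6-DOF manipulator

W = rng.standard_normal((n_dof, n_body)) * 0.1  # fixed, possibly nonintuitive map

def bomi_command(body_signal, deadband=0.05):
    """Map IMU-derived body signals to a 6-DOF device velocity command."""
    v = W @ body_signal
    v[np.abs(v) < deadband] = 0.0   # suppress drift near rest posture
    return v

cmd = bomi_command(rng.standard_normal(n_body))
print(cmd.shape)  # (6,)
```

Because the map is many-to-many rather than joint-by-joint, users must practice to discover which body postures produce which device motions — the learning problem the study examines.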
Affiliation(s)
- Thomas E Augenstein
- Robotics Department, University of Michigan, Ann Arbor, MI, USA; NeuRRo Lab, Department of Physical Medicine and Rehabilitation, University of Michigan, Ann Arbor, MI, USA
- Deepak Nagalla
- Robotics Department, University of Michigan, Ann Arbor, MI, USA; NeuRRo Lab, Department of Physical Medicine and Rehabilitation, University of Michigan, Ann Arbor, MI, USA
- Alexander Mohacey
- Robotics Department, University of Michigan, Ann Arbor, MI, USA; NeuRRo Lab, Department of Physical Medicine and Rehabilitation, University of Michigan, Ann Arbor, MI, USA
- Luis H Cubillos
- Robotics Department, University of Michigan, Ann Arbor, MI, USA; NeuRRo Lab, Department of Physical Medicine and Rehabilitation, University of Michigan, Ann Arbor, MI, USA
- Mei-Hua Lee
- Department of Kinesiology, Michigan State University, Lansing, MI, USA
- Rajiv Ranganathan
- Department of Kinesiology, Michigan State University, Lansing, MI, USA; Department of Mechanical Engineering, Michigan State University, Lansing, MI, USA
- Chandramouli Krishnan
- Robotics Department, University of Michigan, Ann Arbor, MI, USA; NeuRRo Lab, Department of Physical Medicine and Rehabilitation, University of Michigan, Ann Arbor, MI, USA; Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA; Department of Kinesiology, University of Michigan, Ann Arbor, MI, USA; Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI, USA; Department of Physical Therapy, University of Michigan, Flint, MI, USA.
2
Wolf M, Rupp R, Schwarz A. Decoding of unimanual and bimanual reach-and-grasp actions from EMG and IMU signals in persons with cervical spinal cord injury. J Neural Eng 2024; 21:026042. [PMID: 38471169] [DOI: 10.1088/1741-2552/ad331f]
Abstract
Objective. Chronic motor impairments of the arms and hands as a consequence of cervical spinal cord injury (SCI) have a tremendous impact on activities of daily life. A considerable number of people, however, retain minimal voluntary motor control in the paralyzed parts of the upper limbs that is measurable by electromyography (EMG) and inertial measurement units (IMUs). Integration into human-machine interfaces (HMIs) holds promise for reliable grasp-intent detection and intuitive assistive device control. Approach. We used a multimodal HMI incorporating EMG and IMU data to decode reach-and-grasp movements in groups of persons with cervical SCI (n = 4) and without (control, n = 13). A post-hoc evaluation of control group data aimed to identify optimal parameters for online, co-adaptive closed-loop HMI sessions with persons with cervical SCI. We compared the performance of real-time, Random Forest-based movement-versus-rest (2 classes) and grasp-type (3 classes) predictors with respect to their co-adaptation and evaluated the underlying feature importance maps. Main results. Our multimodal approach enabled grasp decoding significantly better than EMG or IMU data alone (p < 0.05). We found the 0.25 s directly prior to the first touch of an object to hold the most discriminative information. Our HMIs correctly predicted 79.3 ± 7.4 (control group: 102.7 ± 2.3) out of 105 trials, with grand average movement-versus-rest prediction accuracies above 99.64% (100% sensitivity) and grasp prediction accuracies of 75.39 ± 13.77% (control group: 97.66 ± 5.48%). Co-adaptation led to higher prediction accuracies over time, and we could identify adaptations in feature importances unique to each participant with cervical SCI. Significance. Our findings foster the development of multimodal and adaptive HMIs that allow persons with cervical SCI intuitive control of assistive devices to improve personal independence.
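The Random Forest classification step described above can be sketched with scikit-learn on synthetic windowed features. The feature counts, class separation, and hyperparameters below are stand-ins; the paper's actual feature extraction and window lengths are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for windowed EMG + IMU features. The feature set,
# window length, and class statistics are illustrative assumptions.
rng = np.random.default_rng(1)
n_windows, n_emg_feat, n_imu_feat = 400, 16, 12

X_rest = rng.normal(0.0, 1.0, (n_windows, n_emg_feat + n_imu_feat))
X_move = rng.normal(1.5, 1.0, (n_windows, n_emg_feat + n_imu_feat))
X = np.vstack([X_rest, X_move])
y = np.array([0] * n_windows + [1] * n_windows)  # 0 = rest, 1 = movement

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# feature_importances_ indicates which EMG/IMU features drive the decision,
# analogous to the feature-importance maps the study analyzes per participant.
score = clf.score(X_te, y_te)
```

Combining EMG and IMU columns in one feature matrix, as done here, is the simplest form of the multimodal fusion the paper reports outperforming either modality alone.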
Affiliation(s)
- Marvin Wolf
- Spinal Cord Injury Center, Heidelberg University Hospital, Schlierbacher Landstraße 200a, Heidelberg 69118, Baden-Württemberg, Germany
- Rüdiger Rupp
- Spinal Cord Injury Center, Heidelberg University Hospital, Schlierbacher Landstraße 200a, Heidelberg 69118, Baden-Württemberg, Germany
- Andreas Schwarz
- Spinal Cord Injury Center, Heidelberg University Hospital, Schlierbacher Landstraße 200a, Heidelberg 69118, Baden-Württemberg, Germany
3
Rajeswaran P, Payeur A, Lajoie G, Orsborn AL. Assistive sensory-motor perturbations influence learned neural representations. bioRxiv 2024:2024.03.20.585972. [PMID: 38562772] [PMCID: PMC10983972] [DOI: 10.1101/2024.03.20.585972]
Abstract
Task errors are used to learn and refine motor skills. We investigated how task assistance influences learned neural representations using Brain-Computer Interfaces (BCIs), which map neural activity into movement via a decoder. We analyzed motor cortex activity as monkeys practiced BCI with a decoder that adapted to improve or maintain performance over days. Population dimensionality remained constant or increased with learning, counter to trends with non-adaptive BCIs. Yet, over time, task information was contained in a smaller subset of neurons or population modes. Moreover, task information was ultimately stored in neural modes that occupied a small fraction of the population variance. An artificial neural network model suggests the adaptive decoders contribute to forming these compact neural representations. Our findings show that assistive decoders manipulate error information used for long-term learning computations, like credit assignment, which informs our understanding of motor learning and has implications for designing real-world BCIs.
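The population-dimensionality trends reported above can be quantified in several ways; one common metric (not necessarily the one used in this paper) is the participation ratio of the covariance eigenvalues. A minimal sketch on synthetic data, with all dimensions and noise levels illustrative:

```python
import numpy as np

def participation_ratio(X):
    """One common dimensionality measure for neural population activity:
    PR = (sum lambda_i)^2 / sum lambda_i^2, from covariance eigenvalues.
    PR is near 1 when one mode dominates and near the neuron count when
    variance is spread evenly."""
    X = X - X.mean(axis=0)
    lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    lam = np.clip(lam, 0, None)   # guard against tiny negative eigenvalues
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(2)
# Low-dimensional latent activity embedded in 50 "neurons" plus noise
latents = rng.standard_normal((1000, 3))
embed = rng.standard_normal((3, 50))
X = latents @ embed + 0.1 * rng.standard_normal((1000, 50))
pr = participation_ratio(X)
```

Tracking such a metric across training days is one way to operationalize the finding that task information concentrated into fewer population modes over time.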
Affiliation(s)
- Alexandre Payeur
- Université de Montréal, Department of Mathematics and Statistics, Montréal (QC), Canada, H3C 3J7
- Mila - Québec Artificial Intelligence Institute, Montréal (QC), Canada, H2S 3H1
- Guillaume Lajoie
- Université de Montréal, Department of Mathematics and Statistics, Montréal (QC), Canada, H3C 3J7
- Mila - Québec Artificial Intelligence Institute, Montréal (QC), Canada, H2S 3H1
- Amy L. Orsborn
- University of Washington, Bioengineering, Seattle, 98115, USA
- University of Washington, Electrical and Computer Engineering, Seattle, 98115, USA
- Washington National Primate Research Center, Seattle, Washington, 98115, USA
4
Cashaback JGA, Allen JL, Chou AHY, Lin DJ, Price MA, Secerovic NK, Song S, Zhang H, Miller HL. NSF DARE-transforming modeling in neurorehabilitation: a patient-in-the-loop framework. J Neuroeng Rehabil 2024; 21:23. [PMID: 38347597] [PMCID: PMC10863253] [DOI: 10.1186/s12984-024-01318-9]
Abstract
In 2023, the National Science Foundation (NSF) and the National Institutes of Health (NIH) brought together engineers, scientists, and clinicians by sponsoring a conference on computational modelling in neurorehabilitation. To facilitate multidisciplinary collaborations and improve patient care, in this perspective piece we identify where and how computational modelling can support neurorehabilitation. To address the where, we developed a patient-in-the-loop framework that uses multiple and/or continual measurements to update diagnostic and treatment model parameters, treatment type, and treatment prescription, with the goal of maximizing clinically relevant functional outcomes. This patient-in-the-loop framework has several key features: (i) it includes diagnostic and treatment models, (ii) it is clinically grounded in the International Classification of Functioning, Disability and Health (ICF) and patient involvement, (iii) it uses multiple or continual data measurements over time, and (iv) it is applicable to a range of neurological and neurodevelopmental conditions. To address the how, we identify the state of the art and highlight promising avenues of future research across the realms of sensorimotor adaptation, neuroplasticity, musculoskeletal, and sensory and pain computational modelling. We also discuss the importance of model validation and how to perform it, as well as challenges to overcome when implementing computational models within a clinical setting. The patient-in-the-loop approach offers a unifying framework to guide multidisciplinary collaboration between computational and clinical stakeholders in the field of neurorehabilitation.
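The patient-in-the-loop cycle described above (measure, update model, update prescription) can be sketched schematically. Every class, function, and rule below is an illustrative placeholder, not an artifact from the paper.

```python
from dataclasses import dataclass, field

# Schematic sketch of one patient-in-the-loop iteration; all names and the
# toy diagnostic/prescription rules are hypothetical.
@dataclass
class PatientState:
    measurements: list = field(default_factory=list)
    model_params: dict = field(default_factory=dict)
    treatment: str = "baseline"

def update_cycle(state, new_measurement, diagnose, prescribe):
    """One loop iteration: ingest a new measurement, refit the diagnostic
    model parameters, then update the treatment prescription."""
    state.measurements.append(new_measurement)
    state.model_params = diagnose(state.measurements)
    state.treatment = prescribe(state.model_params)
    return state

# Toy rules standing in for real diagnostic and treatment models
diagnose = lambda ms: {"severity": sum(ms) / len(ms)}
prescribe = lambda p: "intensive" if p["severity"] > 0.5 else "maintenance"

state = update_cycle(PatientState(), 0.8, diagnose, prescribe)
print(state.treatment)  # intensive
```

The point of the structure is feature (iii) of the framework: each new measurement re-enters the loop, so diagnosis and prescription are continually revised rather than fixed at intake.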
Affiliation(s)
- Joshua G A Cashaback
- Biomedical Engineering, Mechanical Engineering, Kinesiology and Applied Physiology, Biomechanics and Movement Science Program, Interdisciplinary Neuroscience Graduate Program, University of Delaware, 540 S College Ave, Newark, DE, 19711, USA.
- Jessica L Allen
- Department of Mechanical Engineering, University of Florida, Gainesville, USA
- David J Lin
- Division of Neurocritical Care and Stroke Service, Department of Neurology, Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Harvard Medical School, Boston, USA
- Department of Veterans Affairs, Center for Neurorestoration and Neurotechnology, Rehabilitation Research and Development Service, Providence, USA
- Mark A Price
- Department of Mechanical and Industrial Engineering, Department of Kinesiology, University of Massachusetts Amherst, Amherst, USA
- Natalija K Secerovic
- School of Electrical Engineering, The Mihajlo Pupin Institute, University of Belgrade, Belgrade, Serbia
- Laboratory for Neuroengineering, Institute for Robotics and Intelligent Systems, ETH Zürich, Zurich, Switzerland
- Seungmoon Song
- Mechanical and Industrial Engineering, Northeastern University, Boston, USA
- Haohan Zhang
- Department of Mechanical Engineering, University of Utah, Salt Lake City, USA
- Haylie L Miller
- School of Kinesiology, University of Michigan, 830 N University Ave, Ann Arbor, MI, 48109, USA.
5
Lee JM, Gebrekristos T, De Santis D, Nejati-Javaremi M, Gopinath D, Parikh B, Mussa-Ivaldi FA, Argall BD. An Exploratory Multi-Session Study of Learning High-Dimensional Body-Machine Interfacing for Assistive Robot Control. IEEE Int Conf Rehabil Robot 2023; 2023:1-6. [PMID: 37941183] [PMCID: PMC11059238] [DOI: 10.1109/icorr58425.2023.10304745]
Abstract
Individuals who suffer from severe paralysis often lose the capacity to perform fundamental body movements and everyday activities. Empowering these individuals to operate robotic arms in high degrees of freedom (DoFs) can help maximize both functional utility and independence. However, robot teleoperation in high DoFs currently lacks accessibility due to the challenge of capturing high-dimensional control signals from the human, especially in the face of motor impairments. Body-machine interfacing is a viable option that offers the necessary high-dimensional motion capture; it is moreover noninvasive, affordable, and promotes movement and motor recovery. Nevertheless, to what extent body-machine interfacing can scale to high-DoF robot control, and whether it is feasible for humans to learn, remains an open question. In this exploratory multi-session study, we demonstrate the feasibility of humans learning to operate a body-machine interface to control a complex, assistive robotic arm. We use a sensor net of four inertial measurement unit sensors, bilaterally placed on the scapulae and humeri. Ten uninjured participants are familiarized, trained, and evaluated in reaching and Activities of Daily Living tasks using the body-machine interface. Our results suggest that the manner of control space mapping (joint-space control versus task-space control), from interface to robot, plays a critical role in the evolution of human learning. Though joint-space control appears more intuitive initially, task-space control shows a greater capacity for longer-term improvement and learning.
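The joint-space versus task-space distinction above can be made concrete: in joint-space control the interface signal drives joint velocities directly, while in task-space control it commands an end-effector velocity that is converted to joint velocities through the robot's Jacobian. The planar 3-link arm below is a stand-in, not the study's robot model.

```python
import numpy as np

# Illustrative contrast between the two control-space mappings; the planar
# 3-link arm, link lengths, and gains are hypothetical.
L = np.array([0.3, 0.25, 0.15])   # link lengths (m)

def jacobian(q):
    """2x3 Jacobian of the planar end-effector position w.r.t. joint angles."""
    c = np.cumsum(q)               # absolute link angles
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(c[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(c[i:]))
    return J

def joint_space_step(q, body_cmd, dt=0.01):
    """Joint-space control: body signals drive joint velocities directly."""
    return q + dt * body_cmd       # body_cmd has one entry per joint

def task_space_step(q, ee_cmd, dt=0.01):
    """Task-space control: body signals command an end-effector velocity,
    converted to joint velocities via the Jacobian pseudoinverse."""
    dq = np.linalg.pinv(jacobian(q)) @ ee_cmd
    return q + dt * dq

q = np.array([0.4, 0.3, 0.2])
q_js = joint_space_step(q, np.array([1.0, 0.0, 0.0]))
q_ts = task_space_step(q, np.array([0.05, 0.0]))
```

The trade-off the study observes follows naturally: joint-space commands are transparent per joint, while task-space commands let the user think in hand coordinates once the mapping is learned.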
Affiliation(s)
- Jongmin M. Lee
- Northwestern University, Evanston, Illinois, USA
- Shirley Ryan AbilityLab, Chicago, Illinois, USA
- Temesgen Gebrekristos
- Northwestern University, Evanston, Illinois, USA
- Shirley Ryan AbilityLab, Chicago, Illinois, USA
- Mahdieh Nejati-Javaremi
- Northwestern University, Evanston, Illinois, USA
- Shirley Ryan AbilityLab, Chicago, Illinois, USA
- Deepak Gopinath
- Northwestern University, Evanston, Illinois, USA
- Shirley Ryan AbilityLab, Chicago, Illinois, USA
- Biraj Parikh
- Northwestern University, Evanston, Illinois, USA
- Brenna D. Argall
- Northwestern University, Evanston, Illinois, USA
- Shirley Ryan AbilityLab, Chicago, Illinois, USA
6
Wu S, Qian C, Shen X, Zhang X, Huang Y, Chen S, Wang Y. Spike Prediction on Primary Motor Cortex from Medial Prefrontal Cortex during Task Learning. J Neural Eng 2022; 19. [PMID: 35839739] [DOI: 10.1088/1741-2552/ac8180]
Abstract
OBJECTIVES: Brain-machine interfaces (BMIs) aim to help people with motor disabilities by interpreting brain signals into motor intentions using advanced signal processing methods. Currently, BMI users require intensive training to perform a pre-defined task, let alone learn a new one. Thus, it is essential to understand the neural information pathways among cortical areas during task learning to provide principles for designing BMIs with learning abilities. We propose to investigate the relationship between the medial prefrontal cortex (mPFC) and primary motor cortex (M1), which are actively involved in motor control and task learning, and to show with computational models how information is conveyed in spikes between the two regions on a single-trial basis. APPROACH: We model the functional relationship between mPFC and M1 activities during task learning. Six Sprague Dawley rats were trained to learn a new behavioral task. Neural spike data were recorded from mPFC and M1 during learning. We then implemented the generalized linear model, the second-order generalized Laguerre-Volterra model, and the staged point-process model to predict M1 spikes from mPFC spikes across multiple days of task learning. Prediction performance was compared across models and learning stages to reveal the relationship between mPFC and M1 spike activities. MAIN RESULTS: We find that M1 neural spikes can be well predicted from mPFC spikes at the single-trial level, which indicates a highly correlated relationship between mPFC and M1 activities during task learning. Comparing performance across models, we find that models with higher nonlinear capacity perform significantly better than linear models, indicating that predicting M1 activity from mPFC activity requires the model to consider higher-order nonlinear interactions beyond pairwise interactions. We also find that the correlation coefficient between mPFC and M1 spikes increases during task learning. The spike prediction models perform best once the subjects have become well trained on the new task, compared with the early and middle stages. The results suggest that the co-activation between mPFC and M1 activities evolves during task learning and becomes stronger as subjects become well trained. SIGNIFICANCE: This study demonstrates that the dynamic patterns of M1 spikes can be predicted from mPFC spikes during task learning, which will further help the design of adaptive BMI decoders for task learning.
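The generalized-linear-model approach above can be sketched in miniature: binned mPFC spike history predicts whether an M1 unit fires in the current bin. Everything below (bin counts, lag depth, the logistic link via scikit-learn, and the synthetic coupling weights) is an illustrative assumption, not the paper's fitted models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal GLM-style spike predictor on synthetic data: lagged mPFC spike
# counts predict an M1 unit's firing. All parameters are hypothetical.
rng = np.random.default_rng(3)
n_bins, n_mpfc, n_lags = 5000, 10, 5

mpfc = rng.binomial(1, 0.2, (n_bins, n_mpfc))   # mPFC spike trains (binned)
w = rng.normal(0, 1.0, (n_lags, n_mpfc))        # ground-truth coupling weights

# Lagged design matrix; drop the first n_lags bins to avoid roll wrap-around
X = np.stack([np.roll(mpfc, lag, axis=0) for lag in range(1, n_lags + 1)], axis=1)
X = X.reshape(n_bins, -1)[n_lags:]

# Simulate an M1 unit whose firing probability follows the mPFC history
drive = X @ w.reshape(-1)
m1 = rng.binomial(1, 1 / (1 + np.exp(-(drive - drive.mean()))))

# Fit the linear (logistic-link) predictor and evaluate on held-out bins
glm = LogisticRegression(max_iter=1000).fit(X[:4000], m1[:4000])
score = glm.score(X[4000:], m1[4000:])
```

Replacing the logistic model with one carrying higher-order terms (as in the Laguerre-Volterra model named above) is how the paper probes whether nonlinear interactions improve prediction.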
Affiliation(s)
- Shenghui Wu
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong
- Cunle Qian
- College of Computer Science, Zhejiang University, Hangzhou, Zhejiang, 310027, China
- Xiang Shen
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong
- Xiang Zhang
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong
- Yifan Huang
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong
- Shuhang Chen
- Department of Chemical and Biological Engineering and Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong
- Yiwen Wang
- Department of Chemical and Biological Engineering and Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong