1
Xu H, Fang Y, Chou CA, Fard N, Luo L. A reinforcement learning-based optimal control approach for managing an elective surgery backlog after pandemic disruption. Health Care Manag Sci 2023; 26:430-446. [PMID: 37084163 PMCID: PMC10119544 DOI: 10.1007/s10729-023-09636-5]
Abstract
Contagious disease pandemics, such as COVID-19, can cause hospitals around the world to delay nonemergent elective surgeries, which results in a large surgery backlog. To develop an operational solution for providing patients timely surgical care with limited health care resources, this study proposes a stochastic control process-based method that helps hospitals make operational recovery plans to clear their surgery backlog and restore surgical activity safely. The elective surgery backlog recovery process is modeled by a general discrete-time queueing network system, which is formulated as a Markov decision process. A scheduling optimization algorithm based on the piecewise-decaying ε-greedy reinforcement learning algorithm is proposed to make dynamic daily surgery scheduling plans considering newly arrived patients, waiting time, and clinical urgency. The proposed method is tested on a set of simulated datasets and applied to an elective surgery backlog that built up in one large general hospital in China after the outbreak of COVID-19. The results show that, compared with the current policy, the proposed method can effectively and rapidly clear the surgery backlog caused by a pandemic while ensuring that all patients receive timely surgical care. These results encourage the wider adoption of the proposed method to manage surgery scheduling during all phases of a public health crisis.
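The piecewise-decaying ε-greedy rule named above lowers the exploration rate in stages, so early training episodes explore broadly and later ones exploit the learned action values. A minimal sketch (the schedule breakpoints and floor below are illustrative assumptions, not the paper's values):

```python
import random

def piecewise_epsilon(episode, schedule=((0, 1.0), (200, 0.5), (500, 0.1)), floor=0.01):
    """Piecewise-decaying exploration rate: epsilon drops in stages with training progress."""
    eps = floor
    for start, value in schedule:
        if episode >= start:
            eps = value
    return max(eps, floor)

def epsilon_greedy(q_values, episode, rng=random):
    """Pick a random action with probability epsilon, otherwise the greedy action."""
    if rng.random() < piecewise_epsilon(episode):
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

At each simulated day, such a scheduler would score candidate surgery assignments (the Q-values) and occasionally try a non-greedy assignment to keep exploring.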
Affiliation(s)
- Huyang Xu, College of Management Science, Chengdu University of Technology, Chengdu, Sichuan, China
- Yuanchen Fang, Department of Industrial Engineering and Management, Business School, Sichuan University, Chengdu, Sichuan, China
- Chun-An Chou, Department of Mechanical & Industrial Engineering, Northeastern University, Boston, MA, USA
- Nasser Fard, Department of Mechanical & Industrial Engineering, Northeastern University, Boston, MA, USA
- Li Luo, Department of Industrial Engineering and Management, Business School, Sichuan University, Chengdu, Sichuan, China
2
Force Tracking Control of Functional Electrical Stimulation via Hybrid Active Disturbance Rejection Control. ELECTRONICS 2022. [DOI: 10.3390/electronics11111727]
Abstract
Stroke is a worldwide disease with a high incidence rate. After surviving a stroke, most patients are left with an impaired upper or lower limb. Muscle force training is vital for stroke patients to recover limb function and improve their quality of life. This paper proposes a force tracking control method for the upper limb based on functional electrical stimulation (FES), which is a promising rehabilitation approach. A modified Hammerstein model is proposed to describe the nonlinear dynamics of the biceps brachii; it consists of a nonlinear mapping function, linear dynamics, and a time delay component representing the biochemical process of muscle contraction. A quick model identification method is presented based on the least squares algorithm. To deal with the variation of muscle dynamics, a hybrid active disturbance rejection control (ADRC) is proposed to estimate and compensate for the model uncertainty and unmeasured disturbances. The parameter tuning process is given. Finally, the performance of the proposed methods is verified via simulations and experiments. Compared with the proportional-integral-derivative (PID) method, the proposed methods can suppress the model uncertainty and improve the tracking precision.
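The core of ADRC is an extended state observer (ESO) that lumps model uncertainty and unmeasured disturbances into one "total disturbance" state and cancels it in the control law. A minimal first-order linear sketch; the gains `beta1`, `beta2`, `kp`, and `b0` are illustrative assumptions, not values identified from muscle data:

```python
def eso_step(z, y, u, b0, beta1, beta2, dt):
    """One Euler step of a linear extended state observer.
    z = (state estimate, total-disturbance estimate); y is the measured output."""
    z1, z2 = z
    e = y - z1
    z1_new = z1 + dt * (z2 + b0 * u + beta1 * e)
    z2_new = z2 + dt * (beta2 * e)
    return (z1_new, z2_new)

def adrc_control(setpoint, z, kp, b0):
    """ADRC law: proportional feedback on the estimate, minus disturbance compensation."""
    z1, z2 = z
    return (kp * (setpoint - z1) - z2) / b0
```

Driving a toy first-order plant with a constant unknown disturbance, the loop converges to the setpoint while the observer's second state converges to the disturbance, which is the mechanism the hybrid ADRC relies on.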
3
An Improved Proximal Policy Optimization Method for Low-Level Control of a Quadrotor. ACTUATORS 2022. [DOI: 10.3390/act11040105]
Abstract
In this paper, a novel deep reinforcement learning algorithm based on Proximal Policy Optimization (PPO) is proposed to achieve fixed-point flight control of a quadrotor. The attitude and position information of the quadrotor is directly mapped to the PWM signals of the four rotors through neural network control. To constrain the size of policy updates, a PPO variant based on Monte Carlo approximations is proposed to obtain the optimal penalty coefficient. A policy optimization method with a penalized point probability distance preserves policy diversity at each policy update. The new proxy objective function is introduced into the actor-critic network, which prevents PPO from falling into local optima. Moreover, a compound reward function is presented to accelerate the gradient algorithm along the policy update direction by analyzing the various states that the quadrotor may encounter in flight, which improves the learning efficiency of the network. The simulation tests the generalization ability of the offline policy by changing the wing length and payload of the quadrotor. Compared with the standard PPO method, the proposed method has higher learning efficiency and better robustness.
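PPO constrains each policy update with a penalized surrogate objective. The paper replaces the usual KL penalty with a point-probability-distance penalty; the standard KL-penalized form with its adaptive penalty coefficient conveys the same mechanism, so the sketch below uses that simplification:

```python
def ppo_penalty_objective(ratios, advantages, kl, beta):
    """KL-penalized PPO surrogate: mean(ratio * advantage) - beta * KL(old || new)."""
    n = len(ratios)
    surrogate = sum(r * a for r, a in zip(ratios, advantages)) / n
    return surrogate - beta * kl

def adapt_beta(beta, kl, kl_target):
    """Classic adaptive-penalty rule: grow beta when KL overshoots the target,
    shrink it when KL undershoots, otherwise leave it alone."""
    if kl > 1.5 * kl_target:
        return beta * 2.0
    if kl < kl_target / 1.5:
        return beta / 2.0
    return beta
```

A learned or adaptive `beta` is exactly what "achieving the optimal penalty coefficient" refers to: too small a penalty lets the policy move too far per update, too large a penalty stalls learning.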
4
Hu C, Cao W, Ning B. Visual servoing with deep reinforcement learning for rotor unmanned helicopter. INT J ADV ROBOT SYST 2022. [DOI: 10.1177/17298806221084825]
Abstract
Visual servoing is a key approach to achieving visual control of a rotor unmanned helicopter. Inaccurate interaction matrix estimation and target loss limit the performance of visual servoing control systems. This work proposes a novel visual servoing controller that uses a deep Q-network to achieve efficient interaction matrix estimation. A deep Q-network agent learns a policy for estimating the interaction matrix for visual servoing of a rotor unmanned helicopter from continuous observations, where each observation is a combination of feature errors. The current matrix and the desired matrix constitute the action space. A well-designed reward guides the deep Q-network agent toward a policy that generates a time-varying linear combination of the current matrix and the desired matrix; the interaction matrix is then calculated from this linear combination. The potential mapping between the observation and the interaction matrix is learned by cascading deep neural network layers. Experimental results show that the proposed method achieves faster convergence and a lower probability of target loss during tracking than fixed-parameter visual servoing methods.
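The agent's action effectively selects how to blend the current and desired interaction matrices into a time-varying linear combination. A sketch of that combination step; the discrete action grid mapping is an assumption added for illustration, since a DQN outputs one of a finite set of actions:

```python
def blend_interaction_matrix(L_current, L_desired, lam):
    """Time-varying combination L = lam * L_current + (1 - lam) * L_desired,
    with the blend weight lam in [0, 1] chosen by the learning agent."""
    assert 0.0 <= lam <= 1.0
    return [[lam * c + (1.0 - lam) * d for c, d in zip(rc, rd)]
            for rc, rd in zip(L_current, L_desired)]

def lam_from_action(action, n_actions=11):
    """Map a discrete DQN action index onto an evenly spaced blend-weight grid."""
    return action / (n_actions - 1)
```

At `lam = 1` the servo uses the current-pose matrix (accurate near the current configuration); at `lam = 0` it uses the desired-pose matrix (stable near the goal); the learned policy trades the two off over time.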
Affiliation(s)
- Chunyang Hu, School of Computer Engineering, Hubei University of Arts and Science, Hubei, China
- Wenping Cao, School of Computer Engineering, Hubei University of Arts and Science, Hubei, China
- Bin Ning, School of Computer Engineering, Hubei University of Arts and Science, Hubei, China
5
Crowder DC, Abreu J, Kirsch RF. Improving the Learning Rate, Accuracy, and Workspace of Reinforcement Learning Controllers for a Musculoskeletal Model of the Human Arm. IEEE Trans Neural Syst Rehabil Eng 2022; 30:30-39. [PMID: 34898436 PMCID: PMC8847021 DOI: 10.1109/tnsre.2021.3135471]
Abstract
Cervical spinal cord injuries frequently cause paralysis of all four limbs, a medical condition known as tetraplegia. Functional electrical stimulation (FES), when combined with an appropriate controller, can be used to restore motor function by electrically stimulating the neuromuscular system. Previous work has demonstrated that reinforcement learning can be used to successfully train FES controllers. Here, we demonstrate that transfer learning and curriculum learning can improve the learning rates, accuracies, and workspaces of FES controllers trained using reinforcement learning.
6
Buck C, Ifland S, Stähle P, Thorwarth H. Raiders of the Lost Ark — A Review About the Roots and Application of Artificial Intelligence. INTERNATIONAL JOURNAL OF INNOVATION AND TECHNOLOGY MANAGEMENT 2021. [DOI: 10.1142/s0219877021500450]
Abstract
Artificial Intelligence (AI) receives prominent attention within the innovation context and is among the most promising technologies in information technology. Nevertheless, Innovation and Technology Management (ITM) has so far been unable to structure the AI field, which offers disruptive innovative potential. This paper therefore reviews and analyzes the ITM literature and explains the underlying structure of AI. The findings present two main streams of AI literature and explain how to categorize AI use cases. With our results, we assist ITM in explaining and adapting AI to business, which is a major challenge for companies.
Affiliation(s)
- Christoph Buck, Centre for Future Enterprise, QUT Business School, Queensland University of Technology, QUT Gardens Point Campus, 2 George St, Brisbane City, QLD 4000, Australia; Project Group Business & Information Systems Engineering of the Fraunhofer FIT, University of Bayreuth, Universitaetsstraße 30, 95447 Bayreuth, Germany
- Sebastian Ifland, FIM Research Center, University of Bayreuth, Universitaetsstraße 30, 95447 Bayreuth, Germany; Chair of Combustion Technology, University of Applied Forest Sciences Rottenburg, Schadenweilerhof, 72108 Rottenburg a.N., Germany
- Philipp Stähle, EnBW Energy Baden-Württemberg, Schelmenwasenstraße 15, 70567 Stuttgart, Germany
- Harald Thorwarth, Chair of Combustion Technology, University of Applied Forest Sciences Rottenburg, Schadenweilerhof, 72108 Rottenburg a.N., Germany
7
Crowder DC, Abreu J, Kirsch RF. Hindsight Experience Replay Improves Reinforcement Learning for Control of a MIMO Musculoskeletal Model of the Human Arm. IEEE Trans Neural Syst Rehabil Eng 2021; 29:1016-1025. [PMID: 33999822 PMCID: PMC8630802 DOI: 10.1109/tnsre.2021.3081056]
Abstract
High-level spinal cord injuries often result in paralysis of all four limbs, leading to decreased patient independence and quality of life. Coordinated functional electrical stimulation (FES) of paralyzed muscles can be used to restore some motor function in the upper extremity. To coordinate functional movements, FES controllers should be developed to exploit the complex characteristics of human movement and produce the intended movement kinematics and/or kinetics. Here, we demonstrate the ability of a controller trained using reinforcement learning to generate desired movements of a horizontal planar musculoskeletal model of the human arm with 2 degrees of freedom and 6 actuators. The controller is given information about the kinematics of the arm, but not the internal state of the actuators. In particular, we demonstrate that a technique called "hindsight experience replay" can improve controller performance while also decreasing controller training time.
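Hindsight experience replay turns failed episodes into useful training data by relabeling each stored transition with a goal the agent actually achieved. A minimal sketch using the "final" relabeling strategy; the sparse reward function in the usage below is an illustrative assumption, not the paper's reward:

```python
def her_relabel(episode, reward_fn):
    """Hindsight experience replay ("final" strategy): re-store every transition
    as if the state achieved at the end of the episode had been the intended goal.
    episode: list of (state, action, next_state, goal) tuples.
    Returns (state, action, next_state, substituted_goal, recomputed_reward) tuples."""
    achieved = episode[-1][2]  # final achieved state becomes the substitute goal
    return [(state, action, next_state, achieved, reward_fn(next_state, achieved))
            for state, action, next_state, goal in episode]
```

With a sparse reach-the-target reward, an arm trajectory that missed its original target still yields at least one rewarded transition after relabeling, which is why HER speeds up training of goal-conditioned FES controllers.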
8
Valizadeh A, Akbari AA. The Optimal Adaptive-Based Neurofuzzy Control of the 3-DOF Musculoskeletal System of Human Arm in a 2D Plane. Appl Bionics Biomech 2021; 2021:5514693. [PMID: 33880132 PMCID: PMC8046574 DOI: 10.1155/2021/5514693]
Abstract
Individuals perform daily activities such as reaching and lifting with their hands, which highlights the important role of robots designed to estimate object positions or muscle forces. Understanding the learning control mechanism of the body's musculoskeletal system can lead to robust control techniques applicable to rehabilitation robotics. The musculoskeletal model of the human arm used in this study is a 3-link robot coupled with 6 muscles, for which a TSK-type neurofuzzy controller with multiple critic agents is used for training and learning fuzzy rules. The adaptive critic agents, based on reinforcement learning, oversee the controller's parameters and avoid overtraining. The simulation results show that, both with and without optimization, the controller tracks the desired trajectory smoothly and with acceptable accuracy. The force magnitudes in the optimized model are significantly lower, implying correct operation of the controller. Also, the links follow the same trajectory with a lower overall displacement than in the nonoptimized mode, which is consistent with the hand's natural motion of seeking the most economical trajectory.
Affiliation(s)
- Amin Valizadeh, Department of Mechanical Engineering, Ferdowsi University of Mashhad, Iran
- Ali Akbar Akbari, Department of Mechanical Engineering, Ferdowsi University of Mashhad, Iran
9
Wu W, Saul KR, Huang HH. Using Reinforcement Learning to Estimate Human Joint Moments From Electromyography or Joint Kinematics: An Alternative Solution to Musculoskeletal-Based Biomechanics. J Biomech Eng 2021; 143:044502. [PMID: 33332536 DOI: 10.1115/1.4049333]
Abstract
Reinforcement learning (RL) has the potential to provide innovative solutions to existing challenges in estimating joint moments in motion analysis, such as kinematic or electromyography (EMG) noise and unknown model parameters. Here, we explore the feasibility of RL to assist joint moment estimation for biomechanical applications. Forearm and hand kinematics and forearm EMGs from four muscles during free finger and wrist movement were collected from six healthy subjects. Using the proximal policy optimization approach, we trained two types of RL agents that estimated joint moments based on measured kinematics or measured EMGs, respectively. To quantify the performance of trained RL agents, the estimated joint moments were used to drive a forward dynamic model for estimating kinematics, which were then compared with measured kinematics using the Pearson correlation coefficient. The results demonstrated that both trained RL agents can feasibly estimate joint moments for wrist and metacarpophalangeal (MCP) joint motion prediction. The correlation coefficients between predicted and measured kinematics, derived from the kinematics-driven agent and subject-specific EMG-driven agents, were 98% ± 1% and 94% ± 3% for the wrist, respectively, and 95% ± 2% and 84% ± 6% for the MCP joint, respectively. In addition, a biomechanically reasonable joint moment-angle-EMG relationship (i.e., dependence of joint moment on joint angle and EMG) was predicted using only 15 s of collected data. In conclusion, this study illustrates that an RL approach can be an alternative technique to conventional inverse dynamic analysis in human biomechanics studies and EMG-driven human-machine interfacing applications.
Affiliation(s)
- Wen Wu, Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill/North Carolina State University, Raleigh, NC 27695
- Katherine R Saul, Department of Mechanical and Aerospace Engineering, North Carolina State University, Raleigh, NC 27695
- He Helen Huang, Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill/North Carolina State University, Raleigh, NC 27695
10
Coronato A, Naeem M, De Pietro G, Paragliola G. Reinforcement learning for intelligent healthcare applications: A survey. Artif Intell Med 2020; 109:101964. [PMID: 34756216 DOI: 10.1016/j.artmed.2020.101964]
Abstract
Discovering new treatments and personalizing existing ones is one of the major goals of modern clinical research. In the last decade, Artificial Intelligence (AI) has enabled the realization of advanced intelligent systems able to learn about clinical treatments and discover new medical knowledge from the huge amounts of data collected. Reinforcement Learning (RL), a branch of Machine Learning (ML), has received significant attention in the medical community because it has the potential to support the development of personalized treatments in accordance with the more general precision medicine vision. This report presents a review of the role of RL in healthcare by investigating past work and highlighting limitations and possible future contributions.
11
Wolf DN, Schearer EM. Developing a Quasi-Static Controller for a Paralyzed Human Arm: A Simulation Study. IEEE Int Conf Rehabil Robot 2020; 2019:1153-1158. [PMID: 31374785 DOI: 10.1109/icorr.2019.8779381]
Abstract
Individuals with paralyzed limbs due to spinal cord injuries lack the ability to perform the reaching motions necessary for everyday life. Functional electrical stimulation (FES) is a promising technology for restoring reaching movements to these individuals by reanimating their paralyzed muscles. We have proposed using a quasi-static model-based control strategy to achieve reaching controlled by FES. This method uses a series of static positions to connect the starting wrist position to the goal. As a first step toward implementing this controller, we completed a simulation study using a MATLAB-based dynamic model of the arm to determine suitable parameters for the quasi-static controller. The selected distance between static positions in the path was 6 cm, and the time between switching target positions was 1.3 s. The final controller can complete reaches of over 30 cm with a median accuracy of 6.8 cm.
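The quasi-static strategy reduces a reach to a sequence of static targets. With the reported 6 cm spacing, intermediate wrist positions can be generated as below; straight-line interpolation between start and goal is an illustrative simplification of the actual path planner:

```python
import math

def quasi_static_waypoints(start, goal, spacing=0.06):
    """Break a reach into static targets spaced roughly `spacing` metres apart
    along the straight line from start to goal (3-D points as tuples)."""
    dist = math.dist(start, goal)
    n = max(1, math.ceil(dist / spacing))  # number of segments, last point = goal
    return [tuple(s + (g - s) * i / n for s, g in zip(start, goal))
            for i in range(1, n + 1)]
```

The controller would then hold each waypoint for the reported 1.3 s dwell time before switching to the next target position.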
12
Zhang P, Xiong L, Yu Z, Fang P, Yan S, Yao J, Zhou Y. Reinforcement Learning-Based End-to-End Parking for Automatic Parking System. SENSORS 2019; 19:3996. [PMID: 31527481 PMCID: PMC6766814 DOI: 10.3390/s19183996]
Abstract
In the existing mainstream automatic parking system (APS), a parking path is first planned based on the parking slot detected by the sensors, and a path tracking module then guides the vehicle along the planned path. However, since vehicle dynamics are nonlinear, path tracking errors inevitably occur, leading to an inclined or offset final parking pose. Accordingly, in this paper, a reinforcement learning-based end-to-end parking algorithm is proposed to achieve automatic parking. The vehicle continuously learns and accumulates experience from numerous parking attempts and thereby learns the optimal steering wheel angle command for different parking slots. With this end-to-end approach, errors caused by path tracking can be avoided. Moreover, to ensure that the parking slot can be detected continuously during learning, a parking slot tracking algorithm is proposed that combines vision and vehicle chassis information. Furthermore, because the learning network's output is hard to converge and easily falls into local optima during parking, several reinforcement learning training methods tailored to parking conditions are developed. Lastly, real-vehicle tests show that the proposed method achieves a better parking attitude than the path planning and path tracking-based method.
Affiliation(s)
- Peizhi Zhang, School of Automotive Studies, Tongji University, Shanghai 201804, China
- Lu Xiong, School of Automotive Studies, Tongji University, Shanghai 201804, China
- Zhuoping Yu, School of Automotive Studies, Tongji University, Shanghai 201804, China
- Peiyuan Fang, School of Automotive Studies, Tongji University, Shanghai 201804, China
- Senwei Yan, SAIC Motor Corporation Limited, Shanghai 201800, China
- Jie Yao, SAIC Motor Corporation Limited, Shanghai 201800, China
- Yi Zhou, SAIC Motor Corporation Limited, Shanghai 201800, China
13
Bao X, Mao ZH, Munro P, Sun Z, Sharma N. Sub-optimally Solving Actuator Redundancy in a Hybrid Neuroprosthetic System with a Multi-layer Neural Network Structure. INTERNATIONAL JOURNAL OF INTELLIGENT ROBOTICS AND APPLICATIONS 2019; 3:298-313. [PMID: 33283042 DOI: 10.1007/s41315-019-00100-8]
Abstract
Functional electrical stimulation (FES) has recently been proposed as a supplementary torque assist in lower-limb powered exoskeletons for persons with paraplegia. In the combined system, also known as a hybrid neuroprosthesis, both FES-assist and the exoskeleton generate lower-limb torques to achieve standing and walking functions. Due to this actuator redundancy, we are motivated to optimally allocate FES-assist and exoskeleton torque based on a performance index that penalizes FES overuse to minimize muscle fatigue while also minimizing regulation or tracking errors. Traditional optimal control approaches need a system model to optimize; however, it is often difficult to formulate a musculoskeletal model that accurately predicts muscle responses to FES. In this paper, we use a novel identification and control structure that contains a recurrent neural network (RNN) and several feedforward neural networks (FNNs). The RNN is trained by supervised learning to identify the system dynamics, while the FNNs are trained by a reinforcement learning method to provide sub-optimal control actions. The output layer of each FNN has its own activation functions, so that the asymmetric constraint on FES and the symmetric constraint on the exoskeleton motor control input can be realized. This new structure is experimentally validated on a seated human participant using a single-joint hybrid neuroprosthesis.
Affiliation(s)
- Xuefeng Bao, Department of Mechanical Engineering and Materials Science, University of Pittsburgh, Pittsburgh, PA 15261, USA
- Zhi-Hong Mao, Department of Electrical and Computer Engineering and Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15261, USA
- Paul Munro, Department of Electrical and Computer Engineering and Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15261, USA
- Ziyue Sun, Department of Mechanical Engineering and Materials Science, University of Pittsburgh, Pittsburgh, PA 15261, USA
- Nitin Sharma, Department of Mechanical Engineering and Materials Science and Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15261, USA
14
Kim H, Ghergherehchi M, Shin SW, Lee J, Ha D, Namgoong H, Chai JS. The automatic frequency control based on artificial intelligence for compact particle accelerator. THE REVIEW OF SCIENTIFIC INSTRUMENTS 2019; 90:074707. [PMID: 31370502 DOI: 10.1063/1.5086866]
Abstract
In this work, an advantage actor-critic (A2C) based intelligent automatic frequency control (AFC) system was developed for an X-band linear accelerator (LINAC). A2C is a type of reinforcement learning that indicates how software agents should perform actions in an environment. In this paper, the A2C-based AFC algorithm and its environment design, simulation results, and controller hardware and software processes are described. The objective of our design is to match the LINAC and magnetron frequencies, implemented via reward shaping based on comparison with the reflected power at adjacent times. The simulation with the A2C algorithm was run in two modes, periodic and White Gaussian Noise (WGN) waves, to analyze temperature effects and random disturbances, respectively. To create artificial disturbance in the experiment, the magnetron shaft was shifted randomly every 0.5 s using WGN with an 18° step-motor angle. The standard deviation of the reflected power was 5.63 kW, and the average power was 130.9 kW. To obtain maximum reward at the beginning of A2C training, the adjacent frequency is needed. The measured average reflected power and standard deviation were 122.8 kW and 1.75 kW, respectively, after 2000 iterations. The results show that the reflected power and its standard deviation with the A2C-based AFC were lower than in open-loop operation with artificial disturbance. The RF station of a medical X-band LINAC was used as the test bench, and the performance was confirmed by an experiment conducted at Sungkyunkwan University in Korea.
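A2C updates the actor using advantages, i.e. n-step returns minus the critic's value estimates, while the critic regresses toward those returns. A generic sketch of the target computation, not tied to the AFC environment:

```python
def a2c_targets(rewards, values, bootstrap, gamma=0.99):
    """Compute discounted n-step returns and advantages for an A2C update.
    rewards[i], values[i] belong to step i; bootstrap is the critic's V(s_T)
    for the state after the last step."""
    returns, advantages = [], []
    g = bootstrap
    for r, v in zip(reversed(rewards), reversed(values)):
        g = r + gamma * g           # discounted return, built backwards
        returns.append(g)
        advantages.append(g - v)    # advantage = return - critic baseline
    returns.reverse()
    advantages.reverse()
    return returns, advantages
```

In an AFC setting the per-step reward would be shaped from the reflected power (lower reflection, higher reward), and the advantages would weight the policy-gradient step on the tuning action.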
Affiliation(s)
- Huisu Kim, Department of Energy Science, Sungkyunkwan University, Suwon 16419, South Korea
- Mitra Ghergherehchi, College of Information and Communication Engineering, Sungkyunkwan University, Suwon 16419, South Korea
- Seung-Wook Shin, College of Information and Communication Engineering, Sungkyunkwan University, Suwon 16419, South Korea
- Jongchul Lee, Department of Energy Science, Sungkyunkwan University, Suwon 16419, South Korea
- Donghyup Ha, College of Information and Communication Engineering, Sungkyunkwan University, Suwon 16419, South Korea
- Ho Namgoong, College of Information and Communication Engineering, Sungkyunkwan University, Suwon 16419, South Korea
- Jong Seo Chai, College of Information and Communication Engineering, Sungkyunkwan University, Suwon 16419, South Korea
15
Combined Sensing, Cognition, Learning, and Control for Developing Future Neuro-Robotics Systems: A Survey. IEEE Trans Cogn Dev Syst 2019. [DOI: 10.1109/tcds.2019.2897618]
16
Sharif Razavian R, Ghannadi B, McPhee J. A Synergy-Based Motor Control Framework for the Fast Feedback Control of Musculoskeletal Systems. J Biomech Eng 2019; 141:2718207. [PMID: 30516245 DOI: 10.1115/1.4042185]
Abstract
This paper presents a computational framework for the fast feedback control of musculoskeletal systems using muscle synergies. The proposed motor control framework has a hierarchical structure. A feedback controller at the higher level of the hierarchy handles trajectory planning and error compensation in the task space. This high-level task space controller deals only with the task-related kinematic variables, and thus is computationally efficient. The output of the task space controller is a force vector in the task space, which is fed to the low-level controller to be translated into muscle activity commands. Muscle synergies are employed to make this force-to-activation (F2A) mapping computationally efficient. The explicit relationship between the muscle synergies and task space forces allows for the fast estimation of muscle activations that produce the reference force. The synergy-enabled F2A mapping replaces a computationally heavy nonlinear optimization process with a vector decomposition problem that is solvable in real time. The estimation performance of the F2A mapping is evaluated by comparing the F2A-estimated muscle activities against measured electromyography (EMG) data. The results show that the F2A algorithm can estimate the muscle activations using only the task-related kinematics/dynamics information with ∼70% accuracy. An example predictive simulation is also presented, and the results show that this feedback motor control framework can control arbitrary movements of a three-dimensional (3D) musculoskeletal arm model quickly and near optimally. It is two orders of magnitude faster than the optimal controller, with only a 12% increase in muscle activities compared to the optimal. The developed motor control model can be used for real-time near-optimal predictive control of musculoskeletal system dynamics.
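The F2A idea replaces a per-step optimization with a vector decomposition: express the desired task-space force in the basis of per-synergy force vectors, then mix the corresponding muscle activation patterns with the same coefficients. A two-synergy planar sketch with a closed-form 2-by-2 solve; the synergy data in the usage below are made up for illustration:

```python
def f2a(force, synergy_forces, synergy_activations):
    """Force-to-activation mapping via synergies: decompose the desired 2-D
    task-space force onto two synergy force vectors (Cramer's rule), clamp the
    coefficients to be nonnegative, and mix the synergy activation patterns."""
    (f1x, f1y), (f2x, f2y) = synergy_forces
    fx, fy = force
    det = f1x * f2y - f1y * f2x  # assumes the synergy force vectors are independent
    c1 = (fx * f2y - fy * f2x) / det
    c2 = (f1x * fy - f1y * fx) / det
    coeffs = (max(c1, 0.0), max(c2, 0.0))  # muscle activations cannot be negative
    n_muscles = len(synergy_activations[0])
    return [coeffs[0] * synergy_activations[0][m] + coeffs[1] * synergy_activations[1][m]
            for m in range(n_muscles)]
```

Because the decomposition is algebraic rather than iterative, it runs in constant time per control step, which is the source of the reported speedup over the optimization-based controller.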
Affiliation(s)
- Reza Sharif Razavian, Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Borna Ghannadi, Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- John McPhee, Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
17
Wolf DN, Schearer EM. Holding Static Arm Configurations With Functional Electrical Stimulation: A Case Study. IEEE Trans Neural Syst Rehabil Eng 2018; 26:2044-2052. [PMID: 30130233 DOI: 10.1109/tnsre.2018.2866226]
Abstract
Functional electrical stimulation (FES) is a promising solution for restoring functional motion to individuals with paralysis, but the potential for achieving any desired full-arm reaching motion has not been realized. We present a combined feedforward-feedback controller capable of automatically calculating and applying the necessary muscle stimulations to hold the wrist of an individual with high tetraplegia in a desired static position. We used the controller to hold a complete arm configuration to maintain a series of static wrist positions. The average distance to the target wrist position, or accuracy, was 2.9 cm. The precision is defined as the radius of the 95% confidence ellipsoid for the final positions of a set of trials with the same muscle stimulations and starting position. The average precision was 3.7 cm. The control architecture used in this study to hold static positions has the potential to control arbitrary reaching motions.
18
Disease Diagnosis in Smart Healthcare: Innovation, Technologies and Applications. SUSTAINABILITY 2017. [DOI: 10.3390/su9122309]