1
Wang C, Pei Z, Fan Y, Qiu S, Tang Z. Review of Vision-Based Environmental Perception for Lower-Limb Exoskeleton Robots. Biomimetics (Basel) 2024; 9:254. PMID: 38667265; PMCID: PMC11048416; DOI: 10.3390/biomimetics9040254.
Abstract
The exoskeleton robot is a wearable electromechanical device inspired by animal exoskeletons. It combines technologies such as sensing, control, information processing, and mobile computing to enhance human physical abilities and assist in rehabilitation training. In recent years, with the development of visual sensors and deep learning, the environmental perception of exoskeletons has drawn widespread attention in the industry. Environmental perception can provide exoskeletons with a certain level of autonomous perception and decision-making ability, enhance their stability and safety in complex environments, and improve the human-machine-environment interaction loop. This paper reviews environmental perception and its related technologies for lower-limb exoskeleton robots. First, we briefly introduce the visual sensors and control system. Second, we analyze and summarize the key technologies of environmental perception, including related datasets, detection of critical terrains, and environment-oriented adaptive gait planning. Finally, we analyze the factors currently limiting the development of exoskeleton environmental perception and propose future directions.
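For orientation, the following minimal Python sketch illustrates the kind of perception-to-planning loop such reviews describe: a terrain classifier applied to wearable-camera frames whose output selects a high-level gait mode. It is an illustrative assumption rather than any of the reviewed systems; the model, terrain classes, and mode mapping are hypothetical placeholders.

```python
# Illustrative sketch (not from the reviewed paper): mapping terrain classes
# predicted from a wearable camera to high-level gait modes of an exoskeleton.
# The classifier, class list, and mode mapping are hypothetical placeholders.
import torch
from torchvision import transforms
from PIL import Image

TERRAIN_CLASSES = ["level_ground", "stairs_up", "stairs_down", "ramp_up", "ramp_down"]
GAIT_MODE = {
    "level_ground": "walk",
    "stairs_up": "stair_ascent",
    "stairs_down": "stair_descent",
    "ramp_up": "ramp_ascent",
    "ramp_down": "ramp_descent",
}

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def select_gait_mode(model: torch.nn.Module, frame: Image.Image) -> str:
    """Classify the upcoming terrain in one camera frame and return a gait mode."""
    x = preprocess(frame).unsqueeze(0)          # (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(x)                        # (1, num_classes)
    terrain = TERRAIN_CLASSES[logits.argmax(dim=1).item()]
    return GAIT_MODE[terrain]
```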
Affiliation(s)
- Zhiyong Tang
- School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China; (C.W.); (Z.P.); (Y.F.); (S.Q.)
2
Oroceo PP, Kim JI, Caliwag EMF, Kim SH, Lim W. Optimizing Face Recognition Inference with a Collaborative Edge-Cloud Network. Sensors (Basel) 2022; 22:8371. PMID: 36366070; PMCID: PMC9658311; DOI: 10.3390/s22218371.
Abstract
The rapid development of deep-learning-based edge artificial intelligence applications and their data-driven nature has led to several research issues. One key issue is edge-cloud collaboration to optimize such applications by increasing inference speed and reducing latency. Some researchers have focused on simulations verifying that a collaborative edge-cloud network would be optimal, without considering real-world implementation. Most researchers focus on the accuracy of the detection and recognition algorithms rather than the inference speed in actual deployment. Others have implemented such networks with minimal load on the cloud node, thus defeating the purpose of edge-cloud collaboration. In this study, we propose a method to increase inference speed and reduce latency with a real-time face recognition system in which all face detection tasks are handled on the edge device, which forwards cropped face images that are significantly smaller than the whole video frame, while face recognition tasks are processed in the cloud. The two devices communicate over a wireless TCP/IP connection. Our experiments use a Jetson Nano GPU board as the edge device and a PC as the cloud. The framework is evaluated in terms of its frames-per-second (FPS) rate. We further compare it against two scenarios in which both face detection and recognition are deployed on (1) the edge and (2) the cloud. The experimental results show that combining the edge and cloud is an effective way to accelerate inference: the maximum FPS achieved by the edge-cloud deployment was 1.91× that of the cloud-only deployment and 8.5× that of the edge-only deployment.
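The abstract describes the split clearly enough to sketch the edge side: detect faces locally, crop them, and forward only the crops to a cloud recognizer over TCP. The snippet below is an illustrative sketch, not the authors' released code; the cloud address and the simple length-prefixed framing are assumptions.

```python
# Illustrative sketch (not the authors' code): the edge side of an edge-cloud
# split where face detection runs locally and only cropped faces are streamed
# to a cloud recognizer over TCP. Host/port and framing are assumptions.
import socket
import struct
import cv2

CLOUD_ADDR = ("192.168.0.10", 5000)   # hypothetical cloud endpoint

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def stream_faces(camera_index: int = 0) -> None:
    cap = cv2.VideoCapture(camera_index)
    with socket.create_connection(CLOUD_ADDR) as sock:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
                # Send only the cropped face, which is far smaller than the frame.
                ok_jpg, jpg = cv2.imencode(".jpg", frame[y:y + h, x:x + w])
                if ok_jpg:
                    payload = jpg.tobytes()
                    sock.sendall(struct.pack(">I", len(payload)) + payload)
    cap.release()
```

A matching cloud process would read each length prefix, decode the JPEG, and run its recognition model on the crop.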
Affiliation(s)
- Paul P. Oroceo
- Department of Aeronautics, Mechanical and Electronic Convergence Engineering, Kumoh National Institute of Technology, Gumi 39177, Korea
- Jeong-In Kim
- Department of Aeronautics, Mechanical and Electronic Convergence Engineering, Kumoh National Institute of Technology, Gumi 39177, Korea
- Ej Miguel Francisco Caliwag
- Department of Aeronautics, Mechanical and Electronic Convergence Engineering, Kumoh National Institute of Technology, Gumi 39177, Korea
- Sang-Ho Kim
- Department of Industrial Engineering, Kumoh National Institute of Technology, Gumi 39177, Korea
- Wansu Lim
- Department of Aeronautics, Mechanical and Electronic Convergence Engineering, Kumoh National Institute of Technology, Gumi 39177, Korea
3
Semwal A, Lee MMJ, Sanchez D, Teo SL, Wang B, Mohan RE. Object-of-Interest Perception in a Reconfigurable Rolling-Crawling Robot. Sensors (Basel) 2022; 22:5214. PMID: 35890893; PMCID: PMC9315741; DOI: 10.3390/s22145214.
Abstract
Cebrennus rechenbergi, a member of the huntsman spider family, has inspired researchers to adopt different locomotion modes in reconfigurable robot development. Object-of-interest perception is crucial for such a robot, providing fundamental information about the traversed pathways and guiding its locomotion mode transformation. We therefore present object-of-interest perception in a reconfigurable rolling-crawling robot and use it to identify appropriate locomotion modes. We demonstrate it in Scorpio, our in-house developed robot with two locomotion modes: rolling and crawling. We train the locomotion mode recognition framework, Pyramid Scene Parsing Network (PSPNet), with a self-collected dataset composed of two categories of paths: unobstructed paths (e.g., floor) for rolling and obstructed paths (e.g., with people, railings, stairs, static objects, and walls) for crawling. The efficiency of the proposed framework has been validated with evaluation metrics in offline and real-time field trial tests. The experimental results show that the trained model achieves mIoU scores of 72.28 and 70.63 in offline and online testing, respectively, for both environments. The proposed framework is compared with semantic segmentation frameworks (HRNet and DeepLabv3), which it outperforms in terms of mIoU and speed. Furthermore, the experimental results reveal that the robot's maneuverability is stable and that the proposed framework can successfully determine the appropriate locomotion modes with enhanced accuracy on complex pathways.
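As a rough illustration of how a segmentation output can drive the rolling/crawling decision the abstract describes, the sketch below thresholds the fraction of obstacle pixels in a predicted class map. It is not the paper's code; the segmentation model, class indices, and threshold are hypothetical.

```python
# Illustrative sketch (not the paper's code): deciding between rolling and
# crawling from a semantic segmentation of the path ahead. The model, class
# indices, and obstacle threshold are hypothetical placeholders.
import torch

OBSTACLE_CLASSES = {1, 2, 3, 4, 5}   # e.g., person, railing, stairs, static object, wall
OBSTACLE_RATIO_THRESHOLD = 0.05      # assumed fraction of obstructed pixels

def select_locomotion_mode(seg_model: torch.nn.Module, image: torch.Tensor) -> str:
    """Return 'rolling' for clear paths and 'crawling' for obstructed ones.

    `image` is a normalized (1, 3, H, W) tensor; `seg_model` outputs per-pixel
    class logits of shape (1, num_classes, H, W).
    """
    with torch.no_grad():
        logits = seg_model(image)
    pred = logits.argmax(dim=1)                       # (1, H, W) class map
    obstacle_mask = torch.zeros_like(pred, dtype=torch.bool)
    for cls in OBSTACLE_CLASSES:
        obstacle_mask |= pred == cls
    obstacle_ratio = obstacle_mask.float().mean().item()
    return "crawling" if obstacle_ratio > OBSTACLE_RATIO_THRESHOLD else "rolling"
```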
Affiliation(s)
- Archana Semwal
- Engineering Product Development, Singapore University of Technology and Design, Singapore 487372, Singapore; (A.S.); (M.M.J.L.); (D.S.); (S.L.T.); (R.E.M.)
- Melvin Ming Jun Lee
- Engineering Product Development, Singapore University of Technology and Design, Singapore 487372, Singapore; (A.S.); (M.M.J.L.); (D.S.); (S.L.T.); (R.E.M.)
- Daniela Sanchez
- Engineering Product Development, Singapore University of Technology and Design, Singapore 487372, Singapore; (A.S.); (M.M.J.L.); (D.S.); (S.L.T.); (R.E.M.)
- Sui Leng Teo
- Engineering Product Development, Singapore University of Technology and Design, Singapore 487372, Singapore; (A.S.); (M.M.J.L.); (D.S.); (S.L.T.); (R.E.M.)
- Bo Wang
- Information Systems Technology and Design, Singapore University of Technology and Design, Singapore 487372, Singapore
- Rajesh Elara Mohan
- Engineering Product Development, Singapore University of Technology and Design, Singapore 487372, Singapore; (A.S.); (M.M.J.L.); (D.S.); (S.L.T.); (R.E.M.)
4
Dhou S, Alnabulsi A, Al-Ali AR, Arshi M, Darwish F, Almaazmi S, Alameeri R. An IoT Machine Learning-Based Mobile Sensors Unit for Visually Impaired People. Sensors (Basel) 2022; 22:5202. PMID: 35890881; PMCID: PMC9316426; DOI: 10.3390/s22145202.
Abstract
Visually impaired people face many challenges that limit their ability to perform daily tasks and interact with the surrounding world. Navigating from place to place is one of the biggest challenges facing visually impaired people, especially those with complete loss of vision. As the Internet of Things (IoT) concept begins to play a major role in smart city applications, visually impaired people can be among its beneficiaries. In this paper, we propose a smart IoT-based mobile sensors unit that can be attached to an off-the-shelf cane, hereafter a smart cane, to facilitate independent movement for visually impaired people. The proposed mobile sensors unit consists of a six-axis accelerometer/gyroscope, ultrasonic sensors, a GPS sensor, cameras, a digital motion processor, and a credit-card-sized single-board microcomputer. The unit collects information about the cane user and the surrounding obstacles while on the move. An embedded machine learning algorithm, stored in the microcomputer's memory, identifies the detected obstacles and alerts the user to their nature. In addition, in emergencies such as a cane fall, the unit alerts the cane user and their guardian. Moreover, a mobile application allows the guardian to track the cane user via Google Maps on a mobile handset to ensure safety. To validate the system, a prototype was developed and tested.
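One concrete piece of such a unit is fall detection from the six-axis accelerometer. The sketch below shows a minimal free-fall-then-impact heuristic; it is an assumption for illustration, not the paper's embedded algorithm, and the thresholds and alert hook are placeholders.

```python
# Illustrative sketch (not the paper's implementation): detecting a cane fall
# from the magnitude of accelerometer readings so the user and a guardian can
# be alerted. Thresholds and the alert hook are hypothetical placeholders.
import math
from typing import Callable, Tuple

FREE_FALL_G = 0.4    # assumed: magnitude well below 1 g while the cane is falling
IMPACT_G = 2.5       # assumed: a spike above this indicates the cane hit the ground

def make_fall_detector(alert: Callable[[str], None]) -> Callable[[Tuple[float, float, float]], None]:
    """Return a callback that consumes (ax, ay, az) samples in g and raises alerts."""
    state = {"in_free_fall": False}

    def on_sample(accel: Tuple[float, float, float]) -> None:
        ax, ay, az = accel
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude < FREE_FALL_G:
            state["in_free_fall"] = True          # cane appears to be falling
        elif state["in_free_fall"] and magnitude > IMPACT_G:
            state["in_free_fall"] = False
            alert("Cane fall detected: notifying user and guardian.")

    return on_sample
```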
5
Laschowski B, McNally W, Wong A, McPhee J. Computer Vision and Deep Learning for Environment-Adaptive Control of Robotic Lower-Limb Exoskeletons. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:4631-4635. PMID: 34892246; DOI: 10.1109/embc46164.2021.9630064.
Abstract
Robotic exoskeletons require human control and decision making to switch between different locomotion modes, which can be inconvenient and cognitively demanding. To support the development of automated locomotion mode recognition systems (i.e., intelligent high-level controllers), we designed an environment recognition system using computer vision and deep learning. Here we first reviewed the development of the "ExoNet" database - the largest and most diverse open-source dataset of wearable camera images of indoor and outdoor real-world walking environments, annotated using a hierarchical labelling architecture. We then trained and tested the EfficientNetB0 convolutional neural network, which was optimized for efficiency using neural architecture search, to forward predict the walking environments. Our environment recognition system achieved ~73% image classification accuracy. These results provide the inaugural benchmark performance on the ExoNet database. Future research should evaluate and compare different convolutional neural networks to develop an accurate and real-time environment-adaptive locomotion mode recognition system for robotic exoskeleton control.
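For readers who want a starting point, the sketch below shows one common way to adapt an ImageNet-pretrained EfficientNet-B0 to an environment classification task of this kind using torchvision. It is an illustrative assumption, not the authors' training code; the class count is a placeholder, and the original work used its own pipeline on ExoNet.

```python
# Illustrative sketch (not the authors' training code): adapting an
# EfficientNet-B0 backbone to classify walking environments. The number of
# environment classes is a hypothetical placeholder.
import torch
import torch.nn as nn
from torchvision import models

NUM_ENVIRONMENT_CLASSES = 12   # placeholder count of environment labels

def build_environment_classifier() -> nn.Module:
    """ImageNet-pretrained EfficientNet-B0 with a new classification head."""
    model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
    in_features = model.classifier[1].in_features
    model.classifier[1] = nn.Linear(in_features, NUM_ENVIRONMENT_CLASSES)
    return model

if __name__ == "__main__":
    net = build_environment_classifier()
    dummy_batch = torch.randn(2, 3, 224, 224)   # two RGB frames
    print(net(dummy_batch).shape)                # torch.Size([2, 12])
```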
6
Laschowski B, McNally W, Wong A, McPhee J. ExoNet Database: Wearable Camera Images of Human Locomotion Environments. Front Robot AI 2021; 7:562061. PMID: 33501327; PMCID: PMC7805730; DOI: 10.3389/frobt.2020.562061.
Affiliation(s)
- Brock Laschowski
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada; Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- William McNally
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada; Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- Alexander Wong
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada; Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- John McPhee
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada; Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada