1. Sopidis G, Haslgrübler M, Azadi B, Guiza O, Schobesberger M, Anzengruber-Tanase B, Ferscha A. System Design for Sensing in Manufacturing to Apply AI through Hierarchical Abstraction Levels. Sensors (Basel) 2024; 24:4508. [PMID: 39065907; PMCID: PMC11280824; DOI: 10.3390/s24144508]
Abstract
Activity recognition combined with artificial intelligence is a vital area of research, ranging across diverse domains, from sports and healthcare to smart homes. In the industrial domain, and on manual assembly lines in particular, the emphasis shifts to human-machine interaction and thus to human activity recognition (HAR) within complex operational environments. Developing models and methods that can reliably and efficiently identify human activities, traditionally categorized as either simple or complex, remains a key challenge in the field. A key limitation of existing methods is their inability to consider the contextual complexities associated with the performed activities. Our approach to address this challenge is to create different levels of activity abstraction, which allow for a more nuanced comprehension of activities and define their underlying patterns. Specifically, we propose a new hierarchical taxonomy for human activity abstraction levels, based on the context of the performed activities, that can be used in HAR. The proposed hierarchy consists of five levels, namely atomic, micro, meso, macro, and mega. We compare this taxonomy with approaches that divide activities into simple and complex categories, as well as with other similar classification schemes, and provide real-world examples from different applications to demonstrate its efficacy. With regard to advanced technologies such as artificial intelligence, our study aims to guide and optimize industrial assembly procedures, particularly in uncontrolled non-laboratory environments, by shaping workflows to enable structured data analysis and by highlighting correlations across the levels throughout the assembly progression.
In addition, it establishes effective communication and shared understanding between researchers and industry professionals, while also providing them with the essential resources to facilitate the development of systems, sensors, and algorithms for custom industrial use cases that adapt to the level of abstraction.
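The five levels named in this abstract form an ordered scale of granularity; a minimal sketch follows, where the ordering from finest to coarsest is inferred from the abstract and the example activities in the comments are our own illustrative assumptions, not taken from the paper:

```python
from enum import IntEnum

class AbstractionLevel(IntEnum):
    """Five-level activity hierarchy proposed by Sopidis et al.;
    ordering from finest to coarsest is inferred from the abstract."""
    ATOMIC = 1  # e.g. a single wrist twist (illustrative example)
    MICRO = 2   # e.g. one screwdriver turn
    MESO = 3    # e.g. fastening one screw
    MACRO = 4   # e.g. mounting a component
    MEGA = 5    # e.g. completing a whole assembly

def is_finer(a: AbstractionLevel, b: AbstractionLevel) -> bool:
    """True if level `a` is a finer-grained abstraction than level `b`."""
    return a < b
```

Such an ordered representation makes it straightforward to relate labels annotated at one level to analyses performed at another.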
Affiliation(s)
- Georgios Sopidis: Pro2Future GmbH, Altenberger Strasse 69, 4040 Linz, Austria; Institute of Pervasive Computing, Johannes Kepler University, Altenberger Straße 69, 4040 Linz, Austria
- Michael Haslgrübler: Pro2Future GmbH, Altenberger Strasse 69, 4040 Linz, Austria
- Behrooz Azadi: Pro2Future GmbH, Altenberger Strasse 69, 4040 Linz, Austria
- Ouijdane Guiza: Pro2Future GmbH, Altenberger Strasse 69, 4040 Linz, Austria
- Martin Schobesberger: Institute of Pervasive Computing, Johannes Kepler University, Altenberger Straße 69, 4040 Linz, Austria
- Alois Ferscha: Institute of Pervasive Computing, Johannes Kepler University, Altenberger Straße 69, 4040 Linz, Austria
2. Khanna P, Ramakrishnan IV, Jain S, Bi X, Balasubramanian A. Hand Gesture Recognition for Blind Users by Tracking 3D Gesture Trajectory. Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI 2024); 2024:405. [PMID: 39781365; PMCID: PMC11707651; DOI: 10.1145/3613904.3642602]
Abstract
Hand gestures provide an alternate interaction modality for blind users and can be supported using commodity smartwatches without requiring specialized sensors. The enabling technology is an accurate gesture recognition algorithm, but almost all algorithms are designed for sighted users. Our study shows that blind users' gestures are considerably different from sighted users', rendering current recognition algorithms unsuitable. Blind users' gestures have high inter-user variance, making it difficult to learn gesture patterns without large-scale training data. Instead, we design a gesture recognition algorithm that works on a 3D representation of the gesture trajectory, capturing motion in free space. Our insight is to extract a micro-movement in the gesture that is user-invariant and use this micro-movement for gesture classification. To this end, we develop an ensemble classifier that combines image classification with geometric properties of the gesture. Our evaluation demonstrates 92% classification accuracy, surpassing the next best state of the art, which reaches 82%.
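The final fusion step of an ensemble like the one described, blending an image-based classifier's output with a geometry-based one, can be sketched as a weighted combination of per-class probabilities; the fixed-weight scheme below is an illustrative assumption, not the paper's actual fusion rule:

```python
import numpy as np

def ensemble_predict(p_image, p_geometry, w=0.5):
    """Late fusion: blend per-class probabilities from an image-based
    classifier and a geometry-based classifier, then take the argmax.
    The fixed weight w is an assumption for illustration."""
    p = w * np.asarray(p_image) + (1.0 - w) * np.asarray(p_geometry)
    return int(np.argmax(p))
```

In practice the blend weight would be validated on held-out gestures rather than fixed a priori.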
3. Zhai Y, Wu S, Hu Q, Zhou W, Shen Y, Yan X, Ma Y. Influence of grasping postures on skin deformation of hand. Sci Rep 2023; 13:21416. [PMID: 38049461; PMCID: PMC10695991; DOI: 10.1038/s41598-023-48658-5]
Abstract
To investigate the influence of different grasping postures on the skin deformation of the hand, a handheld 3D EVA scanner was used to obtain 3D models of 111 women in five postures: a straight posture and grasping cylinders of various diameters (4/6/8/10 cm). The skin relaxation strain ratio ([Formula: see text]) and the surface area skin relaxation strain ratio ([Formula: see text]) were used as measures of skin deformation between two landmarks and across multiple landmarks, respectively. The effects of grasping posture on skin deformation in different directions were analyzed. The results revealed significant variations in skin deformation among the grasping postures, except for the width of the middle finger metacarpal and the length of the middle finger's proximal phalanx. The [Formula: see text] increased with decreasing grasping-object diameter, ranging from 5 to 18% on the coronal axis and from 4 to 20% on the vertical axis. The overall variation of [Formula: see text] ranged from 5 to 37.5%, following the same trend as [Formula: see text] except for the surface area of the tiger's mouth (the thumb-index web area), which exhibited a maximum, statistically significant difference of 10.9%. These findings have potential applications in improving the design of hand equipment and in understanding hand movement characteristics.
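The strain ratios themselves are elided in this abstract ("[Formula: see text]"). A plausible engineering-strain reading, assumed here purely for illustration, measures the percent elongation between two landmarks relative to the straight posture:

```python
def skin_relaxation_strain_ratio(length_straight: float, length_grasp: float) -> float:
    """Percent elongation of the skin between two landmarks in a grasping
    posture, relative to the straight-hand baseline. This is a generic
    engineering-strain definition assumed for illustration; the paper's
    exact formula is not reproduced in the abstract."""
    if length_straight <= 0:
        raise ValueError("baseline length must be positive")
    return (length_grasp - length_straight) / length_straight * 100.0
```

Under this reading, a landmark pair measuring 10.0 cm in the straight posture and 11.8 cm while grasping would show an 18% strain ratio, consistent with the 5 to 20% ranges the abstract reports.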
Affiliation(s)
- Yanru Zhai: School of Textile and Clothing, Nantong University, Nantong, 226019, China
- Shaoguo Wu: School of Textile and Clothing, Nantong University, Nantong, 226019, China
- Qinyue Hu: School of Textile and Clothing, Nantong University, Nantong, 226019, China
- Wenjing Zhou: School of Textile and Clothing, Nantong University, Nantong, 226019, China
- Yue Shen: School of Textile and Clothing, Nantong University, Nantong, 226019, China
- Xuefeng Yan: School of Textile and Clothing, Nantong University, Nantong, 226019, China
- Yan Ma: School of Textile and Clothing, Nantong University, Nantong, 226019, China
4. Wang T, Zhao Y, Wang Q. A Flexible Iontronic Capacitive Sensing Array for Hand Gesture Recognition Using Deep Convolutional Neural Networks. Soft Robot 2022. [DOI: 10.1089/soro.2021.0209]
Affiliation(s)
- Tiantong Wang: Department of Advanced Manufacturing and Robotics, College of Engineering, Peking University, Beijing, China; Beijing Engineering Research Center of Intelligent Rehabilitation Engineering, Beijing, China
- Yunbiao Zhao: Department of Advanced Manufacturing and Robotics, College of Engineering, Peking University, Beijing, China; Beijing Engineering Research Center of Intelligent Rehabilitation Engineering, Beijing, China
- Qining Wang: Department of Advanced Manufacturing and Robotics, College of Engineering, Peking University, Beijing, China; Beijing Engineering Research Center of Intelligent Rehabilitation Engineering, Beijing, China; Institute for Artificial Intelligence, Peking University, Beijing, China; Beijing Institute for General Artificial Intelligence, Beijing, China
5.
Abstract
Inertial-sensor-based attitude estimation is a crucial technology in various applications, from human motion tracking to autonomous aerial and ground vehicles. Application scenarios differ in characteristics of the performed motion, presence of disturbances, and environmental conditions. Since state-of-the-art attitude estimators do not generalize well over these characteristics, their parameters must be tuned for the individual motion characteristics and circumstances. We propose RIANN, a ready-to-use, neural network-based, parameter-free, real-time-capable inertial attitude estimator, which generalizes well across different motion dynamics, environments, and sampling rates, without the need for application-specific adaptations. We gather six publicly available datasets of which we exploit two datasets for the method development and the training, and we use four datasets for evaluation of the trained estimator in three different test scenarios with varying practical relevance. Results show that RIANN outperforms state-of-the-art attitude estimation filters in the sense that it generalizes much better across a variety of motions and conditions in different applications, with different sensor hardware and different sampling frequencies. This is true even if the filters are tuned on each individual test dataset, whereas RIANN was trained on completely separate data and has never seen any of these test datasets. RIANN can be applied directly without adaptations or training and is therefore expected to enable plug-and-play solutions in numerous applications, especially when accuracy is crucial but no ground-truth data is available for tuning or when motion and disturbance characteristics are uncertain. We made RIANN publicly available.
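For context, the classical baseline that tuned filters and learned estimators such as RIANN improve on is plain gyroscope integration, which drifts as any rate bias accumulates; a minimal single-axis sketch (not the RIANN architecture itself):

```python
def integrate_gyro(rates, dt, angle0=0.0):
    """Naive attitude estimate: integrate angular rate (deg/s) over time.
    Any constant rate bias accumulates linearly as drift, which is the
    failure mode that accelerometer fusion and learned estimators mitigate."""
    angles = []
    angle = angle0
    for r in rates:
        angle += r * dt
        angles.append(angle)
    return angles
```

With a residual bias of 0.5 deg/s, for example, this estimate drifts by 5 degrees over just ten seconds at 100 Hz, which is why parameter-free, generalizing estimators are attractive.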
6. Liu T, Wilczyńska D, Lipowski M, Zhao Z. Optimization of a Sports Activity Development Model Using Artificial Intelligence under New Curriculum Reform. Int J Environ Res Public Health 2021; 18:9049. [PMID: 34501638; PMCID: PMC8431570; DOI: 10.3390/ijerph18179049]
Abstract
The recent curriculum reform in China puts forward higher requirements for the development of physical education. In order to further improve students' physical quality and motor skills, the traditional model was improved to address the lack of accuracy in motion recognition and detection of physical condition so as to assist teachers to improve students' physical quality. First, the physical education teaching activities required by the new curriculum reform were studied with regard to the actual needs of China's current social, political, and economic development; next, the application of artificial intelligence technology to physical education teaching activities was proposed; and finally, deep learning technology was studied and a human movement recognition model based on a long short-term memory (LSTM) neural network was established to identify the movement state of students in physical education teaching activities. The designed model includes three components: data acquisition, data calculation, and data visualization. The functions of each layer were introduced; then, the intelligent wearable system was adopted to detect the status of students and a feedback system was established to assist teaching; and finally, the dataset was constructed to train and test the designed model. The experimental results demonstrate that the recognition accuracy and loss value of the training model meet the practical requirements; in the algorithm test, the motion recognition accuracy of the designed model for different subjects was greater than 97.5%. Compared with the traditional human motion recognition algorithm, the designed model had a better recognition effect. Hence, the designed model can meet the actual needs of physical education. This exploration provides a new perspective for promoting the intelligent development of physical education.
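The data-acquisition layer of a pipeline like this one typically segments the continuous wearable-sensor stream into fixed-length windows before the LSTM sees it; a minimal sketch, where the window and stride values are illustrative assumptions rather than the paper's settings:

```python
import numpy as np

def sliding_windows(signal: np.ndarray, window: int, stride: int) -> np.ndarray:
    """Segment a (T, channels) sensor stream into overlapping windows of
    shape (n_windows, window, channels), the input layout expected by
    sequence models such as an LSTM."""
    starts = range(0, signal.shape[0] - window + 1, stride)
    return np.stack([signal[s:s + window] for s in starts])
```

Each resulting window becomes one training or inference sample, with its activity label typically taken from the majority label inside the window.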
Affiliation(s)
- Taofeng Liu: School of Physical Education Institute (Main Campus), Zhengzhou University, No. 100 Science Avenue, Zhengzhou 450001, China; Department of Physical Education, Sangmyung University, Seoul 390-711, Korea
- Dominika Wilczyńska: Faculty of Physical Culture, Gdansk University of Physical Education and Sport, Kazimierza Górskiego 1, 80-336 Gdańsk, Poland
- Mariusz Lipowski: Faculty of Physical Culture, Gdansk University of Physical Education and Sport, Kazimierza Górskiego 1, 80-336 Gdańsk, Poland
- Zijian Zhao: School of Physical Education Institute (Main Campus), Zhengzhou University, No. 100 Science Avenue, Zhengzhou 450001, China
7. Hand Gesture Recognition on a Resource-Limited Interactive Wristband. Sensors (Basel) 2021; 21:5713. [PMID: 34502604; PMCID: PMC8434577; DOI: 10.3390/s21175713]
Abstract
Most reported hand gesture recognition algorithms require substantial computational resources, i.e., a fast MCU and significant memory, which makes them ill-suited to cost-sensitive consumer electronics. This paper proposes a hand gesture recognition algorithm running on an interactive wristband, with computational resource requirements as low as <5 KB of Flash and <1 KB of RAM. First, we calculated the three-axis linear acceleration by fusing accelerometer and gyroscope data with a complementary filter. Then, by recording the order in which acceleration vectors cross the axes in the world coordinate frame, we defined a new feature code named the axis-crossing code. Finally, we set templates for eight hand gestures to recognize new samples. We compared this algorithm's performance with the widely used dynamic time warping (DTW) algorithm and with recurrent neural networks (BiLSTM and GRU). The results show that the accuracies of the proposed algorithm and the RNNs are higher than DTW's, and that the time cost of the proposed algorithm is much lower than those of DTW and the RNNs. The average recognition accuracy is 99.8% on the collected dataset and 97.1% in the actual user-independent case. Overall, the proposed algorithm is suitable and competitive for consumer electronics. This work has entered volume production and has been granted a patent.
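The two preprocessing ideas named in this abstract, complementary-filter fusion and an axis-crossing feature code, can be sketched as follows; the blend factor and the dominant-axis simplification are our assumptions, since the paper's exact formulation is not reproduced here:

```python
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse gyroscope rate (deg/s) with accelerometer-derived tilt (deg):
    the gyro term tracks fast motion, the accel term corrects slow drift.
    alpha is an assumed blend factor, not the paper's coefficient."""
    angle = accel_angles[0]  # initialize from the accelerometer
    fused = []
    for rate, acc in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc
        fused.append(angle)
    return fused

def axis_crossing_code(accel_vectors):
    """Record the order in which the dominant world-frame acceleration axis
    changes: a simplified stand-in for the paper's axis-crossing code."""
    code = []
    for x, y, z in accel_vectors:
        axis = max(range(3), key=lambda i: abs((x, y, z)[i]))
        if not code or code[-1] != axis:
            code.append(axis)
    return code
```

Both routines use only a handful of scalar operations per sample, which is consistent with the sub-5-KB footprint the paper targets.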