2
Kim T, Shin Y, Kang K, Kim K, Kim G, Byeon Y, Kim H, Gao Y, Lee JR, Son G, Kim T, Jun Y, Kim J, Lee J, Um S, Kwon Y, Son BG, Cho M, Sang M, Shin J, Kim K, Suh J, Choi H, Hong S, Cheng H, Kang HG, Hwang D, Yu KJ. Ultrathin crystalline-silicon-based strain gauges with deep learning algorithms for silent speech interfaces. Nat Commun 2022;13:5815. [PMID: 36192403; PMCID: PMC9530138; DOI: 10.1038/s41467-022-33457-9]
Abstract
A wearable silent speech interface (SSI) is a promising platform that enables verbal communication without vocalization. The most widely studied methodology for SSIs is surface electromyography (sEMG). However, sEMG suffers from low scalability because of signal-quality issues, including a low signal-to-noise ratio and inter-electrode interference. Here, we present a novel SSI that combines crystalline-silicon-based strain sensors with a 3D convolutional deep learning algorithm. Two perpendicularly placed strain gauges with a minimized cell dimension (<0.1 mm2) effectively capture biaxial strain information with high reliability. We attached four strain sensors near the subject's mouth and collected strain data for an unprecedentedly large word set (100 words), which our SSI classifies at a high accuracy rate (87.53%). Several analysis methods were demonstrated to verify the system's reliability, and the performance was compared with that of an SSI using sEMG electrodes of the same dimensions, which exhibited a much lower accuracy rate (42.60%). Designing an efficient platform that enables verbal communication without vocalization remains a challenge. Here, the authors propose a silent speech interface that combines a deep learning algorithm with strain sensors attached near the subject's mouth, collecting data for 100 words and classifying them at a high accuracy rate.
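For context on the sensing principle behind this entry: a strain gauge reports strain through its fractional resistance change, dR/R0 = GF * strain, where GF is the gauge factor. The sketch below applies that standard relation; the gauge-factor value is a placeholder for illustration only, not a figure from the paper (crystalline-silicon gauges typically have far higher gauge factors than the GF of roughly 2 for metal-foil gauges).

```python
# Convert a strain-gauge resistance reading to strain.
# Standard relation: dR/R0 = GF * strain, so strain = (R - R0) / (R0 * GF).
# GF = 100.0 below is an illustrative placeholder, not a value from the paper.

def resistance_to_strain(r, r0, gauge_factor):
    """Return strain from measured resistance r and unstrained resistance r0."""
    return (r - r0) / (r0 * gauge_factor)

# Example: a 0.5% resistance increase with GF = 100 corresponds to strain 5e-5.
strain = resistance_to_strain(100.5, 100.0, 100.0)
```

A higher gauge factor means a larger resistance change per unit strain, which is why high-GF silicon gauges can be shrunk to the sub-0.1 mm2 cells described above while still producing a measurable signal.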
Affiliation(s)
- Taemin Kim
- Functional Bio-integrated Electronics and Energy Management Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Yejee Shin
- Medical Artificial Intelligence Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Kyowon Kang
- Functional Bio-integrated Electronics and Energy Management Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Kiho Kim
- Functional Bio-integrated Electronics and Energy Management Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Gwanho Kim
- Functional Bio-integrated Electronics and Energy Management Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Yunsu Byeon
- Medical Artificial Intelligence Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Hwayeon Kim
- Digital Signal Processing & Artificial Intelligence Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Yuyan Gao
- Department of Engineering Science and Mechanics, The Pennsylvania State University, University Park, PA, 16802, USA
- Jeong Ryong Lee
- Medical Artificial Intelligence Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Geonhui Son
- Medical Artificial Intelligence Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Taeseong Kim
- Medical Artificial Intelligence Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Yohan Jun
- Medical Artificial Intelligence Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Jihyun Kim
- Digital Signal Processing & Artificial Intelligence Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Jinyoung Lee
- Digital Signal Processing & Artificial Intelligence Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Seyun Um
- Digital Signal Processing & Artificial Intelligence Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Yoohwan Kwon
- Digital Signal Processing & Artificial Intelligence Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Byung Gwan Son
- Digital Signal Processing & Artificial Intelligence Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Myeongki Cho
- Functional Bio-integrated Electronics and Energy Management Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Mingyu Sang
- Functional Bio-integrated Electronics and Energy Management Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Jongwoon Shin
- Functional Bio-integrated Electronics and Energy Management Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Kyubeen Kim
- Functional Bio-integrated Electronics and Energy Management Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Jungmin Suh
- Functional Bio-integrated Electronics and Energy Management Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Heekyeong Choi
- Functional Bio-integrated Electronics and Energy Management Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Seokjun Hong
- Functional Bio-integrated Electronics and Energy Management Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Huanyu Cheng
- Department of Engineering Science and Mechanics, The Pennsylvania State University, University Park, PA, 16802, USA
- Hong-Goo Kang
- Digital Signal Processing & Artificial Intelligence Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Dosik Hwang
- Medical Artificial Intelligence Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Department of Electrical and Electronic Engineering, YU-Korea Institute of Science and Technology (KIST) Institute, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Ki Jun Yu
- Functional Bio-integrated Electronics and Energy Management Lab, School of Electrical and Electronic Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Department of Electrical and Electronic Engineering, YU-Korea Institute of Science and Technology (KIST) Institute, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
3
Amezquita-Garcia J, Bravo-Zanoguera M, Gonzalez-Navarro FF, Lopez-Avitia R, Reyna MA. Applying machine learning to finger movements using electromyography and visualization in OpenSim. Sensors 2022;22:3737. [PMID: 35632146; PMCID: PMC9144461; DOI: 10.3390/s22103737]
Abstract
Electromyographic signals have been used with low-degree-of-freedom prostheses and, more recently, with multifunctional prostheses. Currently, they are also being used as inputs to human–computer interfaces that control interaction through hand gestures. Although there is a gap between academic publications on the control of upper-limb prostheses developed in laboratories and their service in the natural environment, there are attempts to achieve easier control using multiple muscle signals. This work contributes to that goal, using an open-access database and open-access biomechanical simulation software to seek simplicity in the classifiers, anticipating their implementation in microcontrollers and their execution in real time. Fifteen predefined finger movements of the hand were identified using classic classifiers such as naive Bayes and linear and quadratic discriminant analysis. The idealized movements of the database were modeled with OpenSim for visualization. Combinations of two preprocessing methods, forward sequential feature selection and feature normalization, were evaluated to increase the efficiency of these classifiers. The statistical methods of cross-validation, analysis of variance (ANOVA), and Duncan's test were used to validate the results. Furthermore, the best-performing classifier was redesigned into a new feature space using a sparse-matrix algorithm to improve it and to determine which features can be eliminated without degrading the classification. The classifiers yielded promising results, the quadratic discriminant being the best: on an independent test dataset, it achieved an average recognition rate of 96.16% per individual and 78.36% for the total sample group of eight subjects. The study concludes with a visual analysis of the classified movements in OpenSim, whose usefulness as a simulation tool lies in revealing the muscular participation in each movement, which can inform the design of a multifunctional prosthesis.
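To illustrate the family of classic classifiers this entry evaluates, here is a minimal quadratic discriminant analysis (QDA) written with NumPy: fit one Gaussian per class, then assign each sample to the class with the largest log-posterior. The two-dimensional synthetic data stands in for EMG feature vectors and is invented for demonstration; it does not reflect the paper's dataset or feature set.

```python
import numpy as np

# Synthetic two-class data: 50 samples per class, 2 features each.
rng = np.random.default_rng(0)
X0 = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(50, 2))  # class 0
X1 = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(50, 2))  # class 1

def fit_gaussian(X):
    """Return the (mean, covariance) of the samples in X."""
    return X.mean(axis=0), np.cov(X, rowvar=False)

def qda_score(x, mu, cov, prior):
    """Log-posterior of x under one class's Gaussian model, up to a constant."""
    diff = x - mu
    return (-0.5 * np.log(np.linalg.det(cov))
            - 0.5 * diff @ np.linalg.solve(cov, diff)
            + np.log(prior))

# "Training": estimate per-class Gaussian parameters (equal priors assumed).
params = [fit_gaussian(X0), fit_gaussian(X1)]

def predict(x):
    """Classify x as the class with the highest QDA score."""
    return int(np.argmax([qda_score(x, mu, cov, 0.5) for mu, cov in params]))

# Points near each cluster centre should be assigned to that cluster.
pred_a = predict(np.array([0.1, -0.1]))  # near class 0's centre
pred_b = predict(np.array([1.9, 2.1]))   # near class 1's centre
```

Unlike linear discriminant analysis, QDA keeps a separate covariance matrix per class, which yields curved decision boundaries; the preprocessing steps the study evaluates (feature selection and normalization) would be applied to the feature vectors before this fitting step.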
Affiliation(s)
- Jose Amezquita-Garcia
- Facultad de Ingeniería, Universidad Autónoma de Baja California, Mexicali 21280, Mexico
- Miguel Bravo-Zanoguera
- Facultad de Ingeniería, Universidad Autónoma de Baja California, Mexicali 21280, Mexico
- Ingeniería en Mecatrónica, Universidad Politécnica de Baja California, Mexicali 21376, Mexico
- Felix F. Gonzalez-Navarro
- Instituto de Ingeniería, Universidad Autónoma de Baja California, Mexicali 21280, Mexico
- Roberto Lopez-Avitia
- Facultad de Ingeniería, Universidad Autónoma de Baja California, Mexicali 21280, Mexico
- M. A. Reyna
- Instituto de Ingeniería, Universidad Autónoma de Baja California, Mexicali 21280, Mexico