1
Yang X, Hong K, Zhang D, Wang K. Early diagnosis of Alzheimer's Disease based on multi-attention mechanism. PLoS One 2024; 19:e0310966. PMID: 39316606; PMCID: PMC11421808; DOI: 10.1371/journal.pone.0310966.
Abstract
Alzheimer's Disease is a neurodegenerative disorder, and one of its common and prominent early symptoms is language impairment. Early diagnosis of Alzheimer's Disease from speech and text information is therefore of significant importance. However, multimodal data are often complex and inconsistent, which leads to inadequate feature extraction. To address this problem, we propose a model for early diagnosis of Alzheimer's Disease based on multimodal attention (EDAMM). Specifically, we first evaluate and select three optimal feature extraction methods, Wav2Vec2.0, TF-IDF and Word2Vec, to extract acoustic and linguistic features. Next, by leveraging self-attention and cross-modal attention mechanisms, we generate fused features that enhance and capture inter-modal correlation information. Finally, we concatenate the multimodal features into a composite feature vector and employ a neural network (NN) classifier to diagnose Alzheimer's Disease. To evaluate EDAMM, we perform experiments on two public datasets, NCMMSC2021 and ADReSSo. The results show that EDAMM improves the performance of Alzheimer's Disease diagnosis over state-of-the-art baseline approaches on both datasets.
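The abstract describes the fusion architecture only at a high level. Purely as an illustration of the general pattern (per-modality self-attention, cross-modal attention, then concatenation into a small neural-network classifier), here is a minimal PyTorch sketch; the dimensions, module choices, and use of nn.MultiheadAttention are assumptions made for the example, not details taken from the paper.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Toy fusion head: self-attention within each modality, cross-modal attention
    between them, then concatenation into a small classifier (illustrative only)."""
    def __init__(self, acoustic_dim=768, text_dim=300, hidden=256, n_heads=4):
        super().__init__()
        self.proj_a = nn.Linear(acoustic_dim, hidden)   # e.g. Wav2Vec2-style frame features
        self.proj_t = nn.Linear(text_dim, hidden)       # e.g. Word2Vec/TF-IDF token features
        self.self_attn_a = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        self.self_attn_t = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(hidden * 2, hidden), nn.ReLU(), nn.Linear(hidden, 2)
        )

    def forward(self, acoustic, text):
        a = self.proj_a(acoustic)                 # (B, Ta, H)
        t = self.proj_t(text)                     # (B, Tt, H)
        a, _ = self.self_attn_a(a, a, a)          # intra-modal context
        t, _ = self.self_attn_t(t, t, t)
        # acoustic queries attend over the linguistic sequence
        fused, _ = self.cross_attn(a, t, t)       # (B, Ta, H)
        pooled = torch.cat([fused.mean(dim=1), t.mean(dim=1)], dim=-1)
        return self.classifier(pooled)            # logits: AD vs. control

if __name__ == "__main__":
    model = CrossModalFusion()
    wav_feats = torch.randn(2, 120, 768)   # dummy acoustic frames
    txt_feats = torch.randn(2, 40, 300)    # dummy word embeddings
    print(model(wav_feats, txt_feats).shape)  # torch.Size([2, 2])
```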
Affiliation(s)
- Xinli Yang: College of Information Technology, Zhejiang Shuren University, Hangzhou, Zhejiang, China
- Kefen Hong: College of Information Engineering, Huzhou University, Huzhou, Zhejiang, China
- Denghui Zhang: College of Information Technology, Zhejiang Shuren University, Hangzhou, Zhejiang, China
- Ke Wang: College of Information Technology, Zhejiang Shuren University, Hangzhou, Zhejiang, China
2
Abid M, Asif M, Khemane Z, Jawaid A, Waqar Khan A, Naveed H, Naveed T, Farah AA, Siddiq MA. Advances in artificial intelligence for diagnosing Alzheimer's disease through speech. Ann Med Surg (Lond) 2024; 86:3822-3823. PMID: 38989201; PMCID: PMC11230774; DOI: 10.1097/ms9.0000000000002200.
Affiliation(s)
- Mishal Abid: Department of Medicine, Dow University of Health Sciences
- Zoya Khemane: Department of Medicine, Dow University of Health Sciences
- Afia Jawaid: Department of Medicine, Dow University of Health Sciences
- Hufsa Naveed: Department of Medicine, Ziauddin Medical College, Karachi, Pakistan
- Tooba Naveed: Department of Medicine, Ziauddin Medical College, Karachi, Pakistan
- Asma Ahmed Farah: Department of Medicine, East Africa University, Boosaaso, Somalia
3
Meng J, Zhang Z, Tang H, Xiao Y, Liu P, Gao S, He M. Evaluation of ChatGPT in providing appropriate fracture prevention recommendations and medical science question responses: A quantitative research. Medicine (Baltimore) 2024; 103:e37458. PMID: 38489735; PMCID: PMC10939678; DOI: 10.1097/md.0000000000037458.
Abstract
Currently, there are limited studies assessing ChatGPT's ability to provide appropriate responses to medical questions. Our study aims to evaluate the adequacy of ChatGPT's responses to questions regarding osteoporotic fracture prevention and medical science. We created a list of 25 questions based on the guidelines and our clinical experience, and added 11 medical science questions from the journal Science. Three patients, 3 non-medical professionals, 3 specialist doctors and 3 scientists evaluated the accuracy and appropriateness of responses by ChatGPT-3.5 on October 2, 2023. To simulate a consultation, an inquirer (either a patient or a non-medical professional) sent their questions to a consultant (a specialist doctor or scientist) via a website. The consultant forwarded the questions to ChatGPT for answers, which were then evaluated for accuracy and appropriateness by the consultant before being sent back to the inquirer via the website for further review. The primary outcome was the rate of appropriate, inappropriate, and unreliable ChatGPT responses as evaluated separately by the inquirer and consultant groups. Compared to orthopedic clinicians, the patients' rating of the appropriateness of ChatGPT responses to questions about osteoporotic fracture prevention was slightly higher, although the difference was not statistically significant (88% vs 80%, P = .70). For medical science questions, non-medical professionals and medical scientists rated similarly. In addition, the experts' ratings of the appropriateness of ChatGPT responses to osteoporotic fracture prevention and to medical science questions were comparable. On the other hand, the patients perceived the appropriateness of ChatGPT responses to osteoporotic fracture prevention questions as slightly higher than that to medical science questions (88% vs 72.7%, P = .34). ChatGPT is capable of providing comparable and appropriate responses to medical science questions as well as to fracture-prevention-related issues. Both the inquirers seeking advice and the consultants providing advice recognized ChatGPT's expertise in these areas.
Affiliation(s)
- Jiahao Meng: Department of Orthopaedics, Xiangya Hospital, Central South University, #87 Xiangya Road, Changsha, Hunan, China
- Ziyi Zhang: Department of Neurology, The Second Xiangya Hospital, Central South University, Changsha, Hunan, China
- Hang Tang: Department of Orthopaedics, Xiangya Hospital, Central South University, #87 Xiangya Road, Changsha, Hunan, China
- Yifan Xiao: Department of Orthopaedics, Xiangya Hospital, Central South University, #87 Xiangya Road, Changsha, Hunan, China
- Pan Liu: Department of Orthopaedics, Xiangya Hospital, Central South University, #87 Xiangya Road, Changsha, Hunan, China
- Shuguang Gao: Department of Orthopaedics, Xiangya Hospital, Central South University, #87 Xiangya Road, Changsha, Hunan, China; National Clinical Research Center of Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, Hunan, China
- Miao He: Department of Neurology, The Second Xiangya Hospital, Central South University, Changsha, Hunan, China
4
Xu X, Li J, Zhu Z, Zhao L, Wang H, Song C, Chen Y, Zhao Q, Yang J, Pei Y. A Comprehensive Review on Synergy of Multi-Modal Data and AI Technologies in Medical Diagnosis. Bioengineering (Basel) 2024; 11:219. PMID: 38534493; DOI: 10.3390/bioengineering11030219.
Abstract
Disease diagnosis represents a critical and arduous endeavor within the medical field. Artificial intelligence (AI) techniques, spanning from machine learning and deep learning to large model paradigms, stand poised to significantly augment physicians in rendering more evidence-based decisions, thus presenting a pioneering solution for clinical practice. Traditionally, the amalgamation of diverse medical data modalities (e.g., image, text, speech, genetic data, physiological signals) is imperative to facilitate a comprehensive disease analysis, a topic of burgeoning interest among both researchers and clinicians in recent times. Hence, there exists a pressing need to synthesize the latest strides in multi-modal data and AI technologies in the realm of medical diagnosis. In this paper, we narrow our focus to five specific disorders (Alzheimer's disease, breast cancer, depression, heart disease, epilepsy), elucidating advanced endeavors in their diagnosis and treatment through the lens of artificial intelligence. Our survey not only delineates detailed diagnostic methodologies across varying modalities but also underscores commonly utilized public datasets, the intricacies of feature engineering, prevalent classification models, and envisaged challenges for future endeavors. In essence, our research endeavors to contribute to the advancement of diagnostic methodologies, furnishing invaluable insights for clinical decision making.
Affiliation(s)
- Xi Xu: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Jianqiang Li: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Zhichao Zhu: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Linna Zhao: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Huina Wang: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Changwei Song: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Yining Chen: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Qing Zhao: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Jijiang Yang: Tsinghua National Laboratory for Information Science and Technology, Tsinghua University, Beijing 100084, China
- Yan Pei: School of Computer Science and Engineering, The University of Aizu, Aizuwakamatsu 965-8580, Japan
5
Idrisoglu A, Dallora AL, Anderberg P, Berglund JS. Applied Machine Learning Techniques to Diagnose Voice-Affecting Conditions and Disorders: Systematic Literature Review. J Med Internet Res 2023; 25:e46105. PMID: 37467031; PMCID: PMC10398366; DOI: 10.2196/46105.
Abstract
BACKGROUND Normal voice production depends on the synchronized cooperation of multiple physiological systems, which makes the voice sensitive to changes. Any systematic, neurological, or aerodigestive distortion is prone to affect voice production through reduced cognitive, pulmonary, and muscular functionality. This sensitivity has inspired the use of voice as a biomarker for examining disorders that affect it. Technological improvements and emerging machine learning (ML) technologies have enabled the extraction of digital vocal features from the voice for automated diagnosis and monitoring systems. OBJECTIVE This study aims to provide a comprehensive view of research on voice-affecting disorders that uses ML techniques for diagnosis and monitoring through voice samples, with a specific focus on systematic conditions, nonlaryngeal aerodigestive disorders, and neurological disorders. METHODS This systematic literature review (SLR) investigated the state of the art of voice-based diagnostic and monitoring systems with ML technologies, targeting voice-affecting disorders without direct relation to the voice box, from the point of view of applied health technology. Through a comprehensive search string, studies published from 2012 to 2022 in the databases Scopus, PubMed, and Web of Science were scanned and collected for assessment. To minimize bias, retrieval of the relevant references in other studies in the field was ensured, and 2 authors assessed the collected studies. Low-quality studies were removed through a quality assessment, and relevant data were extracted through summary tables for analysis. The articles were checked for similarities between author groups to prevent cumulative redundancy bias during the screening process, with only 1 article included from the same author group. RESULTS In the analysis of the 145 included studies, support vector machines were the most utilized ML technique (51/145, 35.2%), and the most studied disease was Parkinson disease (PD; reported in 87/145, 60%, of studies). After 2017, 16 additional voice-affecting disorders were examined, in contrast to the 3 investigated previously. Furthermore, an upsurge in the use of artificial neural network-based architectures was observed after 2017. Almost half of the included studies were published in the last 2 years (2021 and 2022). A broad interest from many countries was observed. Notably, nearly one-half (n=75) of the studies relied on 10 distinct data sets, and 11/145 (7.6%) used demographic data as an input for ML models. CONCLUSIONS This SLR revealed considerable interest across multiple countries in using ML techniques for diagnosing and monitoring voice-affecting disorders, with PD being the most studied disorder. However, the review identified several gaps, including limited and unbalanced data set usage and a focus on diagnostic tests rather than disorder-specific monitoring. Despite being limited to peer-reviewed publications written in English, the SLR provides valuable insights into the current state of research on ML-based diagnosis and monitoring of voice-affecting disorders and highlights areas to address in future research.
Affiliation(s)
- Alper Idrisoglu: Department of Health, Blekinge Institute of Technology, Karlskrona, Sweden
- Ana Luiza Dallora: Department of Health, Blekinge Institute of Technology, Karlskrona, Sweden
- Peter Anderberg: Department of Health, Blekinge Institute of Technology, Karlskrona, Sweden; School of Health Sciences, University of Skövde, Skövde, Sweden
6
Tang L, Zhang Z, Feng F, Yang LZ, Li H. Explainable Alzheimer's Disease Detection Using Linguistic Features from Automatic Speech Recognition. Dement Geriatr Cogn Disord 2023; 52:240-248. PMID: 37433284; DOI: 10.1159/000531818.
Abstract
INTRODUCTION Alzheimer's disease (AD) is the most prevalent type of dementia and can cause abnormal cognitive function and progressive loss of essential life skills. Early screening is thus necessary for the prevention of and intervention in AD. Speech dysfunction is an early-onset symptom in AD patients. Recent studies have demonstrated the promise of automated assessment using acoustic or linguistic features extracted from speech. However, most previous studies have relied on manual transcription to extract linguistic features, which weakens the efficiency of automated assessment. The present study thus investigates the effectiveness of automatic speech recognition (ASR) in building an end-to-end automated speech analysis model for AD detection. METHODS We implemented three publicly available ASR engines and compared the classification performance using the ADReSS-IS2020 dataset. The SHapley Additive exPlanations (SHAP) algorithm was then used to identify the critical features that contributed most to model performance. RESULTS The three automatic transcription tools produced transcripts with mean word error rates of 32%, 43%, and 40%, respectively. These automated transcripts achieved similar or even better results than manual transcripts for detecting dementia, with classification accuracies of 89.58%, 83.33%, and 81.25%, respectively. CONCLUSION Our best model, using ensemble learning, is comparable to state-of-the-art manual-transcription-based methods, suggesting the possibility of an end-to-end medical assistance system for AD detection built on ASR engines. Moreover, the critical linguistic features might provide insight for further studies on the mechanism of AD.
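The pipeline sketched in this abstract (ASR transcript, linguistic features, classifier, SHAP attribution) can be illustrated with a small, hypothetical example. The feature set, classifier, and dummy transcripts below are placeholders, not the authors' configuration; only the overall shape of the workflow follows the abstract.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

FEATURE_NAMES = ["n_words", "type_token_ratio", "mean_word_len", "filler_rate"]

def linguistic_features(transcript: str) -> list:
    """Hypothetical surface features from an ASR transcript (not the paper's feature set)."""
    tokens = transcript.lower().split()
    n = max(len(tokens), 1)
    fillers = sum(tok in {"um", "uh", "er"} for tok in tokens)
    return [len(tokens),
            len(set(tokens)) / n,                                        # type-token ratio
            float(np.mean([len(tok) for tok in tokens])) if tokens else 0.0,
            fillers / n]

# Dummy transcripts and labels stand in for ADReSS-IS2020-style picture descriptions.
transcripts = ["the boy is on the stool reaching for the cookie jar",
               "um the the uh water is is running over um"] * 20
labels = [0, 1] * 20

X = np.array([linguistic_features(t) for t in transcripts])
y = np.array(labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))

# SHAP attribution, mirroring the explainability step described in the abstract.
explainer = shap.TreeExplainer(clf)
sv = explainer.shap_values(X_te)
sv = sv[1] if isinstance(sv, list) else sv               # some models return per-class lists
for name, val in sorted(zip(FEATURE_NAMES, np.abs(sv).mean(axis=0)), key=lambda p: -p[1]):
    print(f"{name}: {val:.3f}")
```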
Affiliation(s)
- Lijuan Tang: Institutes of Physical Science and Information Technology, Anhui University, Hefei, China; Anhui Province Key Laboratory of Medical Physics and Technology, Institute of Health and Medical Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China
- Zhenglin Zhang: Anhui Province Key Laboratory of Medical Physics and Technology, Institute of Health and Medical Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China; Hefei Cancer Hospital, Chinese Academy of Sciences, Hefei, China; University of Science and Technology of China, Hefei, China
- Feifan Feng: Anhui Province Key Laboratory of Medical Physics and Technology, Institute of Health and Medical Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China; Department of Biomedical Engineering, Anhui Medical University, Hefei, China
- Li-Zhuang Yang: Anhui Province Key Laboratory of Medical Physics and Technology, Institute of Health and Medical Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China; Hefei Cancer Hospital, Chinese Academy of Sciences, Hefei, China; University of Science and Technology of China, Hefei, China
- Hai Li: Anhui Province Key Laboratory of Medical Physics and Technology, Institute of Health and Medical Technology, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China; Hefei Cancer Hospital, Chinese Academy of Sciences, Hefei, China; University of Science and Technology of China, Hefei, China
7
Vaaras E, Airaksinen M, Vanhatalo S, Rasanen O. Evaluation of self-supervised pre-training for automatic infant movement classification using wearable movement sensors. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-6. PMID: 38083169; DOI: 10.1109/embc40787.2023.10340118.
Abstract
The recently developed infant wearable MAIJU provides a means to automatically evaluate infants' motor performance in an objective and scalable manner in out-of-hospital settings. This information could be used for developmental research and to support clinical decision-making, such as detecting developmental problems and guiding their therapeutic interventions. MAIJU-based analyses rely fully on the classification of the infant's posture and movement; it is hence essential to study ways to increase the accuracy of such classifications, aiming to increase the reliability and robustness of the automated analysis. Here, we investigated how self-supervised pre-training improves the performance of the classifiers used for analyzing MAIJU recordings, and we studied whether the performance of the classifier models is affected by context-selective quality screening of pre-training data to exclude periods of little infant movement or periods with missing sensors. Our experiments show that (i) pre-training the classifier with unlabeled data leads to a robust accuracy increase in subsequent classification models, and (ii) selecting context-relevant pre-training data leads to substantial further improvements in classifier performance. Clinical relevance: This study showcases that self-supervised learning can be used to increase the accuracy of out-of-hospital evaluation of infants' motor abilities via smart wearables.
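As a rough sketch of what self-supervised pre-training on unlabeled sensor data can look like (not the MAIJU pipeline or the authors' architecture), the following PyTorch example pre-trains a small encoder by masked reconstruction of multichannel movement windows and then attaches a posture/movement classifier; all shapes and hyperparameters are invented for illustration.

```python
import torch
import torch.nn as nn

class SensorEncoder(nn.Module):
    """Small GRU encoder over multichannel movement-sensor windows (illustrative)."""
    def __init__(self, n_channels=12, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, batch_first=True)
    def forward(self, x):                        # x: (B, T, C)
        out, _ = self.rnn(x)
        return out                               # (B, T, hidden)

def pretrain_masked_reconstruction(encoder, data, epochs=3, mask_prob=0.15):
    """Self-supervised step: zero out random time steps and reconstruct them."""
    head = nn.Linear(64, data.shape[-1])         # 64 matches the encoder hidden size
    opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
    for _ in range(epochs):
        mask = torch.rand(data.shape[:2]) < mask_prob    # (B, T) boolean mask
        corrupted = data.clone()
        corrupted[mask] = 0.0
        recon = head(encoder(corrupted))
        loss = ((recon - data)[mask] ** 2).mean()        # reconstruct masked frames only
        opt.zero_grad(); loss.backward(); opt.step()
    return encoder

if __name__ == "__main__":
    unlabeled = torch.randn(32, 100, 12)         # dummy unlabeled sensor windows
    encoder = pretrain_masked_reconstruction(SensorEncoder(), unlabeled)
    # Fine-tuning idea: pool encoder outputs and attach a posture/movement classifier.
    classifier = nn.Linear(64, 5)
    logits = classifier(encoder(torch.randn(4, 100, 12)).mean(dim=1))
    print(logits.shape)                          # torch.Size([4, 5])
```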
8
Contreras RC, Viana MS, Fonseca ES, Dos Santos FL, Zanin RB, Guido RC. An Experimental Analysis on Multicepstral Projection Representation Strategies for Dysphonia Detection. Sensors (Basel) 2023; 23:5196. PMID: 37299922; DOI: 10.3390/s23115196.
Abstract
Biometrics-based authentication has become the most well-established form of user recognition in systems that demand a certain level of security, for example, in commonplace activities such as accessing the workplace or one's own bank account. Among all biometrics, voice receives special attention due to factors such as ease of collection, the low cost of reading devices, and the large body of literature and software packages available for its use. However, this biometric's ability to represent the individual can be impaired by the phenomenon known as dysphonia, a change in the sound signal caused by some disease acting on the vocal apparatus. As a consequence, a user with the flu, for example, may not be properly authenticated by the recognition system. It is therefore important to develop automatic techniques for detecting voice dysphonia. In this work, we propose a new framework that represents the voice signal through multiple projections of cepstral coefficients to detect dysphonic alterations in the voice with machine learning techniques. Most of the best-known cepstral coefficient extraction techniques in the literature are mapped and analyzed separately and together with measures related to the fundamental frequency of the voice signal, and their representation capacity is evaluated on three classifiers. Finally, experiments on a subset of the Saarbruecken Voice Database demonstrate the effectiveness of the proposed material in detecting the presence of dysphonia in the voice.
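A minimal sketch of the general idea, cepstral features plus fundamental-frequency statistics feeding a classifier, is shown below using librosa MFCCs and a scikit-learn SVM. This is not the paper's multicepstral projection framework; the synthetic recordings and the feature summary are assumptions made for the example.

```python
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def cepstral_vector(y: np.ndarray, sr: int = 16000) -> np.ndarray:
    """Summarize one recording with MFCC statistics plus fundamental-frequency stats."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)       # (13, frames)
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)            # per-frame F0 estimate
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           [np.nanmean(f0), np.nanstd(f0)]])

# Synthetic stand-ins for sustained-vowel recordings; real work would load audio,
# e.g. from the Saarbruecken Voice Database subset used in the paper.
sr = 16000
rng = np.random.default_rng(0)
recordings, labels = [], []
for label in (0, 1):                        # 0 = healthy-like, 1 = "dysphonic"-like
    for _ in range(4):
        t = np.arange(sr) / sr
        jitter = 0.02 if label else 0.002   # crude proxy for vocal instability
        f = 150 * (1 + jitter * rng.standard_normal(sr).cumsum() / sr)
        recordings.append(np.sin(2 * np.pi * f * t).astype(np.float32))
        labels.append(label)

X = np.array([cepstral_vector(y, sr) for y in recordings])
y = np.array(labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=4))     # toy data; real evaluation needs proper CV
```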
Affiliation(s)
- Rodrigo Colnago Contreras: Department of Computer Science and Statistics, Institute of Biosciences, Letters and Exact Sciences, São Paulo State University, São José do Rio Preto 15054-000, SP, Brazil
- Rodrigo Bruno Zanin: Faculty of Architecture and Engineering, Mato Grosso State University, Cáceres 78217-900, MT, Brazil
- Rodrigo Capobianco Guido: Department of Computer Science and Statistics, Institute of Biosciences, Letters and Exact Sciences, São Paulo State University, São José do Rio Preto 15054-000, SP, Brazil
9
Liu J, Fu F, Li L, Yu J, Zhong D, Zhu S, Zhou Y, Liu B, Li J. Efficient Pause Extraction and Encode Strategy for Alzheimer's Disease Detection Using Only Acoustic Features from Spontaneous Speech. Brain Sci 2023; 13:477. PMID: 36979287; PMCID: PMC10046767; DOI: 10.3390/brainsci13030477.
Abstract
Clinical studies have shown that speech pauses can reflect differences in cognitive function between Alzheimer's Disease (AD) and non-AD patients, but the value of pause information for AD detection has not been fully explored. Herein, we propose a speech pause feature extraction and encoding strategy for AD detection based only on acoustic signals. First, a voice activity detection (VAD) method was constructed to detect pause/non-pause segments and encode them as binary pause sequences that are easier to compute with. Then, an ensemble machine-learning-based approach was proposed for classifying AD from participants' spontaneous speech, based on the VAD pause feature sequence and common acoustic feature sets (ComParE and eGeMAPS). The proposed pause feature sequence was verified with five machine-learning models. The validation data included two public challenge datasets (ADReSS and ADReSSo, English speech) and a local dataset (10 audio recordings containing five patients and five controls, Chinese speech). Results showed that the VAD pause feature was more effective than the common feature sets (ComParE: 6373 features; eGeMAPS: 88 features) for AD classification, and that the ensemble method improved accuracy by more than 5% compared to several baseline methods (8% on the ADReSS dataset; 5.9% on the ADReSSo dataset). Moreover, the pause-sequence-based AD detection method achieved 80% accuracy on the local dataset. Our study further demonstrates the potential of pause information in speech-based AD detection and contributes a more accessible and general pause feature extraction and encoding method for AD detection.
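The pause encoding described here can be approximated with a simple energy-based VAD: mark low-energy frames as pauses, emit a binary sequence, and derive summary statistics for a classifier. The sketch below uses librosa's RMS energy with a quantile threshold, which is an assumption for illustration and not necessarily the authors' detector; the file name in the commented usage is hypothetical.

```python
import numpy as np
import librosa

def binary_pause_sequence(path: str, frame_ms: float = 30.0,
                          energy_quantile: float = 0.3) -> np.ndarray:
    """Encode a recording as a 0/1 sequence: 1 = pause frame, 0 = speech frame.
    Uses a simple RMS-energy threshold as the VAD; the paper's detector may differ."""
    y, sr = librosa.load(path, sr=16000)
    frame_len = int(sr * frame_ms / 1000)
    rms = librosa.feature.rms(y=y, frame_length=frame_len,
                              hop_length=frame_len)[0]          # per-frame energy
    threshold = np.quantile(rms, energy_quantile)
    return (rms < threshold).astype(np.int8)                    # 1 where "silent"

def pause_statistics(seq: np.ndarray, frame_ms: float = 30.0) -> dict:
    """Summary features a classifier could use: pause ratio, count, mean duration."""
    padded = np.concatenate([[0], seq, [0]])
    edges = np.diff(padded)
    starts, ends = np.where(edges == 1)[0], np.where(edges == -1)[0]
    durations_ms = (ends - starts) * frame_ms
    return {
        "pause_ratio": float(seq.mean()),
        "n_pauses": int(len(durations_ms)),
        "mean_pause_ms": float(durations_ms.mean()) if len(durations_ms) else 0.0,
    }

# seq = binary_pause_sequence("interview_001.wav")   # hypothetical file name
# print(pause_statistics(seq))
```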
Affiliation(s)
- Jiamin Liu: Jiangsu Province Engineering Research Center of Smart Wearable and Rehabilitation Devices, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, China
- Fan Fu: Jiangsu Province Engineering Research Center of Smart Wearable and Rehabilitation Devices, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, China
- Liang Li: Jiangsu Province Engineering Research Center of Smart Wearable and Rehabilitation Devices, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, China
- Junxiao Yu: Jiangsu Province Engineering Research Center of Smart Wearable and Rehabilitation Devices, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, China
- Dacheng Zhong: Jiangsu Province Engineering Research Center of Smart Wearable and Rehabilitation Devices, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, China
- Songsheng Zhu: Jiangsu Province Engineering Research Center of Smart Wearable and Rehabilitation Devices, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, China
- Yuxuan Zhou: Jiangsu Province Engineering Research Center of Smart Wearable and Rehabilitation Devices, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, China
- Bin Liu: Jiangsu Province Engineering Research Center of Smart Wearable and Rehabilitation Devices, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, China
- Jianqing Li: Jiangsu Province Engineering Research Center of Smart Wearable and Rehabilitation Devices, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, China; The State Key Laboratory of Bioelectronics, School of Instrument Science and Engineering, Southeast University, Nanjing 211166, China