1
Netto AFA, Zanotelli T, Felix LB. Multi-channel and multi-harmonic analysis of Auditory Steady-State Response detection. Comput Methods Biomech Biomed Engin 2024;27:276-284. [PMID: 36803329] [DOI: 10.1080/10255842.2023.2181041] [Received: 07/28/2022] [Revised: 01/17/2023] [Accepted: 02/09/2023] [Indexed: 02/22/2023]
Abstract
The Auditory Steady-State Response (ASSR) is a type of auditory evoked potential (AEP) generated in the auditory system that can be detected automatically by means of objective response detectors (ORDs). ASSRs are usually recorded on the scalp using electroencephalography (EEG). ORDs are univariate techniques, i.e., they use only one data channel. However, techniques involving more than one channel - multi-channel objective response detectors (MORDs) - have shown higher detection rates (DRs) than ORD techniques. When the ASSR is evoked by amplitude-modulated stimuli, the responses can be detected by analyzing the modulation frequencies and their harmonics. Despite this, ORD techniques are traditionally applied only to the first harmonic, an approach known as a one-sample test. The q-sample tests, however, consider harmonics beyond the first. Thus, this work proposes and evaluates q-sample tests that combine multiple EEG channels and multiple harmonics of the stimulation frequencies, and compares them with traditional one-sample tests. The database consists of EEG recordings from 24 volunteers with normal auditory thresholds, collected under a binaural stimulation protocol using amplitude-modulated (AM) tones with modulating frequencies near 80 Hz. The best q-sample MORD result showed an increase in DR of 45.25% over the best one-sample ORD test. Thus, the use of multiple channels and multiple harmonics is recommended whenever they are available.
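The contrast between a one-sample test at a single frequency and a q-sample test spanning several harmonics and channels can be sketched with the magnitude-squared coherence (MSC), a common ORD statistic. The function names, the simple averaging across channels and harmonics, and the simulation parameters below are illustrative assumptions for a minimal sketch, not the detectors evaluated in the paper.

```python
import numpy as np

def msc(epochs, fs, freq):
    """One-sample ORD: magnitude-squared coherence of one channel at one frequency.

    epochs: (M, N) array of M EEG sweeps of N samples; returns a statistic in
    [0, 1] that approaches 1 when the phase at `freq` is consistent across sweeps.
    """
    M, N = epochs.shape
    k = int(round(freq * N / fs))           # DFT bin closest to the target frequency
    Y = np.fft.rfft(epochs, axis=1)[:, k]   # complex spectrum at that bin, per sweep
    return np.abs(Y.sum()) ** 2 / (M * np.sum(np.abs(Y) ** 2))

def q_sample_multichannel(channels, fs, f0, n_harmonics=2):
    """Illustrative q-sample MORD: average the MSC over channels and harmonics."""
    stats = [msc(ch, fs, f0 * (h + 1))
             for ch in channels for h in range(n_harmonics)]
    return float(np.mean(stats))

# Toy check: a phase-locked 82 Hz response with energy at its second harmonic.
rng = np.random.default_rng(0)
fs, n = 1000, 1000
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * 82 * t) + 0.5 * np.sin(2 * np.pi * 164 * t)
channel = signal + 0.5 * rng.standard_normal((20, n))   # 20 noisy sweeps
detected = q_sample_multichannel([channel], fs, 82.0)   # high: response present
absent = msc(channel, fs, 90.0)                         # low: no response at 90 Hz
```

Under the null hypothesis (no response), the MSC fluctuates around 1/M, which is what makes the statistic usable as an objective detector once a critical value is fixed.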
Affiliation(s)
- Tiago Zanotelli
- Federal Institute of Education Science and Technology of Espírito Santo-São Mateus, São Mateus, ES, Brazil
- Leonardo Bonato Felix
- Department of Electrical Engineering, Federal University of Viçosa, Viçosa, MG, Brazil
2
Wimalarathna H, Ankmnal-Veeranna S, Allan C, Agrawal SK, Samarabandu J, Ladak HM, Allen P. Machine learning approaches used to analyze auditory evoked responses from the human auditory brainstem: A systematic review. Comput Methods Programs Biomed 2022;226:107118. [PMID: 36122495] [DOI: 10.1016/j.cmpb.2022.107118] [Received: 04/08/2022] [Revised: 08/01/2022] [Accepted: 09/06/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND The application of machine learning algorithms for assessing the auditory brainstem response has gained interest in recent years, with a considerable number of publications in the literature. In this systematic review, we explore how machine learning has been used to develop algorithms that assess auditory brainstem responses. A clear and comprehensive overview is provided to allow clinicians and researchers to explore the domain and the potential for translation to clinical care. METHODS The systematic review was performed according to PRISMA guidelines. The PubMed, IEEE Xplore, and Scopus databases were searched for human studies that used machine learning to assess auditory brainstem responses. The search covered January 1, 1990, to April 3, 2021. The Covidence systematic review platform (www.covidence.org) was used throughout the process. RESULTS A total of 5812 studies were found through the database search, and 451 duplicates were removed. Title and abstract screening further reduced the article count to 89, and in the subsequent full-text screening, 34 articles met the full inclusion criteria. CONCLUSION Three categories of applications were found: neurologic diagnosis, hearing threshold estimation, and other (applications unrelated to neurologic diagnosis or hearing threshold estimation). Neural networks and support vector machines were the most commonly used machine learning algorithms in all three categories. Only one study had conducted a clinical trial to evaluate the algorithm after development. Challenges remain in the amount of data required to train machine learning models. Suggestions for future research avenues are given, along with recommended reporting methods for researchers.
Affiliation(s)
- Hasitha Wimalarathna
- Department of Electrical & Computer Engineering, Western University, London, Ontario, Canada; National Centre for Audiology, Western University, London, Ontario, Canada.
- Sangamanatha Ankmnal-Veeranna
- National Centre for Audiology, Western University, London, Ontario, Canada; College of Nursing and Health Professions, School of Speech and Hearing Sciences, The University of Southern Mississippi, J.B. George Building, Hattiesburg, MS, USA
- Chris Allan
- National Centre for Audiology, Western University, London, Ontario, Canada; School of Communication Sciences & Disorders, Western University, London, Ontario, Canada
- Sumit K Agrawal
- Department of Electrical & Computer Engineering, Western University, London, Ontario, Canada; National Centre for Audiology, Western University, London, Ontario, Canada; School of Biomedical Engineering, Western University, London, Ontario, Canada; Department of Medical Biophysics, Western University, London, Ontario, Canada; Department of Otolaryngology - Head and Neck Surgery, Western University, London, Ontario, Canada
- Jagath Samarabandu
- Department of Electrical & Computer Engineering, Western University, London, Ontario, Canada
- Hanif M Ladak
- Department of Electrical & Computer Engineering, Western University, London, Ontario, Canada; National Centre for Audiology, Western University, London, Ontario, Canada; School of Biomedical Engineering, Western University, London, Ontario, Canada; Department of Medical Biophysics, Western University, London, Ontario, Canada; Department of Otolaryngology - Head and Neck Surgery, Western University, London, Ontario, Canada
- Prudence Allen
- National Centre for Audiology, Western University, London, Ontario, Canada; School of Communication Sciences & Disorders, Western University, London, Ontario, Canada
8
Biometrics Using Electroencephalograms Stimulated by Personal Ultrasound and Multidimensional Nonlinear Features. Electronics 2019. [DOI: 10.3390/electronics9010024] [Indexed: 11/16/2022]
Abstract
Biometrics such as fingerprints and iris scans have been used for authentication. However, conventional biometrics are vulnerable to identity theft, especially in user-management systems. Brain waves have attracted attention as a new biometric without this vulnerability. In this paper, brain waves (electroencephalograms, EEGs) were measured from ten experimental subjects. Individual features were extracted from the log power spectra of the EEGs using principal component analysis, and verification was performed using a support vector machine. For the proposed authentication method, the equal error rate (EER) for a single electrode was about 22-32%, while that for multiple electrodes was 4.4% using the majority decision rule. Furthermore, nonlinear features based on chaos analysis were introduced for feature extraction and then extended to multidimensional ones. By fusing the results of all electrodes when using the proposed multidimensional nonlinear features together with the spectral feature, an EER of 0% was achieved. These results confirm that individuals can be authenticated using brain waves induced while they are exposed to ultrasound.
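The equal error rate reported above is the operating point at which the false-acceptance rate equals the false-rejection rate. A minimal sketch of its computation from verification scores is below; the function name and the toy score arrays are illustrative assumptions, not material from the paper.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER: the error rate at the threshold where false acceptances equal false rejections.

    genuine: similarity scores for same-person attempts (should be high).
    impostor: similarity scores for different-person attempts (should be low).
    """
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false-accept rate
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false-reject rate
    i = np.argmin(np.abs(far - frr))       # threshold where the two rates cross
    return (far[i] + frr[i]) / 2

# Perfectly separated score distributions give an EER of 0; overlap pushes it up.
eer_zero = equal_error_rate(np.array([0.80, 0.90, 0.95]),
                            np.array([0.10, 0.20, 0.30]))
eer_overlap = equal_error_rate(np.array([0.40, 0.60, 0.90]),
                               np.array([0.10, 0.50, 0.70]))
```

Sweeping the observed scores as candidate thresholds, rather than a fixed grid, guarantees the crossing point is found wherever the two empirical error curves actually change value.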