1. Puertollano M, Ribas-Prats T, Gorina-Careta N, Ijjou-Kadiri S, Arenillas-Alcón S, Mondéjar-Segovia A, Gómez-Roig MD, Escera C. Longitudinal trajectories of the neural encoding mechanisms of speech-sound features during the first year of life. Brain Lang 2024;258:105474. PMID: 39326253; DOI: 10.1016/j.bandl.2024.105474.
Abstract
Infants quickly recognize the sounds of their native language by perceiving the spectrotemporal acoustic features of speech, yet the underlying neural machinery remains unclear. We used an auditory evoked potential termed the frequency-following response (FFR) to unravel the maturation of neural encoding for two speech-sound characteristics: voice pitch and temporal fine structure. Thirty-seven healthy term neonates were tested at birth and retested at the ages of six and twelve months. Results revealed a reduction in the neural phase-locking onset to the stimulus envelope from birth to six months, stabilizing by twelve months. While neural encoding of voice pitch remained consistent across ages, temporal fine structure encoding matured rapidly from birth to six months, with no further improvement from six to twelve months. These results highlight the critical importance of the first six months of life in the maturation of the neural encoding mechanisms crucial for phoneme discrimination during early language acquisition.
Affiliation(s)
- Marta Puertollano, Teresa Ribas-Prats, Natàlia Gorina-Careta, Siham Ijjou-Kadiri, Sonia Arenillas-Alcón, Alejandro Mondéjar-Segovia, Carles Escera: Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain; Institute of Neurosciences, University of Barcelona, Catalonia, Spain; Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain
- María Dolores Gómez-Roig: Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain; BCNatal - Barcelona Center for Maternal Fetal and Neonatal Medicine (Hospital Sant Joan de Déu and Hospital Clínic), University of Barcelona, Catalonia, Spain
2. Akbulut AA, Karaman Demirel A, Çiprut A. Music Perception and Music-Related Quality of Life in Adult Cochlear Implant Users: Exploring the Need for Music Rehabilitation. Ear Hear 2024:00003446-990000000-00342. PMID: 39256903; DOI: 10.1097/aud.0000000000001580.
Abstract
OBJECTIVES Cochlear implant (CI) users face difficulties in accurately perceiving basic musical elements such as pitch, melody, and timbre. Music significantly affects the quality of life (QoL) of CI users, and because music perception is shaped by individual and cultural factors, psychophysical measures alone cannot capture the subjective music enjoyment of CI users. Understanding the music perception, enjoyment, and habits of CI users is therefore crucial for approaches that aim to improve music-related QoL (MuRQoL). This study investigated music perception skills, experiences, and participation in music activities in a large group of adult CI users, and examined the importance of these factors and their impact on QoL. DESIGN The study included 214 CI recipients with diverse auditory experiences, aged between 18 and 65 years, who had been unilateral, bimodal, or bilateral users for at least 1 year, and 193 normal-hearing (NH) controls. All participants completed the information forms and the MuRQoL questionnaire. To assess the impact of music on QoL and identify personalized rehabilitation needs, the scores for each question in both parts of the questionnaire were intersected on a matrix. Data are presented in detail for the CI group and compared between the CI and NH groups. RESULTS A statistically significant difference was found between the matched CI and NH groups, in favor of the NH group, in both music perception and music engagement. Participants who had received music education at any point in their lives had significantly higher MuRQoL questionnaire scores. No significant relationship was found between MuRQoL questionnaire scores and the duration of auditory rehabilitation, pre-CI hearing aid usage, or music listening modality. Unilateral CI users had significantly lower scores on the music perception and music engagement subsections than bimodal and bilateral CI users. Music also had a strong negative impact on QoL in 67 of the 214 CI users. CONCLUSIONS Although CI users scored significantly lower than NH individuals on the first part of the questionnaire, which asked about musical skills, enjoyment, and participation in musical activities, the findings suggest that CI users value music and music enjoyment just as much. The study reveals, through self-report, the influence of factors such as education level, age, music education, type of hearing loss, and auditory rehabilitation on music perception, music enjoyment, and participation in music activities. Because music has a strong negative impact on QoL for many CI users, personalized music interventions and the inclusion of self-report questionnaires and music perception tests in clinical evaluations are needed.
Affiliation(s)
- Ahmet Alperen Akbulut: Department of Audiology, Hamidiye Faculty of Health Sciences, University of Health Sciences, Istanbul, Türkiye; Department of Otorhinolaryngology, Audiology and Speech Disorders PhD Program, Institute of Health Sciences, Marmara University, Istanbul, Türkiye
- Ayşenur Karaman Demirel: Department of Otorhinolaryngology, Audiology and Speech Disorders PhD Program, Institute of Health Sciences, Marmara University, Istanbul, Türkiye; Vocational School of Health Services, Istanbul Okan University, Istanbul, Türkiye
- Ayça Çiprut: Department of Audiology, Faculty of Medicine, Marmara University, Istanbul, Türkiye
3. Jahn KN, Wiegand-Shahani BM, Moturi V, Kashiwagura ST, Doak KR. Cochlear-implant simulated spectral degradation attenuates emotional responses to environmental sounds. Int J Audiol 2024:1-7. PMID: 39146030; DOI: 10.1080/14992027.2024.2385552.
Abstract
OBJECTIVE Cochlear implants (CI) provide users with a spectrally degraded acoustic signal that could impact their auditory emotional experiences. This study evaluated the effects of CI-simulated spectral degradation on emotional valence and arousal elicited by environmental sounds. DESIGN Thirty emotionally evocative sounds were filtered through a noise-band vocoder. Participants rated the perceived valence and arousal elicited by each of the full-spectrum and vocoded stimuli. These ratings were compared across acoustic conditions (full-spectrum, vocoded) and as a function of stimulus type (unpleasant, neutral, pleasant). STUDY SAMPLE Twenty-five young adults (age 19 to 34 years) with normal hearing. RESULTS Emotional responses were less extreme for spectrally degraded (i.e., vocoded) sounds than for full-spectrum sounds. Specifically, spectrally degraded stimuli were perceived as more negative and less arousing than full-spectrum stimuli. CONCLUSION By meticulously replicating CI spectral degradation while controlling for variables that are confounded within CI users, these findings indicate that CI spectral degradation can compress the range of sound-induced emotion independent of hearing loss and other idiosyncratic device- or person-level variables. Future work will characterize emotional reactions to sound in CI users via objective, psychoacoustic, and subjective measures.
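The noise-band vocoding used here to simulate CI hearing can be sketched as follows: the signal is split into a small number of frequency bands, each band's slowly varying amplitude envelope is extracted, and the envelopes modulate band-limited noise carriers, discarding the temporal fine structure. A minimal illustration (not the authors' exact processing chain; the channel count, band edges, and envelope smoothing below are arbitrary assumptions):

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=8000.0, seed=0):
    """Toy noise-band vocoder: FFT band splitting, rectify-and-smooth
    envelope extraction, and envelope-modulated noise carriers."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec = np.fft.rfft(signal)
    noise_spec = np.fft.rfft(np.random.default_rng(seed).standard_normal(n))
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    win = max(1, int(0.01 * fs))                      # ~10 ms envelope smoother
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spec * mask, n)
        env = np.convolve(np.abs(band), np.ones(win) / win, mode="same")
        out += env * np.fft.irfft(noise_spec * mask, n)  # noise carries the envelope
    return out
```

The output of such a chain preserves the speech envelope in each band while degrading pitch and fine-structure cues, which is why noise vocoding is a standard CI simulation.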
Affiliation(s)
- Kelly N Jahn, Braden M Wiegand-Shahani, Sean Takamoto Kashiwagura, Karlee R Doak: Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, TX, USA; Callier Center for Communication Disorders, The University of Texas at Dallas, Dallas, TX, USA
- Vaishnavi Moturi: Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, TX, USA
4. McFarlane KA, Sanchez JT. Effects of Temporal Processing on Speech-in-Noise Perception in Middle-Aged Adults. Biology 2024;13:371. PMID: 38927251; PMCID: PMC11200514; DOI: 10.3390/biology13060371.
Abstract
Auditory temporal processing is a vital component of auditory stream segregation, the process by which complex sounds are separated and organized into perceptually meaningful objects. Temporal processing can degrade before hearing loss appears and is a suggested contributor to speech-in-noise difficulties in normal-hearing listeners. The current study tested this hypothesis in middle-aged adults, an under-investigated cohort despite being the age group in which speech-in-noise difficulties are first reported. In 76 participants, three mechanisms of temporal processing were measured: peripheral auditory nerve function using electrocochleography, subcortical encoding of periodic speech cues (i.e., the fundamental frequency, F0) using the frequency-following response, and binaural sensitivity to temporal fine structure (TFS) using a dichotic frequency modulation detection task. Two measures of speech-in-noise perception were administered to explore how the contributions of temporal processing may be mediated by the different sensory demands of each speech perception task. The results support the hypothesis that temporal coding deficits contribute to speech-in-noise difficulties in middle-aged listeners: poorer speech-in-noise perception was associated with weaker subcortical F0 encoding and with weaker binaural TFS sensitivity, but in different contexts, highlighting that diverse aspects of temporal processing are engaged differentially depending on the characteristics of the speech-in-noise task.
Affiliation(s)
- Kailyn A. McFarlane: Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL 60208, USA
- Jason Tait Sanchez: Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL 60208, USA; Knowles Hearing Center, Northwestern University, Evanston, IL 60208, USA; Department of Neurobiology, Northwestern University, Evanston, IL 60208, USA
5. Ribas-Prats T, Arenillas-Alcón S, Ferrero Martínez SI, Gómez-Roig MD, Escera C. The frequency-following response in late preterm neonates: a pilot study. Front Psychol 2024;15:1341171. PMID: 38784610; PMCID: PMC11112609; DOI: 10.3389/fpsyg.2024.1341171.
Abstract
Introduction: Infants born very preterm are at high risk of language delays, but less is known about the consequences of late prematurity. The aim of the present study was therefore to characterize the neural encoding of speech sounds in late preterm neonates compared with those born at term. Methods: The speech-evoked frequency-following response (FFR) was recorded to a consonant-vowel stimulus /da/ in 36 neonates in three groups: 12 late preterm neonates (mean gestational age (GA) 36.05 weeks), 12 early term neonates (mean GA 38.3 weeks), and 12 late term neonates (mean GA 41.01 weeks). Results: The FFR recordings revealed a delayed neural response and weaker stimulus F0 encoding in premature neonates compared with neonates born at term. No differences in response onset time or stimulus F0 encoding were observed between the two groups of neonates born at term, and no differences between the three groups were observed in the neural encoding of the stimulus temporal fine structure. Discussion: These results highlight alterations in the neural encoding of speech sounds related to prematurity, which were present for the stimulus F0 but not for its temporal fine structure.
Affiliation(s)
- Teresa Ribas-Prats, Sonia Arenillas-Alcón, Carles Escera: Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Barcelona, Spain; Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain
- Silvia Irene Ferrero Martínez, Maria Dolores Gómez-Roig: Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain; BCNatal - Barcelona Center for Maternal Fetal and Neonatal Medicine (Hospital Sant Joan de Déu and Hospital Clínic), University of Barcelona, Barcelona, Spain
6. Ribas-Prats T, Cordero G, Lip-Sosa DL, Arenillas-Alcón S, Costa-Faidella J, Gómez-Roig MD, Escera C. Developmental Trajectory of the Frequency-Following Response During the First 6 Months of Life. J Speech Lang Hear Res 2023;66:4785-4800. PMID: 37944057; DOI: 10.1044/2023_jslhr-23-00104.
Abstract
PURPOSE The aim of the present study was to characterize maturational changes during the first 6 months of life in the neural encoding of two speech-sound features relevant for early language acquisition: the stimulus fundamental frequency (fo), related to stimulus pitch, and the vowel formant composition, particularly the first formant (F1). The frequency-following response (FFR) was used as a snapshot into the neural encoding of these two stimulus attributes. METHOD FFRs to a consonant-vowel stimulus /da/ were retrieved from electroencephalographic recordings in a sample of 80 healthy infants (45 at birth and 35 at the age of 1 month). Thirty-two infants (16 recorded at birth and 16 recorded at 1 month) returned for a second recording at 6 months of age. RESULTS Stimulus fo and F1 encoding improved from birth to 6 months of age. Most remarkably, a significant improvement in F1 neural encoding was observed during the first month of life. CONCLUSION Our results highlight the rapid and sustained maturation of the basic neural machinery necessary for phoneme discrimination during the first 6 months of life.
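Encoding strength of an FFR component such as the stimulus fo is commonly quantified as the spectral amplitude of the averaged response at the frequency of interest. A minimal sketch of one such FFT-based estimate (the exact metrics and analysis windows used in the study may differ; the bandwidth parameter here is an assumption):

```python
import numpy as np

def spectral_amplitude(response, fs, target_hz, bw_hz=5.0):
    """Mean FFT magnitude of an averaged FFR waveform in a narrow band
    centred on a target frequency (e.g., the stimulus fundamental)."""
    freqs = np.fft.rfftfreq(len(response), 1.0 / fs)
    mag = np.abs(np.fft.rfft(response)) / len(response)
    band = (freqs >= target_hz - bw_hz) & (freqs <= target_hz + bw_hz)
    return mag[band].mean()
```

Comparing this value across recording sessions gives a simple longitudinal index of how strongly a given frequency component is represented in the response.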
Affiliation(s)
- Teresa Ribas-Prats, Gaël Cordero, Sonia Arenillas-Alcón, Jordi Costa-Faidella, Carles Escera: Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Spain; Institute of Neurosciences, University of Barcelona, Spain; Institut de Recerca Sant Joan de Déu, Barcelona, Spain
- Diana Lucia Lip-Sosa, María Dolores Gómez-Roig: Institut de Recerca Sant Joan de Déu, Barcelona, Spain; BCNatal - Barcelona Center for Maternal-Fetal and Neonatal Medicine (Hospital Sant Joan de Déu and Hospital Clínic), University of Barcelona, Spain
7. Li MM, Moberly AC, Tamati TN. Factors affecting talker discrimination ability in adult cochlear implant users. J Commun Disord 2022;99:106255. PMID: 35988314; PMCID: PMC10659049; DOI: 10.1016/j.jcomdis.2022.106255.
Abstract
INTRODUCTION Real-world speech communication involves interacting with many talkers with diverse voices and accents. Many adults with cochlear implants (CIs) demonstrate poor talker discrimination, which may contribute to real-world communication difficulties. However, the factors contributing to talker discrimination ability, and how it relates to speech recognition outcomes in adult CI users, are still unknown. The current study investigated talker discrimination ability in adult CI users and the contributions of age, auditory sensitivity, and neurocognitive skills; in addition, the relation between talker discrimination ability and multiple-talker sentence recognition was explored. METHODS Fourteen post-lingually deaf adult CI users (3 female, 11 male) with at least 1 year of CI use completed a talker discrimination task. Participants listened to two monosyllabic English words, produced by the same talker or by two different talkers, and indicated whether the words were produced by the same or different talkers. Nine female and nine male native English talkers were paired, resulting in same- and different-talker pairs as well as same-gender and mixed-gender pairs. Participants also completed measures of spectro-temporal processing, neurocognitive skills, and multiple-talker sentence recognition. RESULTS CI users showed poor same-gender talker discrimination but relatively good mixed-gender talker discrimination. Older age and weaker neurocognitive skills, in particular inhibitory control, were associated with less accurate mixed-gender talker discrimination, and same-gender discrimination was significantly related to multiple-talker sentence recognition accuracy. CONCLUSION Adult CI users demonstrate overall poor talker discrimination ability. Individual differences in mixed-gender discrimination were related to age and neurocognitive skills, suggesting that these factors contribute to the ability to make use of the available, degraded talker characteristics. Same-gender talker discrimination was associated with multiple-talker sentence recognition, suggesting that access to subtle talker-specific cues may be important for speech recognition in challenging listening conditions.
Affiliation(s)
- Michael M Li, Aaron C Moberly: Department of Otolaryngology - Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, OH, USA
- Terrin N Tamati: Department of Otolaryngology - Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, OH, USA; Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
8. Zhang Y, Chen J, Zhang Y, Sun B, Liu Y. Using Auditory Characteristics to Select Hearing Aid Compression Speeds for Presbycusic Patients. Front Aging Neurosci 2022;14:869338. PMID: 35847672; PMCID: PMC9285002; DOI: 10.3389/fnagi.2022.869338.
Abstract
Objectives: This study aimed to select the optimal hearing aid compression speed (fast-acting vs. slow-acting) for presbycusic patients by using auditory characteristics, including temporal modulation sensitivity and speech-in-noise performance. Methods: In total, 24 patients with unilateral or bilateral moderate sensorineural hearing loss who scored higher than 21 on the Montreal Cognitive Assessment (MoCA) participated. Electrocochleogram (ECochG) results, including summating potentials (SP) and action potentials (AP), were recorded. Subjects' temporal modulation thresholds and speech recognition at four individualized signal-to-noise ratios were measured under three conditions: unaided, aided with fast-acting compression (FAC), and aided with slow-acting compression (SAC). Results: Modulation discrimination thresholds in the unaided (−8.14 dB) and aided SAC (−8.19 dB) conditions were better than those in the FAC condition (−4.67 dB). The speech recognition threshold (SRT75%) for FAC (5.21 dB) did not differ significantly from that for SAC (3.39 dB) (p = 0.12). A decision tree analysis showed that including the AP, the unaided modulation thresholds, and the unaided SRT75% can identify the optimal compression speed (FAC vs. SAC) for individual presbycusic patients with up to 90% accuracy. Conclusion: Both compression speeds improved presbycusic patients' speech recognition in noise, but SAC hearing aids may better preserve modulation thresholds than FAC hearing aids. Measuring the AP, together with the unaided modulation thresholds and unaided SRT75%, may help guide the selection of the optimal compression speed for individual presbycusic patients.
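The fast-acting versus slow-acting distinction comes down to the attack and release time constants of the level estimator that drives the compressor's gain: short constants track the signal envelope closely (and can flatten modulation), while long constants apply a slowly varying gain. A minimal single-channel sketch, for illustration only (the threshold, ratio, and time constants below are arbitrary assumptions, not those of the hearing aids tested):

```python
import numpy as np

def compress(x, fs, threshold_db=-30.0, ratio=3.0, attack_ms=5.0, release_ms=50.0):
    """Broadband dynamic-range compressor with attack/release envelope smoothing.
    Short time constants approximate fast-acting compression; long ones, slow-acting."""
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    y = np.empty_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        a = a_att if level > env else a_rel   # attack when rising, release when falling
        env = a * env + (1.0 - a) * level     # smoothed level estimate
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over = max(0.0, level_db - threshold_db)
        gain_db = -over * (1.0 - 1.0 / ratio)  # gain reduction above threshold
        y[i] = s * 10.0 ** (gain_db / 20.0)
    return y
```

Lengthening `attack_ms` and `release_ms` makes the gain vary more slowly, which is the sense in which slow-acting compression better preserves the signal's own modulations.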
Affiliation(s)
- Yi Zhang: Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jing Chen: School of Electronics Engineering and Computer Science, Peking University, Beijing, China
- Yanmei Zhang: Department of Otorhinolaryngology Head and Neck Surgery, Peking University First Hospital, Beijing, China
- Baoxuan Sun: Widex Hearing Aid (Shanghai) Co., Ltd., Shanghai, China
- Yuhe Liu (corresponding author): Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
9. Jahn KN, Arenberg JG, Horn DL. Spectral Resolution Development in Children With Normal Hearing and With Cochlear Implants: A Review of Behavioral Studies. J Speech Lang Hear Res 2022;65:1646-1658. PMID: 35201848; PMCID: PMC9499384; DOI: 10.1044/2021_jslhr-21-00307.
Abstract
PURPOSE This review article provides a theoretical overview of the development of spectral resolution in children with normal hearing (cNH) and in those who use cochlear implants (CIs), with an emphasis on methodological considerations. The aim was to identify key directions for future research on spectral resolution development in children with CIs. METHOD A comprehensive literature review was conducted to summarize and synthesize previously published behavioral research on spectral resolution development in normal and impaired auditory systems. CONCLUSIONS In cNH, performance on spectral resolution tasks continues to improve through the teenage years and is likely driven by gradual maturation of across-channel intensity resolution. A small but growing body of evidence from children with CIs suggests a more complex relationship between spectral resolution development, patient demographics, and the quality of the CI electrode-neuron interface. Future research should aim to distinguish between the effects of patient-specific variables and the underlying physiology on spectral resolution abilities in children of all ages who are hard of hearing and use auditory prostheses.
Affiliation(s)
- Kelly N. Jahn: Department of Speech, Language, and Hearing, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson; Callier Center for Communication Disorders, The University of Texas at Dallas
- Julie G. Arenberg: Department of Otolaryngology - Head and Neck Surgery, Harvard Medical School, Boston, MA; Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston
- David L. Horn: Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology - Head and Neck Surgery, University of Washington, Seattle; Division of Otolaryngology, Seattle Children's Hospital, WA
10. Cieśla K, Wolak T, Lorens A, Mentzel M, Skarżyński H, Amedi A. Effects of training and using an audio-tactile sensory substitution device on speech-in-noise understanding. Sci Rep 2022;12:3206. PMID: 35217676; PMCID: PMC8881456; DOI: 10.1038/s41598-022-06855-8.
Abstract
Understanding speech in background noise is challenging, and wearing face masks, as imposed by the COVID-19 pandemic, makes it even harder. We developed a multi-sensory setup including a sensory substitution device (SSD) that can deliver speech simultaneously through audition and as vibrations on the fingertips, with the vibrations corresponding to low frequencies extracted from the speech input. We trained two groups of non-native English speakers to understand distorted speech in noise. After a short session (30-45 min) of repeating sentences, with or without concurrent matching vibrations, both groups showed a comparable mean improvement of 14-16 dB in speech reception threshold (SRT) in two test conditions: when participants repeated sentences from hearing alone, and when matching vibrations on the fingertips were also present. This is a very strong effect, considering that a 10 dB difference corresponds to a doubling of perceived loudness. The number of sentence repetitions needed to complete each type of training was comparable, but the mean group SNR for the audio-tactile training (14.7 ± 8.7) was significantly lower (i.e., harder) than for the auditory training (23.9 ± 11.8), indicating a potential facilitating effect of the added vibrations. In addition, both before and after training, most participants (70-80%) performed better (by 4-6 dB on average) in speech-in-noise understanding when the audio sentences were accompanied by matching vibrations; this is the same magnitude of multisensory benefit that we reported, with no training at all, in our previous study using the same experimental procedures. After training, performance in this condition was also best in both groups (SRT ~ 2 dB). The smallest effect of both training types was found in the third test condition, in which participants repeated sentences accompanied by non-matching tactile vibrations; performance in this condition was also poorest after training. The results indicate that both types of training may remove some of the difficulty in sound perception and thereby enable more effective use of speech inputs delivered via vibrotactile stimulation. We discuss the implications of these novel findings for basic science. In particular, we show that even in adulthood, long after the classical "critical periods" of development have passed, a new pairing between a given computation (here, speech processing) and an atypical sensory modality (here, touch) can be established and trained, and that this process can be rapid and intuitive. We further present possible applications of our training program and the SSD for auditory rehabilitation in patients with hearing (and sight) deficits, as well as for healthy individuals in suboptimal acoustic conditions.
Affiliation(s)
- K Cieśla
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel; World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- T Wolak
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- A Lorens
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- M Mentzel
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel
- H Skarżyński
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- A Amedi
- The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel
11
Bühling B, Maack S, Schweitzer T, Strangfeld C. Enhancing the spectral signatures of ultrasonic fluidic transducer pulses for improved time-of-flight measurements. ULTRASONICS 2022; 119:106612. [PMID: 34735931 DOI: 10.1016/j.ultras.2021.106612] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Revised: 09/15/2021] [Accepted: 10/08/2021] [Indexed: 06/13/2023]
Abstract
Air-coupled ultrasonic (ACU) testing has proven valuable for increasing the speed of non-destructive ultrasonic testing and for investigating sensitive specimens. A major obstacle to implementing ACU methods is the significant signal power loss at the air-specimen and transducer-air interfaces. The loss between transducer and air can be eliminated by using recently developed fluidic transducers, which use pressurized air and a natural flow instability to generate high-sound-power signals. Due to this self-excited flow instability, the individual pulses are dissimilar in length, amplitude, and phase. These amplitude- and angle-modulated pulses offer an opportunity to further increase the signal-to-noise ratio with pulse compression methods. In practice, multi-input multi-output (MIMO) setups reduce the time required to scan the specimen surface but demand high pulse discriminability. Applying envelope removal techniques to the individual pulses increases their discriminability, so that only the remaining phase information is targeted for analysis. Finally, semi-synthetic experiments are presented that verify the applicability of the envelope removal method and highlight the suitability of the fluidic transducer for MIMO setups.
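One common way to strip a pulse's amplitude envelope while keeping its phase is to normalize by the instantaneous (Hilbert) envelope. This is a hypothetical sketch of that general idea; the paper's exact envelope removal technique is not specified in the abstract:

```python
import numpy as np
from scipy.signal import hilbert

def remove_envelope(pulse):
    """Divide the signal by its instantaneous (Hilbert) envelope, leaving a
    unit-amplitude carrier that retains only the phase information."""
    analytic = hilbert(pulse)
    envelope = np.abs(analytic)
    envelope = np.maximum(envelope, 1e-12)  # guard against division by zero
    return pulse / envelope

# Demo: an amplitude-modulated tone burst becomes (nearly) constant-amplitude.
fs = 1_000_000                       # 1 MHz sampling, illustrative
t = np.arange(2000) / fs
pulse = (1 + 0.8 * np.sin(2 * np.pi * 5000 * t)) * np.sin(2 * np.pi * 50_000 * t)
flat = remove_envelope(pulse)
```

After this step, cross-correlation between pulses compares phase trajectories only, which is what makes dissimilar self-excited pulses discriminable in a pulse compression scheme.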
Affiliation(s)
- Benjamin Bühling
- Bundesanstalt für Materialforschung und -prüfung (BAM), Unter den Eichen 87, 12205 Berlin, Germany
- Stefan Maack
- Bundesanstalt für Materialforschung und -prüfung (BAM), Unter den Eichen 87, 12205 Berlin, Germany
- Christoph Strangfeld
- Bundesanstalt für Materialforschung und -prüfung (BAM), Unter den Eichen 87, 12205 Berlin, Germany
12
Wang X, Zhang Y, Bai S, Qi R, Sun H, Li R, Zhu L, Cao X, Jia G, Li X, Gao L. Corticofugal Modulation of Temporal and Rate Representations in the Inferior Colliculus of the Awake Marmoset. Cereb Cortex 2022; 32:4080-4097. [PMID: 35029654 DOI: 10.1093/cercor/bhab467] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2021] [Revised: 10/12/2021] [Accepted: 11/16/2021] [Indexed: 11/14/2022] Open
Abstract
Temporal processing is crucial for auditory perception and cognition, especially for communication sounds. Previous studies have shown that the auditory cortex and the thalamus use temporal and rate representations to encode slowly and rapidly changing time-varying sounds. However, how the primate inferior colliculus (IC) encodes time-varying sounds at the millisecond scale remains unclear. In this study, we investigated temporal processing by IC neurons in awake marmosets using Gaussian click trains with varying interclick intervals (2-100 ms). Strikingly, we found that 28% of IC neurons exhibited rate representation with nonsynchronized responses, in sharp contrast to the current view that the IC uses only a temporal representation to encode time-varying signals. Moreover, IC neurons with rate representation exhibited response properties distinct from those with temporal representation. We further demonstrated that reversible inactivation of the primary auditory cortex modulated 17% of the stimulus-synchronized responses and 21% of the nonsynchronized responses of IC neurons, revealing that cortico-collicular projections play a role, though not a crucial one, in temporal processing in the IC. This study significantly advances our understanding of temporal processing in the IC of awake animals and provides new insights into temporal processing from the midbrain to the cortex.
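Stimulus-synchronized versus nonsynchronized responses are conventionally separated by measuring phase locking to the click period, typically with vector strength. A sketch of that computation (the paper's exact classification criteria are not given in the abstract):

```python
import numpy as np

def vector_strength(spike_times, interclick_interval):
    """Vector strength of spike times relative to a periodic click train:
    1.0 = perfect phase locking (temporal representation),
    ~0  = no synchrony, consistent with a rate-type response."""
    phases = 2 * np.pi * (np.asarray(spike_times) % interclick_interval) / interclick_interval
    return np.abs(np.mean(np.exp(1j * phases)))

# Perfectly locked spikes (fixed latency after each click) vs. random spikes.
ici = 0.01                                   # 10 ms interclick interval
locked = np.arange(0.0, 1.0, ici) + 0.002    # one spike 2 ms after each click
rng = np.random.default_rng(0)
random_spikes = rng.uniform(0.0, 1.0, 100)   # Poisson-like, unlocked
```

In practice a significance criterion (e.g. a Rayleigh test on the same phases) decides whether a neuron counts as synchronized at a given interclick interval.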
Affiliation(s)
- Xiaohui Wang
- Department of Neurology of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering and Instrument Science, School of Medicine, Zhejiang University, Hangzhou 310000, China
- Yuanqing Zhang
- Department of Neurology of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering and Instrument Science, School of Medicine, Zhejiang University, Hangzhou 310000, China
- Siyi Bai
- Department of Neurology of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering and Instrument Science, School of Medicine, Zhejiang University, Hangzhou 310000, China
- Runze Qi
- Department of Neurology of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering and Instrument Science, School of Medicine, Zhejiang University, Hangzhou 310000, China
- Hao Sun
- Department of Neurology of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering and Instrument Science, School of Medicine, Zhejiang University, Hangzhou 310000, China
- Rui Li
- Department of Neurology of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering and Instrument Science, School of Medicine, Zhejiang University, Hangzhou 310000, China
- Lin Zhu
- Department of Neurology of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering and Instrument Science, School of Medicine, Zhejiang University, Hangzhou 310000, China
- Xinyuan Cao
- Department of Neurology of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering and Instrument Science, School of Medicine, Zhejiang University, Hangzhou 310000, China
- Guoqiang Jia
- Department of Neurology of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering and Instrument Science, School of Medicine, Zhejiang University, Hangzhou 310000, China
- Xinjian Li
- Department of Neurology of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering and Instrument Science, School of Medicine, Zhejiang University, Hangzhou 310000, China
- Lixia Gao
- Department of Neurology of the Second Affiliated Hospital, Interdisciplinary Institute of Neuroscience and Technology, College of Biomedical Engineering and Instrument Science, School of Medicine, Zhejiang University, Hangzhou 310000, China
13
Abstract
Human speech perception results from neural computations that transform external acoustic speech signals into internal representations of words. The superior temporal gyrus (STG) contains the nonprimary auditory cortex and is a critical locus for phonological processing. Here, we describe how speech sound representation in the STG relies on fundamentally nonlinear and dynamical processes, such as categorization, normalization, contextual restoration, and the extraction of temporal structure. A spatial mosaic of local cortical sites on the STG exhibits complex auditory encoding for distinct acoustic-phonetic and prosodic features. We propose that as a population ensemble, these distributed patterns of neural activity give rise to abstract, higher-order phonemic and syllabic representations that support speech perception. This review presents a multi-scale, recurrent model of phonological processing in the STG, highlighting the critical interface between auditory and language systems.
Affiliation(s)
- Ilina Bhaya-Grossman
- Department of Neurological Surgery, University of California, San Francisco, California 94143, USA; Joint Graduate Program in Bioengineering, University of California, Berkeley and San Francisco, California 94720, USA
- Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, California 94143, USA
14
Mo J, Jiam NT, Deroche MLD, Jiradejvong P, Limb CJ. Effect of Frequency Response Manipulations on Musical Sound Quality for Cochlear Implant Users. Trends Hear 2022; 26:23312165221120017. [PMID: 35983700 PMCID: PMC9393940 DOI: 10.1177/23312165221120017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Cochlear implant (CI) users commonly report degraded musical sound quality. To improve CI-mediated music perception and enjoyment, we must understand the factors that affect sound quality. In the present study, we used frequency response manipulation (FRM), a process that adjusts the energies of frequency bands within an audio signal, to determine its impact on CI users' sound quality assessments of musical stimuli. Thirty-three adult CI users completed an online study and listened to FRM-altered clips derived from the top songs in Billboard magazine. Participants assessed sound quality using the MUltiple Stimulus with Hidden Reference and Anchor for CI users (CI-MUSHRA) rating scale. FRM affected sound quality ratings (SQR): increasing the gain for low and mid-range frequencies led to higher quality ratings than reducing it, whereas manipulating the gain for high frequencies (those above 2 kHz) had no impact. Participants with musical training were more sensitive to FRM than non-musically trained participants and demonstrated a preference for gain increases over reductions. These findings suggest that, even among CI users, past musical training provides listeners with sensitivity to subtleties in musical appraisal, even though their hearing is now mediated electrically and bears little resemblance to their musical experience prior to implantation. Increased gain below 2 kHz may lead to higher sound quality than equivalent reductions, perhaps because it offers greater access to lyrics in songs or because it provides more salient beat sensations.
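FRM boils down to scaling the energy of a frequency band by some gain in dB. A crude, hypothetical stand-in using an FFT mask (the study's actual filters and band edges beyond the 2 kHz split are not specified in the abstract):

```python
import numpy as np

def apply_band_gain(signal, fs, f_lo, f_hi, gain_db):
    """Scale one frequency band of `signal` by gain_db using an FFT mask --
    an illustrative stand-in for frequency response manipulation."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    spec[band] *= 10 ** (gain_db / 20)          # dB -> linear amplitude
    return np.fft.irfft(spec, n=len(signal))

# Demo: +6 dB below 2 kHz applied to a 500 Hz + 4 kHz mixture.
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 4000 * t)
y = apply_band_gain(x, fs, 0, 2000, 6.0)
```

A real FRM implementation would more likely use smooth shelving or graphic-EQ filters to avoid the ringing a brick-wall FFT mask introduces; the block above only illustrates the band-energy arithmetic.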
Affiliation(s)
- Jonathan Mo
- Davis School of Medicine, University of California, Sacramento, CA, USA
- Nicole T Jiam
- Department of Otolaryngology-Head and Neck Surgery, San Francisco School of Medicine, University of California, San Francisco, CA, USA
- Patpong Jiradejvong
- Department of Otolaryngology-Head and Neck Surgery, San Francisco School of Medicine, University of California, San Francisco, CA, USA
- Charles J Limb
- Department of Otolaryngology-Head and Neck Surgery, San Francisco School of Medicine, University of California, San Francisco, CA, USA
15
The effect of harmonic training on speech perception in noise in hearing-impaired children. Int J Pediatr Otorhinolaryngol 2021; 149:110845. [PMID: 34293627 DOI: 10.1016/j.ijporl.2021.110845] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Revised: 05/16/2021] [Accepted: 07/16/2021] [Indexed: 11/21/2022]
Abstract
OBJECTIVE Speech perception in noise is a highly challenging situation for hearing-impaired children (HIC). Despite advances in hearing aid technologies, speech perception in noise still poses challenges. Pitch-based training improves pitch discrimination and speech perception and may facilitate concurrent sound segregation. Considering the role of harmonics in the analysis of concurrent sounds, we performed a harmonic assessment, examined the role of harmonic training in the rehabilitation of children with moderate-to-severe hearing loss, and investigated its effect on their speech perception in noise. METHODS The participants were 57 normally hearing children (NHC) with a mean age of 7.73 ± 1.57 years and 18 HIC with a mean age of 7.94 ± 1.47 years. The two groups were compared in terms of harmonic assessment, the Pitch Pattern Sequence Test (PPST), the Consonant-Vowel in Noise (CV in noise) test, and the Bamford-Kowal-Bench (BKB) test. Subsequently, the HIC underwent harmonic training, and the results of the pre- and post-training assessments were compared. RESULTS HIC displayed poorer harmonic discrimination than NHC at all harmonics (P < 0.05). They also showed lower scores on the PPST, CV in noise, and BKB tests compared to NHC (P < 0.05). Harmonic training led to better HIC performance in the harmonic assessment, PPST, and CV in noise test (P < 0.05). However, the BKB test results pre- and post-training did not differ significantly (P > 0.05). CONCLUSION Harmonic training plays a significant role in improving HIC's performance on the PPST and the CV in noise test; it can therefore serve as a rehabilitation method to enhance temporal processing and auditory scene analysis.
16
TÜRK Ç, KÖSEOĞLU A, ZEREN S. Music Perception in Individuals with Hearing Loss [İşitme Kayıplı Bireylerde Müzik Algısı]. İSTANBUL GELIŞIM ÜNIVERSITESI SAĞLIK BILIMLERI DERGISI 2021. [DOI: 10.38079/igusabder.947027] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022] Open
17
Johnson KC, Xie Z, Shader MJ, Mayo PG, Goupell MJ. Effect of Chronological Age on Pulse Rate Discrimination in Adult Cochlear-Implant Users. Trends Hear 2021; 25:23312165211007367. [PMID: 34028313 PMCID: PMC8150454 DOI: 10.1177/23312165211007367] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Cochlear-implant (CI) users rely heavily on temporal envelope cues to understand speech. Temporal processing abilities may decline with advancing age in adult CI users. This study investigated the effect of age on the ability to discriminate changes in pulse rate. Twenty CI users aged 23 to 80 years participated in a rate discrimination task. They attempted to discriminate a 35% rate increase from baseline rates of 100, 200, 300, 400, or 500 pulses per second. The stimuli were electrical pulse trains delivered to a single electrode via direct stimulation to an apical (Electrode 20), a middle (Electrode 12), or a basal location (Electrode 4). Electrically evoked compound action potential amplitude growth functions were recorded at each of those electrodes as an estimate of peripheral neural survival. Results showed that temporal pulse rate discrimination performance declined with advancing age at higher stimulation rates (e.g., 500 pulses per second) when compared with lower rates. The age-related changes in temporal pulse rate discrimination at higher stimulation rates persisted after statistical analysis to account for the estimated peripheral contributions from electrically evoked compound action potential amplitude growth functions. These results indicate the potential contributions of central factors to the limitations in temporal pulse rate discrimination ability associated with aging in CI users.
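The discrimination task compares a baseline pulse rate with one 35% higher. The stimulus arithmetic can be sketched as follows (a hypothetical helper; the actual pulse shapes and levels used in direct stimulation are not described in the abstract):

```python
import numpy as np

def pulse_times(rate_pps, duration_s):
    """Onset times (seconds) of a uniform electrical pulse train."""
    return np.arange(0.0, duration_s, 1.0 / rate_pps)

base_rates = [100, 200, 300, 400, 500]                 # pulses per second
target_rates = [round(r * 1.35) for r in base_rates]   # 35% rate increase
```

At the higher baseline rates the interpulse interval shrinks toward a few milliseconds (2.0 ms at 500 pps vs. 1.48 ms at 675 pps), which is where the study reports age-related declines.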
Affiliation(s)
- Kelly C Johnson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, United States
- Zilong Xie
- Department of Hearing and Speech, University of Kansas Medical Center, Kansas City, United States
- Maureen J Shader
- Department of Hearing and Speech Sciences, University of Maryland, College Park, United States; Bionics Institute, Melbourne, Australia; Department of Medical Bionics, The University of Melbourne, Melbourne, Australia
- Paul G Mayo
- Department of Hearing and Speech Sciences, University of Maryland, College Park, United States
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, United States
18
Ab Shukor NF, Han W, Lee J, Seo YJ. Crucial Music Components Needed for Speech Perception Enhancement of Pediatric Cochlear Implant Users: A Systematic Review and Meta-Analysis. Audiol Neurootol 2021; 26:389-413. [PMID: 33878756 DOI: 10.1159/000515136] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2020] [Accepted: 02/08/2021] [Indexed: 11/19/2022] Open
Abstract
BACKGROUND Although many clinicians have attempted music training for hearing-impaired children, no specific effects have yet been reported for individual music components. This paper seeks to discover which specific music components help improve the speech perception of children with cochlear implants (CI) and to identify the effective training periods and methods needed for each component. METHOD In a search of 5 electronic databases (ScienceDirect, Scopus, PubMed, CINAHL, and Web of Science), 1,638 articles were found initially. After the screening and eligibility assessment stage, based on the Participants, Intervention, Comparisons, Outcome, and Study Design (PICOS) inclusion criteria, 18 of 1,449 articles were chosen. RESULTS A total of 18 studies were analyzed in the systematic review and 14 studies (209 participants) in the meta-analysis. No publication bias was detected based on an Egger's regression result, even though the funnel plot was asymmetrical. The meta-analysis revealed that after music training the largest improvement was seen for rhythm perception, followed by the perception of pitch and harmony, and the smallest for timbre perception. The duration of training affected rhythm, pitch, and harmony perception, but not timbre. Interestingly, musical activities such as singing produced the biggest effect size, implying that children with CI obtained the greatest benefits of music training by singing, followed by playing an instrument, and the smallest by only listening to musical stimuli. Significant improvement in pitch perception helped enhance prosody perception. CONCLUSION Music training can improve the music perception of children with CI and enhance their speech prosody. Longer training durations provided larger training effects. The children with CI learned rhythm and pitch better than harmony and timbre. These results support the findings of past studies that with music training both rhythm and pitch perception can be improved, which also helps the development of prosody perception.
Affiliation(s)
- Nor Farawaheeda Ab Shukor
- Laboratory of Hearing and Technology, Research Institute of Audiology and Speech Pathology, College of Natural Sciences, Hallym University, Chuncheon, Republic of Korea; Division of Speech Pathology and Audiology, College of Natural Sciences, Hallym University, Chuncheon, Republic of Korea
- Woojae Han
- Laboratory of Hearing and Technology, Research Institute of Audiology and Speech Pathology, College of Natural Sciences, Hallym University, Chuncheon, Republic of Korea; Division of Speech Pathology and Audiology, College of Natural Sciences, Hallym University, Chuncheon, Republic of Korea
- Jihyeon Lee
- Laboratory of Hearing and Technology, Research Institute of Audiology and Speech Pathology, College of Natural Sciences, Hallym University, Chuncheon, Republic of Korea; Research Institute of Hearing Enhancement, Yonsei University Wonju College of Medicine, Wonju, Republic of Korea
- Young Joon Seo
- Research Institute of Hearing Enhancement, Yonsei University Wonju College of Medicine, Wonju, Republic of Korea; Department of Otorhinolaryngology, Yonsei University Wonju College of Medicine, Wonju, Republic of Korea
19
Bourke JD, Todd J. Acoustics versus linguistics? Context is Part and Parcel to lateralized processing of the parts and parcels of speech. Laterality 2021; 26:725-765. [PMID: 33726624 DOI: 10.1080/1357650x.2021.1898415] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
The purpose of this review is to provide an accessible exploration of key considerations of lateralization in speech and non-speech perception using clear and defined language. From these considerations, the primary arguments for each side of the linguistics versus acoustics debate are outlined and explored in context of emerging integrative theories. This theoretical approach entails a perspective that linguistic and acoustic features differentially contribute to leftward bias, depending on the given context. Such contextual factors include stimulus parameters and variables of stimulus presentation (e.g., noise/silence and monaural/binaural) and variances in individuals (sex, handedness, age, and behavioural ability). Discussion of these factors and their interaction is also aimed towards providing an outline of variables that require consideration when developing and reviewing methodology of acoustic and linguistic processing laterality studies. Thus, there are three primary aims in the present paper: (1) to provide the reader with key theoretical perspectives from the acoustics/linguistics debate and a synthesis of the two viewpoints, (2) to highlight key caveats for generalizing findings regarding predominant models of speech laterality, and (3) to provide a practical guide for methodological control using predominant behavioural measures (i.e., gap detection and dichotic listening tasks) and/or neurophysiological measures (i.e., mismatch negativity) of speech laterality.
Affiliation(s)
- Jesse D Bourke
- School of Psychology, University Drive, Callaghan, NSW 2308, Australia
- Juanita Todd
- School of Psychology, University Drive, Callaghan, NSW 2308, Australia
20
Abstract
Signal processing algorithms are the hidden components of the audio processor that convert the received acoustic signal into electrical impulses while maintaining as much relevant information as possible. Signal processing algorithms should be smart enough to mimic the functionality of the outer, middle, and inner ear to provide the cochlear implant (CI) user with a hearing experience as natural as possible. Modern sound processing strategies are based on the continuous interleaved sampling (CIS) strategy proposed by B. Wilson in 1991, which provided envelope information over several intracochlear electrodes. The CIS strategy brought significant gains in speech perception. Translational research activities at MED-EL resulted in further improvements in speech understanding in noisy environments, as well as enjoyment of music, by coding not only CIS-based envelope information but also temporal fine structure information in the stimulation patterns of the apical channels. Further developments include "complete cochlear coverage" made possible by deep insertion of the intracochlear electrode, elaborate front-end processing, anatomy-based fitting (ABF), triphasic pulse stimulation instrumental in the suppression of facial nerve stimulation, and bimodal delay compensation allowing unilateral CI users to experience hearing with a hearing aid on the contralateral ear. The many hardware developments might be exemplified by the RONDO, the world's first single-unit audio processor, in 2013. This article covers the milestones of translational research on signal processing and audio processors that took place in association with MED-EL.
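The CIS principle described above (bandpass filterbank, per-channel envelope extraction, envelopes delivered as interleaved pulse amplitudes) can be sketched as follows. This is a didactic simplification, not MED-EL's implementation; the channel count and band edges are illustrative:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def cis_envelopes(audio, fs, edges=(200, 500, 1000, 2000, 4000, 7000)):
    """Split audio into bandpass channels and return each channel's slowly
    varying envelope -- the quantity a CIS-type strategy encodes as the
    amplitudes of interleaved stimulation pulses. Band edges are
    illustrative, not a clinical frequency allocation."""
    envs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, audio)
        envs.append(np.abs(hilbert(band)))   # Hilbert magnitude as envelope
    return np.array(envs)                    # shape: (n_channels, n_samples)

fs = 16000
t = np.arange(fs // 2) / fs
audio = np.sin(2 * np.pi * 300 * t)          # energy only in the 200-500 Hz band
envs = cis_envelopes(audio, fs)
```

Fine structure strategies differ from this sketch precisely in that, on the apical channels, the pulse timing also tracks the band signal's zero crossings rather than a fixed interleaved clock.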
Affiliation(s)
- Ingeborg Hochmair
- MED-EL Elektromedizinische Geraete Gesellschaft m.b.H., Innsbruck, Austria
21
Müller V, Klünter HD, Fürstenberg D, Walger M, Lang-Roth R. Comparison of the Effects of Two Cochlear Implant Fine Structure Coding Strategies on Speech Perception. Am J Audiol 2020; 29:226-235. [PMID: 32464082 DOI: 10.1044/2020_aja-19-00110] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
Purpose This study investigates the effect of upgrading from the fine structure processing (FSP) coding strategy to the novel fine structure strategy "FS4" in adults with cochlear implants manufactured by MED-EL GmbH (Innsbruck, Austria). Method A crossover, double-blinded study was conducted over 12 weeks. Twelve adult participants were randomly assigned to two groups. During the first 6-week test interval, one group continued to use their everyday FSP strategy, whereas the other group was upgraded to the FS4 strategy. In the second 6-week interval, the two groups switched coding strategies. Speech perception was measured at the end of each test interval with the Oldenburg Sentence Test and the Göttingen Sentence Test. Participants completed the Speech, Spatial and Qualities of Hearing Scale at the end of each test interval and a simple preference test at the end of the study. Results There was no significant difference in speech perception test results obtained with the Oldenburg Sentence Test or the Göttingen Sentence Test, either in quiet or in noise. Participants' Speech, Spatial and Qualities of Hearing Scale self-evaluations and preference test results showed that the two coding strategies had similar effects on their hearing perception; no clear preference for either strategy was found. Conclusions Speech perception test results and participants' level of satisfaction were similar for the two FS coding strategies. Despite differences in the presentation of temporal fine structure between FSP and FS4, a clear benefit of the newer FS4 strategy could not be shown.
Affiliation(s)
- Verena Müller
- Department of Otorhinolaryngology, Head and Neck Surgery and Cochlear Implant Center, Faculty of Medicine, University of Cologne, Germany
- Heinz Dieter Klünter
- Department of Otorhinolaryngology, Head and Neck Surgery and Cochlear Implant Center, Faculty of Medicine, University of Cologne, Germany
- Dirk Fürstenberg
- Department of Otorhinolaryngology, Head and Neck Surgery and Cochlear Implant Center, Faculty of Medicine, University of Cologne, Germany
- Martin Walger
- Department of Otorhinolaryngology, Head and Neck Surgery and Cochlear Implant Center, Faculty of Medicine, University of Cologne, Germany
- Ruth Lang-Roth
- Department of Otorhinolaryngology, Head and Neck Surgery and Cochlear Implant Center, Faculty of Medicine, University of Cologne, Germany
22
Liepins R, Kaider A, Honeder C, Auinger AB, Dahm V, Riss D, Arnoldner C. Formant frequency discrimination with a fine structure sound coding strategy for cochlear implants. Hear Res 2020; 392:107970. [PMID: 32339775 DOI: 10.1016/j.heares.2020.107970] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/14/2019] [Revised: 03/04/2020] [Accepted: 04/05/2020] [Indexed: 11/16/2022]
Abstract
Recent sound coding strategies for cochlear implants (CI) have focused on the transmission of temporal fine structure to the CI recipient. To date, the effects of fine structure coding in electrical hearing remain poorly characterized. The aim of this study was to examine whether the presence of temporal fine structure coding affects how the CI recipient perceives sound. This was done by comparing two sound coding strategies with different temporal fine structure coverage in a longitudinal crossover setting. The more recent FS4 coding strategy provides fine structure coding on typically four apical stimulation channels, compared with FSP, which usually has one or two fine structure channels. 34 adult CI patients with a minimum of one year of CI experience were included. All subjects were fitted according to clinical routine and used both coding strategies for three months each in a randomized sequence. Formant frequency discrimination thresholds (FFDT) were measured to assess the ability to resolve timbre information. Further outcome measures included a monosyllables test in quiet and the speech reception threshold of an adaptive matrix sentence test in noise (Oldenburg sentence test). In addition, subjective sound quality was assessed using visual analogue scales and a sound quality questionnaire after each three-month period. The extended fine structure range of FS4 yields FFDT similar to FSP for formants occurring in the frequency range covered only by FS4. There is a significant interaction (p = 0.048) between the extent of fine structure coverage in FSP and the improvement in FFDT in favour of FS4 for these stimuli. Speech perception in noise and in quiet was similar with both coding strategies. Sound quality was rated heterogeneously, showing that both strategies represent valuable options for CI fitting to allow for the best possible individual optimization.
Affiliation(s)
- R Liepins
- Medical University of Vienna, Department of Otolaryngology, Head and Neck Surgery, Vienna, Austria
- A Kaider
- Medical University of Vienna, Center for Medical Statistics, Informatics, and Intelligent Systems, Vienna, Austria
- C Honeder
- Medical University of Vienna, Department of Otolaryngology, Head and Neck Surgery, Vienna, Austria
- A B Auinger
- Medical University of Vienna, Department of Otolaryngology, Head and Neck Surgery, Vienna, Austria
- V Dahm
- Medical University of Vienna, Department of Otolaryngology, Head and Neck Surgery, Vienna, Austria
- D Riss
- Medical University of Vienna, Department of Otolaryngology, Head and Neck Surgery, Vienna, Austria
- C Arnoldner
- Medical University of Vienna, Department of Otolaryngology, Head and Neck Surgery, Vienna, Austria
23
Ismaail NM, Shalaby AA, Ibraheem OA. Effect of age on Gaps-In-Noise test in pediatric population. Int J Pediatr Otorhinolaryngol 2019; 122:155-160. [PMID: 31029950 DOI: 10.1016/j.ijporl.2019.04.010] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/28/2019] [Revised: 04/10/2019] [Accepted: 04/10/2019] [Indexed: 10/27/2022]
Abstract
OBJECTIVES The main objective was to examine the effect of central maturation on auditory temporal resolution in a group of school-age children using the Gaps-In-Noise test. METHODS The study involved 180 children (6-16 years) with normal hearing, average intelligence and language skills, and adequate scholastic achievement. Subjects were divided into four age subgroups. Investigations involved a basic audiological evaluation, a screening test battery for central auditory processing, and finally the Gaps-In-Noise test. RESULTS Comparison of the four age subgroups revealed a non-significant age effect on the Gaps-In-Noise test. The approximate gap detection threshold of the children was comparable to that of adults. Equivalent data were obtained as a function of ear, gender, list, and retest. CONCLUSION Central auditory maturation of temporal resolution, and hence of Gaps-In-Noise performance, is established by age 5 years. Consequently, assessment of school-age children with the Gaps-In-Noise test provided adult-like normative data. The stability of outcomes across different factors highlights the clinical validity of the Gaps-In-Noise test in the assessment of temporal resolution deficits and in follow-up after remediation.
Affiliation(s)
- Naema M Ismaail, Audio-vestibular Medicine Unit, Department of Otolaryngology-Head and Neck Surgery, Faculty of Medicine (Girls), University of AL-Azhar, Cairo, Egypt
- Amany A Shalaby, Audio-vestibular Medicine Unit, Department of Otolaryngology-Head and Neck Surgery, Faculty of Medicine, University of Ain Shams, Cairo, Egypt
- Ola A Ibraheem, Audio-vestibular Medicine Unit, Department of Otolaryngology-Head and Neck Surgery, Faculty of Medicine, University of Zagazig, Zagazig, Egypt
|
24
|
Cieśla K, Wolak T, Lorens A, Heimler B, Skarżyński H, Amedi A. Immediate improvement of speech-in-noise perception through multisensory stimulation via an auditory to tactile sensory substitution. Restor Neurol Neurosci 2019; 37:155-166. [PMID: 31006700 PMCID: PMC6598101 DOI: 10.3233/rnn-190898] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
BACKGROUND Hearing loss is becoming a serious social and health problem; its prevalence in the elderly has reached epidemic proportions, and the risk of developing hearing loss is also growing among younger people. If left untreated, hearing loss can contribute to the development of neurodegenerative diseases, including dementia. Despite recent advancements in hearing aid (HA) and cochlear implant (CI) technologies, hearing-impaired users still encounter significant practical and social challenges, with or without aids. In particular, they all struggle with understanding speech in challenging acoustic environments, especially in the presence of a competing speaker. OBJECTIVES In the current proof-of-concept study we tested whether multisensory stimulation pairing audition with a minimal-size touch device would improve intelligibility of speech in noise. METHODS To this aim we developed an audio-to-tactile sensory substitution device (SSD) transforming low-frequency speech signals into tactile vibrations delivered on two fingertips. Based on the inverse effectiveness law, i.e., that multisensory enhancement is strongest when the signal-to-noise ratio between senses is lowest, we embedded non-native language stimuli in speech-like noise and paired them with a low-frequency input conveyed through touch. RESULTS We found an immediate and robust improvement in speech recognition (i.e., in the signal-to-noise ratio) in the multisensory condition without any training, at the group level as well as in every participant. The reported group-level improvement of 6 dB is substantial, considering that an increase of 10 dB represents a doubling of perceived loudness. CONCLUSIONS These results are especially relevant when compared with previous SSD studies showing behavioral effects only after demanding cognitive training. We discuss the implications of our results for the development of SSDs and of specific rehabilitation programs for the hearing impaired, whether or not they use HAs or CIs. We also discuss the potential application of such a set-up for sense augmentation, such as when learning a new language.
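As an aside on the decibel arithmetic above: the 6 dB group-level gain can be put in loudness terms with the common rule of thumb that every +10 dB roughly doubles perceived loudness. A minimal sketch (illustrative only; the exponential loudness approximation is an assumption, not part of the study):

```python
# Rule-of-thumb loudness model: perceived loudness roughly doubles per +10 dB.
def loudness_ratio(delta_db: float) -> float:
    """Approximate perceived-loudness ratio for a level change of delta_db dB."""
    return 2.0 ** (delta_db / 10.0)

print(loudness_ratio(10.0))           # 2.0, by definition of the rule of thumb
print(round(loudness_ratio(6.0), 2))  # ~1.52 for the reported 6 dB improvement
```

Under this approximation, the reported 6 dB improvement corresponds to roughly a 1.5-fold change in perceived loudness.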
Affiliation(s)
- Katarzyna Cieśla, Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw, Poland; Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Tomasz Wolak, Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw, Poland
- Artur Lorens, Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw, Poland
- Benedetta Heimler, Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Henryk Skarżyński, Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw, Poland
- Amir Amedi, Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; The Cognitive Science Program, The Hebrew University of Jerusalem, Jerusalem, Israel
|
25
|
Lotfi Y, Ahmadi T, Moossavi A, Bakhshi E. Binaural sensitivity to temporal fine structure and lateralization ability in children with suspected (central) auditory processing disorder. Auris Nasus Larynx 2018; 46:64-69. [PMID: 29954636 DOI: 10.1016/j.anl.2018.06.005] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2018] [Revised: 06/11/2018] [Accepted: 06/17/2018] [Indexed: 11/26/2022]
Abstract
OBJECTIVE Previous studies have shown that a subgroup of children with suspected (central) auditory processing disorder (SusCAPD) have an insufficient ability to use binaural cues to benefit from spatial processing. Thus, they experience considerable listening difficulties in challenging auditory environments, such as classrooms. Some researchers have also indicated a probable role of binaural temporal fine structure (TFS) in the perceptual segregation of a target signal from noise, and hence in speech perception in noise. Therefore, in the present study, in order to further investigate the underlying reason for listening problems in background noise in this group of children, their performance was measured using a binaural TFS sensitivity test (TFS-LF) as well as a behavioral auditory lateralization-in-noise test, both of which depend on binaural temporal cue processing. METHODS Participants in this analytical study included 91 children with normal hearing and no listening problems and 41 children (9-12 years old) with SusCAPD who found it challenging to understand speech in noise. Initially, the ability to use binaural TFS was measured at three frequencies (250, 500 and 750 Hz) in both groups, and the results of these preliminary evaluations were compared between the normal children and those with SusCAPD. Thereafter, the binaural performance of the 16 children with SusCAPD whose TFS-LF thresholds were higher than those of the normal group at all three frequencies was examined using the lateralization test at 7 spatial locations. RESULTS In total, 16 of the 41 children with SusCAPD (39%) showed poor performance on the TFS-LF test at all three frequencies, compared with both the normal children and the other children in the SusCAPD group (p<0.05). Furthermore, the children with binaural TFS coding deficits at all three frequencies showed significant differences in the lateralization test results compared with normal children (p<0.05). CONCLUSION The findings of the current study demonstrate that one of the underlying causes of the difficulty understanding speech in noisy environments experienced by a subgroup of children with SusCAPD may be a reduced ability to benefit from binaural TFS information. The study also showed that this reduced ability to use binaural TFS cues was accompanied by reduced binaural processing abilities in the lateralization test, which further supports the presence of binaural temporal processing deficits in this group of children.
Affiliation(s)
- Yones Lotfi, Department of Audiology, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Tayebeh Ahmadi, Department of Audiology, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Abdollah Moossavi, Department of Otolaryngology, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Enayatollah Bakhshi, Department of Statistics, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
|
26
|
Take-Home Trial Comparing Fast Fourier Transformation-Based and Filter Bank-Based Cochlear Implant Speech Coding Strategies. BIOMED RESEARCH INTERNATIONAL 2017; 2017:7915042. [PMID: 29057265 PMCID: PMC5615984 DOI: 10.1155/2017/7915042] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/31/2017] [Revised: 07/11/2017] [Accepted: 08/07/2017] [Indexed: 11/18/2022]
Abstract
Previous studies have demonstrated neither improved nor deteriorated speech intelligibility with the HiResolution Fidelity 120™ speech coding strategy (HiResF120) over the original HiRes strategy. Improved spectral and deteriorated temporal sensitivities have been shown, making it plausible that the beneficial effect in the spectral domain was offset by the worsened temporal sensitivity. We hypothesize that the implementation of fast Fourier transform (FFT) processing, instead of the traditionally used bandpass filters, explains the reduction in temporal sensitivity. In this study, spectral ripple discrimination, temporal modulation detection, and speech intelligibility in noise were assessed in a two-week take-home trial with 3 speech coding strategies: one with conventional bandpass filters (HiRes), one with FFT-based filters (HiRes FFT), and one with FFT-based filters and current steering (HiRes Optima). One participant dropped out due to discomfort with both research programs. The 10 remaining participants performed equally well on all tasks with all three speech coding strategies, implying that FFT processing does not change the ability of CI recipients to discriminate spectral or temporal information, or their speech understanding.
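To make the hypothesized mechanism concrete: block-wise FFT analysis updates each channel's envelope only once per hop, which quantizes temporal detail in a way a sample-by-sample bandpass filterbank does not. A minimal NumPy sketch of the frame-based side (an illustration of the general principle under arbitrary assumed parameters, not the HiRes signal path):

```python
import numpy as np

fs = 16000                         # assumed sample rate (Hz)
t = np.arange(0, 0.05, 1 / fs)     # 50 ms of signal
# 1 kHz carrier with a fast 100 Hz amplitude modulation
x = (1 + 0.8 * np.sin(2 * np.pi * 100 * t)) * np.sin(2 * np.pi * 1000 * t)

# Frame-based FFT analysis: the envelope of a channel is sampled once per hop.
frame, hop = 256, 128              # arbitrary illustrative frame/hop sizes
win = np.hanning(frame)
bin_1khz = int(round(1000 * frame / fs))   # FFT bin nearest the 1 kHz channel
env_fft = [
    np.abs(np.fft.rfft(x[start:start + frame] * win)[bin_1khz])
    for start in range(0, len(x) - frame + 1, hop)
]

# The channel envelope is updated only fs/hop = 125 times per second, so
# modulations near or above that rate are smeared; a time-domain bandpass
# filterbank would instead update its output at the full sample rate fs.
print(len(x), len(env_fft))        # 800 input samples, 5 envelope samples
```

The same trade-off motivates the abstract's hypothesis: larger analysis frames sharpen spectral resolution but coarsen the temporal sampling of each channel's envelope.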
|
27
|
Li F, Bunta F, Tomblin JB. Alveolar and Postalveolar Voiceless Fricative and Affricate Productions of Spanish-English Bilingual Children With Cochlear Implants. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2017; 60:2427-2441. [PMID: 28800372 PMCID: PMC5831615 DOI: 10.1044/2017_jslhr-s-16-0125] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/29/2016] [Revised: 12/19/2016] [Accepted: 04/04/2017] [Indexed: 05/18/2023]
Abstract
PURPOSE This study investigates the production of voiceless alveolar and postalveolar fricatives and affricates by bilingual and monolingual children with hearing loss who use cochlear implants (CIs) and their peers with normal hearing (NH). METHOD Fifty-four children participated in our study, including 12 Spanish-English bilingual CI users (M = 6;0 [years;months]), 12 monolingual English-speaking children with CIs (M = 6;1), 20 bilingual children with NH (M = 6;5), and 10 monolingual English-speaking children with NH (M = 5;10). Picture elicitation targeting /s/, /tʃ/, and /ʃ/ was administered. Repeated-measures analyses of variance comparing group means for frication duration, rise time, and centroid frequency were conducted for the effects of CI use and bilingualism. RESULTS All groups distinguished the target sounds in the 3 acoustic parameters examined. Regarding frication duration and rise time, the Spanish productions of bilingual children with CIs differed from their bilingual peers with NH. English frication duration patterns for bilingual versus monolingual CI users also differed. Centroid frequency was a stronger place cue for children with NH than for children with CIs. CONCLUSION Patterns of fricative and affricate production display effects of bilingualism and diminished signal, yielding unique patterns for bilingual and monolingual CI users.
Affiliation(s)
- Fangfang Li, Department of Psychology, University of Lethbridge, Alberta, Canada
- Ferenc Bunta, Department of Communication Sciences and Disorders, University of Houston, TX
- J. Bruce Tomblin, Department of Communication Sciences and Disorders, The University of Iowa, Iowa City
|
28
|
Mauch H, Boyd P. Electro-acoustic stimulation - an option when hearing aids are not enough. REVISTA MÉDICA CLÍNICA LAS CONDES 2016. [DOI: 10.1016/j.rmclc.2016.11.009] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022] Open
|
29
|
Mauch H, Boyd P. Translation: Electro-acoustic stimulation - an option when hearing aids are not enough. REVISTA MÉDICA CLÍNICA LAS CONDES 2016. [DOI: 10.1016/j.rmclc.2016.11.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
|
30
|
Carlile S, Leung J. The Perception of Auditory Motion. Trends Hear 2016; 20:2331216516644254. [PMID: 27094029 PMCID: PMC4871213 DOI: 10.1177/2331216516644254] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2015] [Revised: 03/22/2016] [Accepted: 03/22/2016] [Indexed: 11/16/2022] Open
Abstract
The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command-and-control as well as entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative changes in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception.
Affiliation(s)
- Simon Carlile, School of Medical Sciences, University of Sydney, NSW, Australia; Starkey Hearing Research Center, Berkeley, CA, USA
- Johahn Leung, School of Medical Sciences, University of Sydney, NSW, Australia
|