1
Borjigin A, Bharadwaj HM. Individual differences elucidate the perceptual benefits associated with robust temporal fine-structure processing. Proc Natl Acad Sci U S A 2025; 122:e2317152121. PMID: 39752517; PMCID: PMC11725926; DOI: 10.1073/pnas.2317152121.
Abstract
The auditory system is unique among sensory systems in its ability to phase lock to and precisely follow very fast cycle-by-cycle fluctuations in the phase of sound-driven cochlear vibrations. Yet, the perceptual role of this temporal fine structure (TFS) code is debated. This fundamental gap is attributable to our inability to experimentally manipulate TFS cues without altering other perceptually relevant cues. Here, we circumnavigated this limitation by leveraging individual differences across 200 participants to systematically compare variations in TFS sensitivity to performance in a range of speech perception tasks. TFS sensitivity was assessed through detection of interaural time/phase differences, while speech perception was evaluated by word identification under noise interference. Results suggest that greater TFS sensitivity is not associated with greater masking release from fundamental-frequency or spatial cues but appears to contribute to resilience against the effects of reverberation. We also found that greater TFS sensitivity is associated with faster response times, indicating reduced listening effort. These findings highlight the perceptual significance of TFS coding for everyday hearing.
Affiliation(s)
- Agudemu Borjigin
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN 47907
- Waisman Center, University of Wisconsin, Madison, WI 53705
- Hari M. Bharadwaj
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA 15213
2
Lad M, Taylor JP, Griffiths TD. Reliable Web-Based Auditory Cognitive Testing: Observational Study. J Med Internet Res 2024; 26:e58444. PMID: 39652871; PMCID: PMC11667740; DOI: 10.2196/58444.
Abstract
BACKGROUND Web-based experimentation, accelerated by the COVID-19 pandemic, has enabled large-scale participant recruitment and data collection. Auditory testing on the web has shown promise but faces challenges such as uncontrolled environments and verifying headphone use. Prior studies have successfully replicated auditory experiments but often involved younger participants, limiting the generalizability to older adults with varying hearing abilities. This study explores the feasibility of conducting reliable auditory cognitive testing using a web-based platform, especially among older adults. OBJECTIVE This study aims to determine whether demographic factors such as age and hearing status influence participation in web-based auditory cognitive experiments and to assess the reproducibility of auditory cognitive measures-specifically speech-in-noise perception and auditory memory (AuM)-between in-person and web-based settings. Additionally, this study aims to examine the relationship between musical sophistication, measured by the Goldsmiths Musical Sophistication Index (GMSI), and auditory cognitive measures across different testing environments. METHODS A total of 153 participants aged 50 to 86 years were recruited from local registries and memory clinics; 58 of these returned for web-based, follow-up assessments. An additional 89 participants from the PREVENT cohort were included in the web-based study, forming a combined sample. Participants completed speech-in-noise perception tasks (Digits-in-Noise and Speech-in-Babble), AuM tests for frequency and amplitude modulation rate, and the GMSI questionnaire. In-person testing was conducted in a soundproof room with standardized equipment, while web-based tests required participants to use headphones in a quiet room via a web-based app. 
The reproducibility of auditory measures was evaluated using Pearson and intraclass correlation coefficients, and statistical analyses assessed relationships between variables across settings. RESULTS Older participants and those with severe hearing loss were underrepresented in the web-based follow-up. The GMSI questionnaire demonstrated the highest reproducibility (r=0.82), while auditory cognitive tasks showed moderate reproducibility (Digits-in-Noise and Speech-in-Babble, r=0.55; AuM tests for frequency, r=0.75, and amplitude modulation rate, r=0.44). There were no significant differences in the correlation between age and auditory measures across in-person and web-based settings (all P>.05). The study replicated previously reported associations between AuM and GMSI scores, as well as sentence-in-noise perception, indicating consistency across testing environments. CONCLUSIONS Web-based auditory cognitive testing is feasible and yields results comparable to in-person testing, especially for questionnaire-based measures like the GMSI. While auditory tasks demonstrated moderate reproducibility, the consistent replication of key associations suggests that web-based testing is a viable alternative for auditory cognition research. However, the underrepresentation of older adults and those with severe hearing loss highlights a need to address barriers to web-based participation. Future work should explore methods to enhance inclusivity, such as remote guided testing, and address factors like digital literacy and equipment standards to improve the representativeness and quality of web-based auditory research.
Affiliation(s)
- Meher Lad
- Translational and Clinical Research Institute, Newcastle University, Newcastle upon Tyne, United Kingdom
- John-Paul Taylor
- Translational and Clinical Research Institute, Newcastle University, Newcastle upon Tyne, United Kingdom
- NIHR Newcastle Biomedical Research Centre, Newcastle University, Newcastle upon Tyne, United Kingdom
- Timothy David Griffiths
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, United Kingdom
- Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom
3
Viswanathan V, Heinz MG, Shinn-Cunningham BG. Impact of reduced spectral resolution on temporal-coherence-based source segregation. J Acoust Soc Am 2024; 156:3862-3876. PMID: 39655945; PMCID: PMC11637563; DOI: 10.1121/10.0034545.
Abstract
Hearing-impaired listeners struggle to understand speech in noise, even when using cochlear implants (CIs) or hearing aids. Successful listening in noisy environments depends on the brain's ability to organize a mixture of sound sources into distinct perceptual streams (i.e., source segregation). In normal-hearing listeners, temporal coherence of sound fluctuations across frequency channels supports this process by promoting grouping of elements belonging to a single acoustic source. We hypothesized that reduced spectral resolution-a hallmark of both electric/CI (from current spread) and acoustic (from broadened tuning) hearing with sensorineural hearing loss-degrades segregation based on temporal coherence. This is because reduced frequency resolution decreases the likelihood that a single sound source dominates the activity driving any specific channel; concomitantly, it increases the correlation in activity across channels. Consistent with our hypothesis, our physiologically inspired computational model of temporal-coherence-based segregation predicts that CI current spread reduces comodulation masking release (CMR; a correlate of temporal-coherence processing) and speech intelligibility in noise. These predictions are consistent with our online behavioral data with simulated CI listening. Our model also predicts smaller CMR with increasing levels of outer-hair-cell damage. These results suggest that reduced spectral resolution relative to normal hearing impairs temporal-coherence-based segregation and speech-in-noise outcomes.
Affiliation(s)
- Vibha Viswanathan
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA
- Michael G Heinz
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana 47907, USA
4
Grassi M, Felline A, Orlandi N, Toffanin M, Goli GP, Senyuva HA, Migliardi M, Contemori G. PSYCHOACOUSTICS-WEB: A free online tool for the estimation of auditory thresholds. Behav Res Methods 2024; 56:7465-7481. PMID: 38709452; PMCID: PMC11362506; DOI: 10.3758/s13428-024-02430-3.
Abstract
PSYCHOACOUSTICS-WEB is an online tool written in JavaScript and PHP that enables the estimation of auditory sensory thresholds via adaptive threshold tracking. The toolbox implements the transformed up-down methods proposed by Levitt (Journal of the Acoustical Society of America, 49, 467-477, 1971) for a set of classic psychoacoustical tasks: frequency, intensity, and duration discrimination of pure tones; duration discrimination and gap detection of noise; and amplitude modulation detection with noise carriers. The toolbox can be used through a common web browser; it works with both fixed and mobile devices, and requires no programming skills. PSYCHOACOUSTICS-WEB is suitable for laboratory, classroom, and online testing and is designed for two main types of users: an occasional user and, above all, an experimenter using the toolbox for their own research. This latter user can create a personal account, customise existing experiments, and share them in the form of direct links to further users (e.g., the participants of a hypothetical experiment). Finally, because data storage is centralised, the toolbox offers the potential for creating a database of auditory skills.
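The transformed up-down logic that such adaptive tracking implements can be sketched as follows. This is an illustrative simulation, not code from PSYCHOACOUSTICS-WEB: the `p_correct` psychometric function and all parameter values are hypothetical stand-ins for a real listener.

```python
import math
import random

def two_down_one_up(start_level=40.0, step=2.0, n_reversals=8, p_correct=None):
    """Illustrative Levitt (1971) 2-down-1-up staircase.

    The level drops after two consecutive correct responses and rises
    after any incorrect one, so the track converges on the level giving
    ~70.7% correct. The listener is simulated by `p_correct`, a function
    mapping stimulus level to probability of a correct response.
    Returns the mean level at the first `n_reversals` direction reversals.
    """
    if p_correct is None:
        # Hypothetical logistic psychometric function centered at level 18.5
        p_correct = lambda level: 1.0 / (1.0 + math.exp(-(level - 18.5) / 3.0))
    level, n_correct, last_move = start_level, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if random.random() < p_correct(level):   # simulated trial outcome
            n_correct += 1
            if n_correct < 2:                    # need two in a row to go down
                continue
            n_correct, move = 0, -1
        else:
            n_correct, move = 0, +1
        if last_move and move != last_move:      # direction change = reversal
            reversals.append(level)
        last_move = move
        level += move * step
    return sum(reversals) / len(reversals)       # threshold estimate
```

The 70.7%-correct convergence point follows from requiring the probability of two consecutive correct responses to equal 0.5 (p² = 0.5); other transformed rules such as 3-down-1-up track different points on the psychometric function.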
Affiliation(s)
- Massimo Grassi
- Department of General Psychology, University of Padua, Via Venezia 8, 35131, Padua, Italy
- Andrea Felline
- Department of Information Engineering, University of Padua, Padua, Italy
- Niccolò Orlandi
- Department of Information Engineering, University of Padua, Padua, Italy
- Mattia Toffanin
- Department of Information Engineering, University of Padua, Padua, Italy
- Gnana Prakash Goli
- Department of Information Engineering, University of Padua, Padua, Italy
- Hurcan Andrei Senyuva
- Department of General Psychology, University of Padua, Via Venezia 8, 35131, Padua, Italy
- Mauro Migliardi
- Department of Information Engineering, University of Padua, Padua, Italy
- Giulio Contemori
- Department of General Psychology, University of Padua, Via Venezia 8, 35131, Padua, Italy
5
Srinivasan N, Patro C, Kansangra R, Trotman A. Comparison of Psychometric Functions Measured Using Remote Testing and Laboratory Testing. Audiol Res 2024; 14:469-478. PMID: 38804463; PMCID: PMC11130947; DOI: 10.3390/audiolres14030039.
Abstract
The use of remote testing to collect behavioral data has been on the rise, especially after the COVID-19 pandemic. Here we present psychometric functions for a commonly used speech corpus, obtained under remote and laboratory testing conditions with young normal-hearing listeners in the presence of different types of maskers. Headphone use in the remote testing group was verified with a Huggins pitch task, supplementing procedures from prior literature. Results revealed no significant differences between thresholds measured in the remote and laboratory conditions for all three masker types. The thresholds obtained in the two conditions were also strongly correlated for a different group of young normal-hearing listeners. These results indicate that reliable auditory threshold measurements can be obtained through remote testing, with stimuli presented at levels both below and above an individual's speech-recognition threshold.
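The Huggins-pitch headphone check relies on a stimulus that is identical white noise at the two ears except for a 180° interaural phase shift in a narrow band; the faint pitch at the band's center frequency is audible only when each ear receives its own channel. A minimal sketch of such a stimulus generator follows; the parameter values are hypothetical, not those used in the study.

```python
import numpy as np

def huggins_stimulus(f0=600.0, rel_bw=0.16, fs=44100, dur=0.5, seed=0):
    """Generate one interval of a hypothetical Huggins-pitch stimulus.

    Both ears get the same white noise, but the right ear's phase is
    inverted within a narrow band around f0. Over headphones this evokes
    a faint pitch at f0; over loudspeakers the channels mix acoustically
    and no pitch emerges, which is what makes it a headphone check.
    Returns an (n_samples, 2) float array scaled to +/-1.
    """
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    noise = rng.standard_normal(n)
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = (freqs > f0 * (1 - rel_bw / 2)) & (freqs < f0 * (1 + rel_bw / 2))
    spec[band] *= -1.0                      # 180-degree interaural phase shift
    right = np.fft.irfft(spec, n)           # right ear: phase-shifted copy
    stereo = np.stack([noise, right], axis=1)
    return stereo / np.max(np.abs(stereo))  # normalize to full scale
```

In a screening task, listeners pick which of several noise intervals contains the hidden tone; chance-level performance suggests loudspeaker playback or channel crosstalk.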
Affiliation(s)
- Nirmal Srinivasan
- Department of Speech-Language Pathology and Audiology, Towson University, Towson, MD 21252, USA; (C.P.); (R.K.); (A.T.)
6
Viswanathan V, Heinz MG, Shinn-Cunningham BG. Impact of Reduced Spectral Resolution on Temporal-Coherence-Based Source Segregation. bioRxiv 2024:2024.03.11.584489. PMID: 38586037; PMCID: PMC10998286; DOI: 10.1101/2024.03.11.584489.
Abstract
Hearing-impaired listeners struggle to understand speech in noise, even when using cochlear implants (CIs) or hearing aids. Successful listening in noisy environments depends on the brain's ability to organize a mixture of sound sources into distinct perceptual streams (i.e., source segregation). In normal-hearing listeners, temporal coherence of sound fluctuations across frequency channels supports this process by promoting grouping of elements belonging to a single acoustic source. We hypothesized that reduced spectral resolution-a hallmark of both electric/CI (from current spread) and acoustic (from broadened tuning) hearing with sensorineural hearing loss-degrades segregation based on temporal coherence. This is because reduced frequency resolution decreases the likelihood that a single sound source dominates the activity driving any specific channel; concomitantly, it increases the correlation in activity across channels. Consistent with our hypothesis, predictions from a physiologically plausible model of temporal-coherence-based segregation suggest that CI current spread reduces comodulation masking release (CMR; a correlate of temporal-coherence processing) and speech intelligibility in noise. These predictions are consistent with our behavioral data with simulated CI listening. Our model also predicts smaller CMR with increasing levels of outer-hair-cell damage. These results suggest that reduced spectral resolution relative to normal hearing impairs temporal-coherence-based segregation and speech-in-noise outcomes.
Affiliation(s)
- Vibha Viswanathan
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213
- Michael G. Heinz
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN 47907
7
Roark CL, Thakkar V, Chandrasekaran B, Centanni TM. Auditory Category Learning in Children With Dyslexia. J Speech Lang Hear Res 2024; 67:974-988. PMID: 38354099; PMCID: PMC11001431; DOI: 10.1044/2023_jslhr-23-00361.
Abstract
PURPOSE Developmental dyslexia is proposed to involve selective procedural memory deficits with intact declarative memory. Recent research in the domain of category learning has demonstrated that adults with dyslexia have selective deficits in Information-Integration (II) category learning, which is proposed to rely on procedural learning mechanisms, but unaffected Rule-Based (RB) category learning, which is proposed to rely on declarative, hypothesis-testing mechanisms. Importantly, learning mechanisms also change across development, with distinct developmental trajectories for both procedural and declarative learning. It is unclear how dyslexia in childhood should influence auditory category learning, a critical skill for speech perception and reading development. METHOD We examined auditory category learning performance and strategies in 7- to 12-year-old children with dyslexia (n = 25; nine females, 16 males) and typically developing controls (n = 25; 13 females, 12 males). Participants learned nonspeech auditory categories of spectrotemporal ripples that could be optimally learned either with RB selective attention to the temporal modulation dimension or with procedural integration of information across spectral and temporal dimensions. We statistically compared performance using mixed-model analyses of variance and identified strategies using decision-bound computational models. RESULTS We found that children with dyslexia have an apparent selective RB category learning deficit, rather than the selective II learning deficit observed in prior work in adults with dyslexia. CONCLUSION These results suggest that the important skill of auditory category learning is impacted in children with dyslexia and that, throughout development, individuals with dyslexia may develop compensatory strategies that preserve declarative learning even as difficulties in procedural learning develop. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.25148519.
Affiliation(s)
- Casey L. Roark
- Department of Communication Science and Disorders, University of Pittsburgh, PA
- Center for the Neural Basis of Cognition, University of Pittsburgh, Carnegie Mellon University, PA
- Vishal Thakkar
- Department of Psychology, Texas Christian University, Fort Worth
- Bharath Chandrasekaran
- Department of Communication Science and Disorders, University of Pittsburgh, PA
- Center for the Neural Basis of Cognition, University of Pittsburgh, Carnegie Mellon University, PA
8
Singh R, Bharadwaj HM. Cortical temporal integration can account for limits of temporal perception: investigations in the binaural system. Commun Biol 2023; 6:981. PMID: 37752215; PMCID: PMC10522716; DOI: 10.1038/s42003-023-05361-5.
Abstract
The auditory system has exquisite temporal coding in the periphery which is transformed into a rate-based code in central auditory structures, like auditory cortex. However, the cortex is still able to synchronize, albeit at lower modulation rates, to acoustic fluctuations. The perceptual significance of this cortical synchronization is unknown. We estimated physiological synchronization limits of cortex (in humans with electroencephalography) and brainstem neurons (in chinchillas) to dynamic binaural cues using a novel system-identification technique, along with parallel perceptual measurements. We find that cortex can synchronize to dynamic binaural cues up to approximately 10 Hz, which aligns well with our measured limits of perceiving dynamic spatial information and utilizing dynamic binaural cues for spatial unmasking, i.e. measures of binaural sluggishness. We also find that the tracking limit for frequency modulation (FM) is similar to the limit for spatial tracking, demonstrating that this sluggish tracking is a more general perceptual limit that can be accounted for by cortical temporal integration limits.
Affiliation(s)
- Ravinderjit Singh
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Hari M Bharadwaj
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, USA
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA