1. Bidelman GM, York A, Pearson C. Neural correlates of phonetic categorization under auditory (phoneme) and visual (grapheme) modalities. bioRxiv [Preprint] 2024:2024.07.24.604940. PMID: 39211275; PMCID: PMC11361091; DOI: 10.1101/2024.07.24.604940
Abstract
We tested whether the neural mechanisms of phonetic categorization are specific to speech sounds or generalize to graphemes (i.e., visual letters) carrying the same phonetic label. Given that linguistic experience shapes categorical processing, and that letter-speech sound matching plays a crucial role in early reading acquisition, we hypothesized that auditory phoneme and visual grapheme tokens representing the same linguistic identity might recruit common neural substrates, despite originating from different sensory modalities. Behavioral and neuroelectric brain responses (ERPs) were acquired as participants categorized stimuli from sound (phoneme) and homologous letter (grapheme) continua, each spanning a /da/ - /ga/ gradient. Behaviorally, listeners were faster and showed stronger categorization for phonemes than for graphemes. At the neural level, multidimensional scaling of the EEG revealed responses that self-organized in a categorical fashion, with tokens clustering within their respective modality beginning ∼150-250 ms after stimulus onset. Source-resolved ERPs further revealed both modality-specific and overlapping brain regions supporting phonetic categorization. Left inferior frontal gyrus and auditory cortex showed stronger responses to sound category members than to phonetically ambiguous tokens, whereas early visual cortices paralleled this categorical organization for graphemes. Auditory and visual categorization also recruited common visual association areas in extrastriate cortex, but in opposite hemispheres (auditory = left; visual = right). Our findings reveal that both auditory and visual sensory cortices support categorical organization of phonetic labels within their respective modalities. However, the partial overlap between phoneme and grapheme processing in occipital areas implies an isomorphic, domain-general mapping for phonetic categories in the dorsal visual system.
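The multidimensional scaling (MDS) step described above, embedding multichannel EEG responses so that tokens cluster by modality, can be illustrated with a minimal sketch. The data, dimensions, and cluster offsets below are hypothetical, and classical (Torgerson) MDS is used as a stand-in for whichever MDS variant the authors applied:

```python
import numpy as np

def classical_mds(dist, n_dims=2):
    """Classical (Torgerson) MDS: coordinates from a pairwise distance matrix."""
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (dist ** 2) @ J               # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)               # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_dims]      # keep the largest ones
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

rng = np.random.default_rng(0)
# Hypothetical trial-averaged EEG patterns (flattened channel x time features)
# for 5 auditory and 5 visual tokens; the offset mimics modality clustering.
aud = rng.normal(0.0, 1.0, (5, 64)) + 3.0
vis = rng.normal(0.0, 1.0, (5, 64)) - 3.0
patterns = np.vstack([aud, vis])
# Pairwise Euclidean distances between response patterns
dist = np.linalg.norm(patterns[:, None, :] - patterns[None, :, :], axis=-1)
coords = classical_mds(dist)
# The first MDS dimension should separate the two modalities
gap = abs(coords[:5, 0].mean() - coords[5:, 0].mean())
```

With well-separated response patterns, the tokens of each modality fall on opposite sides of the first embedding dimension, the kind of self-organized clustering the abstract reports.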
2. Ignatiadis K, Baier D, Barumerli R, Sziller I, Tóth B, Baumgartner R. Cortical signatures of auditory looming bias show cue-specific adaptation between newborns and young adults. Communications Psychology 2024; 2:56. PMID: 38859821; PMCID: PMC11163589; DOI: 10.1038/s44271-024-00105-5
Abstract
Adaptive biases in favor of approaching, or "looming", sounds have been found across ages and species, suggesting an evolutionary origin and universal basis. The human auditory system is well developed at birth, yet spatial hearing abilities continue to develop with age. To disentangle the speculated inborn, evolutionary component of the auditory looming bias from its learned counterpart, we collected high-density electroencephalographic data from human adults and newborns. As distance-motion cues, we manipulated either the sound's intensity or its spectral shape; the latter is pinna-induced and thus prenatally inaccessible. Through cortical source localisation we demonstrated the emergence of the bias in both age groups at the level of Heschl's gyrus. Adults exhibited the bias in both attentive and inattentive states, with differences in amplitude and latency depending on attention and cue type. In contrast to adults, newborns showed the bias only for manipulations of intensity, not spectral cues. We conclude that the looming bias comprises innate components while flexibly incorporating the spatial cues acquired through lifelong exposure.
Affiliation(s)
- Diane Baier: Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria
- Roberto Barumerli: Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria
- István Sziller: Division of Obstetrics and Gynaecology, DBC, Szent Imre University Teaching Hospital, Budapest, Hungary
- Brigitta Tóth: Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
- Robert Baumgartner: Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria
3. Stodt B, Neudek D, Getzmann S, Wascher E, Martin R. Comparing auditory distance perception in real and virtual environments and the role of the loudness cue: A study based on event-related potentials. Hear Res 2024; 444:108968. PMID: 38350176; DOI: 10.1016/j.heares.2024.108968
Abstract
The perception of the distance to a sound source is relevant in many everyday situations, not only in real spaces but also in virtual reality (VR) environments. Where real rooms often reach their limits, VR offers far-reaching possibilities to simulate a wide range of acoustic scenarios. However, plausible reproduction of distance-related cues in virtual room acoustics can be challenging. In the present study, we compared the detection of changes in the distance to a sound source, and its neurocognitive correlates, in a real and a virtual reverberant environment, using an active auditory oddball paradigm and EEG measures. The main goal was to test whether the experiments in the virtual and real environments produced equivalent behavioral and EEG results. Three loudspeakers were placed at egocentric distances of 2 m (near), 4 m (center), and 8 m (far) in front of the participants (N = 20), each 66 cm below ear level. Sequences of 500 ms noise stimuli were presented either from the center position (standards, 80% of trials) or from the near or far position (targets, 10% each). Participants indicated a target position via a joystick response ("near" or "far"). Sounds were emitted either by real loudspeakers in the real environment or rendered for the corresponding positions and played back via headphones in the virtual environment. In addition, within both environments, the loudness of the auditory stimuli was either unaltered (natural loudness) or manipulated so that all three loudspeakers were perceived as equally loud at the listener's position (matched loudness). The EEG analysis focused on the mismatch negativity (MMN), P3a, and P3b as correlates of deviance detection, attentional orientation, and context-updating/stimulus evaluation, respectively. Overall, behavioral data showed that detection of the target positions was reduced within the virtual environment, especially when loudness was matched. Except for slight latency shifts in the virtual environment, the EEG analysis indicated comparable patterns in both environments, independent of loudness settings. Thus, while the neurocognitive processing of changes in distance appears to be similar in virtual and real spaces, a proper representation of loudness appears to be crucial for good task performance in virtual acoustic environments.
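The trial schedule of such an active oddball design (80% standards from the center speaker, 10% near targets, 10% far targets) can be sketched as follows; the trial count and the adjacent-target check are illustrative assumptions, not the authors' exact randomization:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 300
# Trial types and proportions of the active oddball design:
# standard (center, 80%), target near (10%), target far (10%)
types = np.array(["center", "near", "far"])
counts = (np.array([0.8, 0.1, 0.1]) * n_trials).astype(int)
sequence = np.repeat(types, counts)
rng.shuffle(sequence)

def count_adjacent_targets(seq):
    """How often two target (non-center) trials occur back to back."""
    is_target = seq != "center"
    return int(np.sum(is_target[:-1] & is_target[1:]))

n_adjacent = count_adjacent_targets(sequence)
```

In practice, oddball sequences are often re-shuffled or constrained until no two targets are adjacent, so that each deviant is preceded by at least one standard.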
Affiliation(s)
- Benjamin Stodt: Leibniz Research Centre for Working Environment and Human Factors at the TU Dortmund (IfADo), Ardeystraße 67, Dortmund 44139, Germany
- Daniel Neudek: Institute of Communication Acoustics, Ruhr-Universität Bochum, Universitätsstraße 150, Bochum 44780, Germany
- Stephan Getzmann: Leibniz Research Centre for Working Environment and Human Factors at the TU Dortmund (IfADo), Ardeystraße 67, Dortmund 44139, Germany
- Edmund Wascher: Leibniz Research Centre for Working Environment and Human Factors at the TU Dortmund (IfADo), Ardeystraße 67, Dortmund 44139, Germany
- Rainer Martin: Institute of Communication Acoustics, Ruhr-Universität Bochum, Universitätsstraße 150, Bochum 44780, Germany
4. Kim T, Chung M, Jeong E, Cho YS, Kwon OS, Kim SP. Cortical representation of musical pitch in event-related potentials. Biomed Eng Lett 2023; 13:441-454. PMID: 37519879; PMCID: PMC10382469; DOI: 10.1007/s13534-023-00274-y
Abstract
Neural coding of auditory stimulus frequency is well documented; however, the cortical signals and perceptual correlates of pitch have not yet been comprehensively investigated. This study examined the temporal patterns of event-related potentials (ERPs) in response to single tones of pitch chroma, with the assumption that these patterns would be more prominent in musically trained individuals than in non-musically trained individuals. Participants with and without musical training (N = 20) were presented with seven notes of the C major scale (C4, D4, E4, F4, G4, A4, and B4), and whole-brain activity was recorded. A linear regression analysis between ERP amplitude and the seven notes showed that ERP amplitude increased or decreased as pitch frequency increased. Remarkably, these linear correlations were anti-symmetric between the hemispheres: ERP amplitudes over the left and right frontotemporal areas decreased and increased, respectively, as pitch frequency increased. Although the linear slopes were significant in both groups, the musically trained group exhibited a marginally steeper slope, and their ERP amplitudes discriminated pitch frequency at an earlier latency than in the non-musically trained group (~460 ms vs ~630 ms after stimulus onset). Thus, ERP amplitudes in frontotemporal areas varied with pitch frequency, with musically trained participants demonstrating a wider range of amplitudes and inter-hemispheric anti-symmetric patterns. Our findings may provide new insights into cortical processing of musical pitch, revealing anti-symmetric processing of musical pitch between hemispheres that appears more pronounced in musically trained people. Supplementary Information The online version contains supplementary material available at 10.1007/s13534-023-00274-y.
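The reported analysis, a linear regression of ERP amplitude on pitch frequency with opposite-signed slopes over the two hemispheres, can be sketched as below; the amplitudes are simulated for illustration, and only the note frequencies are real:

```python
import numpy as np

# Fundamental frequencies (Hz) of the seven C-major notes C4..B4
notes = np.array([261.63, 293.66, 329.63, 349.23, 392.00, 440.00, 493.88])

rng = np.random.default_rng(1)
# Hypothetical ERP amplitudes (µV) at one left and one right frontotemporal
# electrode: opposite-signed linear trends with pitch, plus measurement noise
left = -0.01 * notes + rng.normal(0.0, 0.2, 7)
right = 0.01 * notes + rng.normal(0.0, 0.2, 7)

# Least-squares slope of amplitude against pitch frequency (µV per Hz)
slope_left = np.polyfit(notes, left, 1)[0]
slope_right = np.polyfit(notes, right, 1)[0]
```

The anti-symmetric pattern the abstract describes corresponds to `slope_left` and `slope_right` taking opposite signs across hemispheres.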
Affiliation(s)
- Taehyoung Kim: Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea
- Miyoung Chung: Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea
- Eunju Jeong: Department of Music and Science for Clinical Practice, College of Interdisciplinary Industrial Studies, Hanyang University, Seoul, Republic of Korea
- Yang Seok Cho: School of Psychology, Korea University, Seoul, Republic of Korea
- Oh-Sang Kwon: Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea
- Sung-Phil Kim: Department of Biomedical Engineering, Ulsan National Institute of Science and Technology, Ulsan, Republic of Korea
5. Meng J, Li X, Zhao Y, Li R, Xu M, Ming D. Modality-Attention Promotes the Neural Effects of Precise Timing Prediction in Early Sensory Processing. Brain Sci 2023; 13:610. PMID: 37190575; DOI: 10.3390/brainsci13040610
Abstract
Precise timing prediction (TP) enables the brain to accurately predict the occurrence of upcoming events on a millisecond timescale, which is fundamental for adaptive behavior. The neural effect of TP within a single sensory modality has been widely studied. However, less is known about how precise TP works when the brain concurrently faces multimodal sensory inputs. Modality attention (MA) is a crucial cognitive function for dealing with the overwhelming information induced by multimodal sensory inputs. It is therefore necessary to investigate whether and how MA influences the neural effects of precise TP. This study used a visual-auditory temporal discrimination task in which MA was allocated to the visual or auditory modality, and TP was manipulated across no timing prediction (NTP), matched timing prediction (MTP), and violated timing prediction (VTP) conditions. Behavioral and electroencephalogram (EEG) data were recorded from 27 subjects; event-related potentials (ERPs), time-frequency distributions of inter-trial coherence (ITC), and event-related spectral perturbation (ERSP) were analyzed. In the visual modality, precise TP led to N1 amplitude variations and 200-400 ms theta ITC; such variations emerged only when the modality was attended. In the auditory modality, MTP yielded larger P2 amplitude and delta ITC than the other TP conditions when the modality was attended, whereas these distinctions disappeared when it was unattended. The results suggest that MA promotes the neural effects of precise TP in early sensory processing, providing further neural evidence for understanding the interactions between TP and MA.
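Inter-trial coherence, one of the measures analyzed above, quantifies the consistency of oscillatory phase across trials (0 = random phase, 1 = perfect phase locking). A minimal single-frequency sketch with simulated trials, using a Hanning-windowed complex ("Morlet-like") wavelet as an assumed time-frequency decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0, n_trials = 250, 5.0, 100        # sampling rate (Hz), theta-band freq, trials
t = np.arange(0, 2.0, 1.0 / fs)

# Simulated single-channel trials: a phase-locked 5 Hz component plus noise
trials = np.sin(2 * np.pi * f0 * t) + rng.normal(0.0, 1.0, (n_trials, t.size))

# Complex estimate at f0: convolution with a Hanning-windowed complex wavelet
cycles = 3
win = np.arange(-cycles / (2 * f0), cycles / (2 * f0), 1.0 / fs)
wavelet = np.exp(2j * np.pi * f0 * win) * np.hanning(win.size)
tf = np.array([np.convolve(tr, wavelet, mode="same") for tr in trials])

# ITC: magnitude of the across-trial mean of unit-length phase vectors (0..1)
itc = np.abs(np.mean(tf / np.abs(tf), axis=0))
```

Because the 5 Hz component has the same phase on every trial, the ITC stays close to 1 across the epoch; for non-phase-locked activity it would hover near zero.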
Affiliation(s)
- Jiayuan Meng: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China; College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072, China
- Xiaoyu Li: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China
- Yingru Zhao: College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072, China
- Rong Li: College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072, China
- Minpeng Xu: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China; College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072, China
- Dong Ming: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China; College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072, China
6. Han JH, Lee J, Lee HJ. The effect of noise on the cortical activity patterns of speech processing in adults with single-sided deafness. Front Neurol 2023; 14:1054105. PMID: 37006498; PMCID: PMC10060629; DOI: 10.3389/fneur.2023.1054105
Abstract
The most common complaint of people with single-sided deafness (SSD) is difficulty understanding speech in noisy environments, yet the neural mechanism of speech-in-noise (SiN) perception in SSD individuals is still poorly understood. In this study, we measured cortical activity in SSD participants during a SiN task and compared it with a speech-in-quiet (SiQ) task. Dipole source analysis revealed left-hemispheric dominance in both the left- and right-sided SSD groups. In contrast to SiN listening, this hemispheric difference was not found during SiQ listening in either group. In addition, cortical activation in right-sided SSD individuals was independent of the location of sound, whereas activation sites in the left-sided SSD group varied with sound location. Examining the neural-behavioral relationship revealed that N1 activation is associated with the duration of deafness and with the SiN perception ability of individuals with SSD. Our findings indicate that SiN listening is processed differently in the brains of left- and right-sided SSD individuals.
Affiliation(s)
- Ji-Hye Han: Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Republic of Korea; Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Republic of Korea
- Jihyun Lee: Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Republic of Korea; Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Republic of Korea
- Hyo-Jeong Lee (corresponding author): Laboratory of Brain and Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Republic of Korea; Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Republic of Korea; Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Chuncheon, Republic of Korea
7. Bidelman GM, Chow R, Noly-Gandon A, Ryan JD, Bell KL, Rizzi R, Alain C. Transcranial Direct Current Stimulation Combined With Listening to Preferred Music Alters Cortical Speech Processing in Older Adults. Front Neurosci 2022; 16:884130. PMID: 35873829; PMCID: PMC9298650; DOI: 10.3389/fnins.2022.884130
Abstract
Emerging evidence suggests transcranial direct current stimulation (tDCS) can improve cognitive performance in older adults. Similarly, music listening may improve arousal and stimulate subsequent performance on memory-related tasks. We examined the synergistic effects of tDCS paired with music listening on auditory neurobehavioral measures to seek causal evidence of short-term plasticity in speech processing among older adults. In a randomized, sham-controlled crossover study, we measured how anodal tDCS over dorsolateral prefrontal cortex (DLPFC) paired with listening to autobiographically salient music alters neural speech processing in older adults, compared to either music listening alone (sham stimulation) or tDCS alone. EEG assays included both frequency-following responses (FFRs) and auditory event-related potentials (ERPs) to trace neuromodulation-related changes at brainstem and cortical levels. Relative to music without tDCS (sham), tDCS alone (without music) modulated the early cortical neural encoding of speech in the time frame of ∼100-150 ms. Whereas tDCS by itself largely produced suppressive effects (i.e., reduced ERP amplitude), concurrent music with tDCS restored responses to music+sham levels. The interpretation of this effect is somewhat ambiguous, however, as the neural modulation could be attributable to a true effect of tDCS or to the presence/absence of music. Still, the combined benefit of tDCS+music (above tDCS alone) was correlated with listeners' education level, suggesting the benefit of neurostimulation paired with music may depend on listener demographics. tDCS changes in speech-FFRs were not observed with DLPFC stimulation. Improvements in working memory from pre to post session were also associated with better speech-in-noise listening skills. Our findings provide new causal evidence that combined tDCS+music, relative to tDCS alone, (i) modulates the early (100-150 ms) cortical encoding of speech and (ii) improves working memory, a cognitive skill that may indirectly bolster noise-degraded speech perception in older listeners.
Affiliation(s)
- Gavin M. Bidelman: Department of Speech, Language and Hearing Sciences, Indiana University Bloomington, Bloomington, IN, United States; School of Communication Sciences and Disorders, The University of Memphis, Memphis, TN, United States
- Ricky Chow: Rotman Research Institute, Baycrest Centre, Toronto, ON, Canada
- Jennifer D. Ryan: Rotman Research Institute, Baycrest Centre, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada; Department of Psychiatry, University of Toronto, Toronto, ON, Canada; Institute of Medical Science, University of Toronto, Toronto, ON, Canada
- Karen L. Bell: Department of Audiology, San José State University, San Jose, CA, United States
- Rose Rizzi: Department of Speech, Language and Hearing Sciences, Indiana University Bloomington, Bloomington, IN, United States; School of Communication Sciences and Disorders, The University of Memphis, Memphis, TN, United States
- Claude Alain: Rotman Research Institute, Baycrest Centre, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada; Institute of Medical Science, University of Toronto, Toronto, ON, Canada; Music and Health Science Research Collaboratory, University of Toronto, Toronto, ON, Canada
8. Wu Z, Bao X, Liu L, Li L. Looming Effects on Attentional Modulation of Prepulse Inhibition Paradigm. Front Psychol 2021; 12:740363. PMID: 34867622; PMCID: PMC8634448; DOI: 10.3389/fpsyg.2021.740363
Abstract
In a hazardous environment, it is fundamentally important to evaluate the motion of sounds successfully. Previous studies demonstrated an "auditory looming bias" in both macaques and humans: looming sounds that increase in intensity are processed preferentially by the brain. In this study on rats, we used a prepulse inhibition (PPI) of the acoustic startle response paradigm to investigate whether a looming sound, with its intrinsic warning value, could draw the animals' attention and dampen the startle reflex caused by the startling noise. We showed that a looming sound with a duration of 120 ms enhanced PPI compared with a receding sound of the same duration; however, at shorter durations/higher change rates (i.e., 30 ms) or longer durations/lower change rates (i.e., more than 160 ms), there was no PPI difference. This indicates that the looming sound-induced PPI enhancement was duration dependent. We further showed that isolation rearing impaired the animals' ability to differentiate looming from receding prepulse stimuli, although it did not abolish their discrimination between looming and stationary prepulse stimuli. This suggests that isolation rearing compromised their assessment of potential threats from approaching and receding objects.
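PPI is conventionally expressed as the percent reduction of the startle response when a prepulse precedes the startling noise. A minimal sketch with hypothetical startle magnitudes (the numbers are illustrative, not the study's data):

```python
import numpy as np

def ppi_percent(startle_alone, prepulse_trials):
    """Prepulse inhibition as percent reduction of the startle response:
    PPI% = 100 * (mean startle-alone - mean prepulse+startle) / mean startle-alone
    """
    base = np.mean(startle_alone)
    return 100.0 * (base - np.mean(prepulse_trials)) / base

# Hypothetical startle magnitudes (arbitrary units): a looming prepulse
# dampens the reflex more than a receding prepulse of the same duration
startle_alone = np.array([100.0, 95.0, 105.0, 98.0, 102.0])
looming = np.array([55.0, 60.0, 58.0, 62.0, 57.0])
receding = np.array([75.0, 80.0, 78.0, 82.0, 77.0])

ppi_loom = ppi_percent(startle_alone, looming)     # stronger inhibition
ppi_recede = ppi_percent(startle_alone, receding)  # weaker inhibition
```

A larger PPI% for the looming prepulse, as in this toy example, is the kind of enhancement the study reports for 120 ms stimuli.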
Affiliation(s)
- Zhemeng Wu: School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Liang Li: School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
9. Ignatiadis K, Baier D, Tóth B, Baumgartner R. Neural Mechanisms Underlying the Auditory Looming Bias. Auditory Perception & Cognition 2021; 4:60-73. PMID: 35494218; PMCID: PMC7612677; DOI: 10.1080/25742442.2021.1977582
Abstract
Our auditory system constantly keeps track of our environment, informing us about our surroundings and warning us of potential threats. The auditory looming bias is an early perceptual phenomenon reflecting higher alertness of listeners to approaching, rather than receding, auditory objects. Experimentally, this sensation has been elicited using both intensity-varying stimuli and spectrally varying stimuli of constant intensity. Following the intensity-based approach, recent research into the cortical mechanisms underlying the looming bias argues for top-down signaling from the prefrontal cortex to the auditory cortex that prioritizes approaching over receding sonic motion. Here we test the generalizability of that finding to spectrally induced looms by re-analyzing previously published data. Our results show the proposed top-down projection, but at time points slightly preceding the motion onset, and thus considered to reflect a bias driven by anticipation. At time points following the motion onset, our findings show a bottom-up bias along the dorsal auditory pathway directed toward the prefrontal cortex.
Affiliation(s)
- Diane Baier: Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria
- Brigitta Tóth: Center for Natural Sciences, Institute of Cognitive Neuroscience and Psychology, Budapest, Hungary; Faculty of Education and Psychology, Eötvös Loránd University, Budapest, Hungary
- Robert Baumgartner: Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria
10. Slow Resting State Fluctuations Enhance Neuronal and Behavioral Responses to Looming Sounds. Brain Topogr 2021; 35:121-141. PMID: 33768383; DOI: 10.1007/s10548-021-00826-4
Abstract
We investigate, both experimentally and with a computational model, how the power of the electroencephalogram (EEG) recorded in human subjects tracks the presentation of sounds whose acoustic intensity increases exponentially (looming) or remains constant (flat). We focus on the link between this EEG tracking response, behavioral reaction times, and the timescale of fluctuations in the resting state, which shows considerable inter-subject variability. Looming sounds generally elicit a sustained power increase in the alpha and beta frequency bands, whereas flat sounds elicit only a transient upsurge at frequencies ranging from 7 to 45 Hz. Likewise, reaction times (RTs) in an audio-tactile task at different latencies from sound onset differ significantly between sound types: RTs decrease with increasing looming intensity, i.e., as the sense of urgency increases, but remain constant with stationary flat intensity. We define the reaction time variation, or "gain", during looming sound presentation, and show that higher RT gains are associated with stronger correlations between EEG power responses and sound intensity. Higher RT gain also entails larger relative power differences between loom and flat in the alpha and beta bands. The full-width-at-half-maximum of the autocorrelation function of the eyes-closed resting-state EEG likewise increases with RT gain. The effects are topographically located over the central and frontal electrodes. A computational model reveals that the increase in stimulus-response correlation in subjects with slower resting-state fluctuations is expected when EEG power fluctuations at each electrode, in a given band, are viewed as simple coupled low-pass filtered noise processes jointly driven by the sound intensity. The model assumes that the strength of stimulus-power coupling is proportional to RT gain in different coupling scenarios, suggesting a mechanism by which slower resting-state fluctuations enhance the EEG response and shorten reaction times.
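The model class described above, band power as a low-pass filtered noise process driven by the sound intensity, can be sketched with a simple Euler-discretized simulation. All parameters (time constant, coupling strength, ramp shape) are illustrative assumptions, not fitted values from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 100, 60.0                      # sampling rate (Hz), duration (s)
n, dt = int(fs * dur), 1.0 / 100

def band_power(tau, coupling, drive, rng):
    """Low-pass filtered noise, optionally driven by stimulus intensity
    (Euler discretization of tau*dp/dt = -p + coupling*drive + noise)."""
    p = np.zeros(n)
    for k in range(n - 1):
        p[k + 1] = p[k] + dt * (-p[k] + coupling * drive[k]) / tau \
                   + np.sqrt(dt) * rng.normal(0.0, 1.0)
    return p

# Looming-like drive: repeated 2 s exponential intensity ramps
t = np.arange(n) * dt
drive = np.exp(np.mod(t, 2.0))

coupled = band_power(tau=0.1, coupling=5.0, drive=drive, rng=rng)
rest = band_power(tau=0.1, coupling=0.0, drive=drive, rng=rng)

# Stimulus-response correlation is high only when power is driven by intensity
corr_coupled = np.corrcoef(coupled, drive)[0, 1]
corr_rest = np.corrcoef(rest, drive)[0, 1]
```

In this toy version, the power process tracks the exponential intensity ramps only when the coupling term is nonzero; the paper builds on this ingredient by additionally tying the coupling strength to each subject's RT gain and resting-state timescale.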