1. Sobczak GG, Zhou X, Moore LE, Bolt DM, Litovsky RY. Cortical mechanisms of across-ear speech integration investigated using functional near-infrared spectroscopy (fNIRS). PLoS One 2024; 19:e0307158. PMID: 39292701; PMCID: PMC11410267; DOI: 10.1371/journal.pone.0307158.
Abstract
This study aimed to investigate integration of alternating speech, a stimulus that classically produces a V-shaped speech intelligibility function with a minimum at 2-6 Hz in typical-hearing (TH) listeners. We further studied how degraded speech impacts intelligibility across alternating rates (2, 4, 8, and 32 Hz) using vocoded speech, either in the right ear or bilaterally, to simulate single-sided deafness with a cochlear implant (SSD-CI) and bilateral CIs (BiCI), respectively. To assess potential cortical signatures of across-ear integration, we recorded activity in the bilateral auditory cortices (AC) and dorsolateral prefrontal cortices (DLPFC) during the task using functional near-infrared spectroscopy (fNIRS). For speech intelligibility, the V-shaped function was reproduced only in the BiCI condition; the TH (with ceiling scores) and SSD-CI conditions had significantly higher scores across all alternating rates compared to the BiCI condition. For fNIRS, the AC and DLPFC exhibited significantly different activity across alternating rates in the TH condition, with altered activity patterns in both regions in the SSD-CI and BiCI conditions. Our results suggest that degraded speech inputs in one or both ears impact across-ear integration, and that different listening strategies were employed for speech integration, manifested as differences in cortical activity across conditions.
Affiliation(s)
- Gabriel G Sobczak
- Waisman Center, University of Wisconsin-Madison, Madison, WI, United States of America
- Xin Zhou
- Waisman Center, University of Wisconsin-Madison, Madison, WI, United States of America
- Liberty E Moore
- Waisman Center, University of Wisconsin-Madison, Madison, WI, United States of America
- Daniel M Bolt
- Department of Educational Psychology, University of Wisconsin-Madison, Madison, WI, United States of America
- Ruth Y Litovsky
- Waisman Center, University of Wisconsin-Madison, Madison, WI, United States of America
- Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, United States of America
- Department of Surgery, Division of Otolaryngology, University of Wisconsin-Madison, Madison, WI, United States of America
2. Grevisse D, Watorek M, Heidlmayr K, Isel F. Processing of complex morphosyntactic structures in French: ERP evidence from native speakers. Brain Cogn 2023; 171:106062. PMID: 37473640; DOI: 10.1016/j.bandc.2023.106062.
Abstract
This event-related brain potentials (ERP) study investigated the neurocognitive mechanisms underlying the auditory processing of verbal complexity in French, illustrated by the prescriptive present subjunctive mode. Using a violation paradigm, ERPs of 32 French native speakers were continuously recorded while they listened to 200 ecological French sentences selected from the INTEFRA oral corpus (2006). Participants performed an offline acceptability judgement task on each sentence, half of which contained a correct present subjunctive verbal agreement (reçoive) and the other half an incorrect present indicative one (peut). Critically, the present subjunctive mode was triggered either by verbs (Ma mère désire que j'apprenne, 'My mother wants me to learn') or by subordinating conjunctions (Pour qu'elle reçoive, 'So that she receives'). We found a delayed anterior negativity (AN), attributable to the length of the verbal forms, and a P600 in the same time window; both were larger for incongruent than for congruent verbal agreement. While the two effects were left lateralized for subordinating conjunctions, they were right lateralized for both structures, with a larger effect for subordinating conjunctions than for verbs. Moreover, our data revealed that the AN/P600 pattern was larger in late positions than in early ones. Taken together, these results suggest that morphosyntactic complexity conveyed by the French subjunctive involves at least two neurocognitive processes, thought to support an initial morphosyntactic analysis (AN) and syntactic revision and repair (posterior P600). These two processes may be modulated as a function of both the element (i.e., subordinating conjunction vs verb) that triggers the subjunctive mode and the moment at which this element occurs during sentence processing.
Affiliation(s)
- Daniel Grevisse
- Université Paris 8, Laboratoire Structures formelles du langage, CNRS, UMR 7023, France
- Marzena Watorek
- Université Paris 8, Laboratoire Structures formelles du langage, CNRS, UMR 7023, France
- Karin Heidlmayr
- Université Paris Nanterre, Laboratoire Modèles, Dynamiques, Corpus, CNRS, UMR 7114, France
- Frédéric Isel
- Université Paris Nanterre, Laboratoire Modèles, Dynamiques, Corpus, CNRS, UMR 7114, France
3. Ding R, Tang H, Liu Y, Yin Y, Yan B, Jiang Y, Toussaint PJ, Xia Y, Evans AC, Zhou D, Hao X, Lu J, Yao D. Therapeutic effect of tempo in Mozart's "Sonata for two pianos" (K. 448) in patients with epilepsy: An electroencephalographic study. Epilepsy Behav 2023; 145:109323. PMID: 37356223; DOI: 10.1016/j.yebeh.2023.109323.
Abstract
BACKGROUND: Mozart's "Sonata for two pianos" (Köchel listing 448) has proven effective as music therapy for patients with epilepsy, but which of its features drive the therapeutic effect is poorly understood. This study explored whether the tempo of the piece is important for its therapeutic effect. METHODS: We measured the effects of the tempo of Mozart's sonata on clinical and electroencephalographic parameters of 147 patients with epilepsy who listened to the music at slow, original, or accelerated speed. As a control, patients listened to Haydn's Symphony no. 94 at original speed. RESULTS: Listening to Mozart's piece at original speed significantly reduced the number of interictal epileptic discharges. It decreased beta power in the frontal, parietal, and occipital regions, and it decreased functional connectivity among frontal, parietal, temporal, and occipital brain regions; both findings suggest increased auditory attention and reduced visual attention. No such effects were observed after patients listened to the slow or fast version of Mozart's piece, or to Haydn's symphony at normal speed. CONCLUSIONS: These results suggest that Mozart's "Sonata for two pianos" may exert therapeutic effects by regulating attention when played at its original tempo, but not slower or faster. These findings may help guide the design and optimization of music therapy against epilepsy.
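The beta-power reduction reported in this abstract rests on standard EEG spectral analysis. The sketch below is an illustration only, not the authors' pipeline: the sampling rate, band edges, and synthetic test signal are all assumed values. It estimates band power from a single channel with Welch's method.

```python
# Illustrative band-power estimate for one EEG channel (assumed parameters).
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Mean power spectral density of x within [lo, hi] Hz (Welch estimate)."""
    freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 2 * fs))
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

fs = 250  # Hz, assumed EEG sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic signal: a 20 Hz (beta-band) oscillation plus broadband noise
x = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(len(t))
beta = band_power(x, fs, 15, 25)   # band edges chosen for illustration
alpha = band_power(x, fs, 8, 12)
print(beta > alpha)  # the 20 Hz component dominates the beta band
```

In a real analysis this would be computed per channel and per condition (e.g., before vs. during listening) and compared statistically across patients.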
Affiliation(s)
- Rui Ding
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, Sichuan, China; Center for Information in Medicine, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, Sichuan, China; Montreal Neurological Institute, McGill University, Montreal, QC, Canada, H3A 2B4
- Huajuan Tang
- Department of Neurology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan, China; Department of Neurology, 363 Hospital, Chengdu 610041, Sichuan, China
- Ying Liu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, Sichuan, China; Center for Information in Medicine, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, Sichuan, China
- Yitian Yin
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, Sichuan, China; Center for Information in Medicine, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, Sichuan, China
- Bo Yan
- Department of Neurology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan, China
- Yingqi Jiang
- Department of Neurology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan, China
- Paule-J Toussaint
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada, H3A 2B4
- Yang Xia
- Center for Information in Medicine, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, Sichuan, China
- Alan C Evans
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada, H3A 2B4
- Dong Zhou
- Department of Neurology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan, China
- Xiaoting Hao
- Department of Neurology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan, China
- Jing Lu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, Sichuan, China; Center for Information in Medicine, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, Sichuan, China
- Dezhong Yao
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, Sichuan, China; Center for Information in Medicine, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, Sichuan, China
4. Osawa SI, Suzuki K, Asano E, Ukishiro K, Agari D, Kakinuma K, Kochi R, Jin K, Nakasato N, Tominaga T. Causal Involvement of Medial Inferior Frontal Gyrus of Non-dominant Hemisphere in Higher Order Auditory Perception: A single case study. Cortex 2023; 163:57-65. PMID: 37060887; DOI: 10.1016/j.cortex.2023.02.007.
Abstract
The medial side of the operculum is invisible from the lateral surface of the cerebral cortex, and its functions remain largely unexplored by direct evidence. Non-invasive and invasive studies have established functions of the peri-sylvian area, including the inferior frontal gyrus (IFG) and superior temporal gyrus within the language-dominant hemisphere, in semantic processing during verbal communication. Within the non-dominant hemisphere, however, evidence of function has been largely limited to pitch or prosody processing. Here we add direct evidence for the functions of the non-dominant hemisphere: the causal involvement of the medial IFG in subjective auditory perception, which is affected by the context of the condition and can thus be regarded as a contribution to higher order auditory perception. This phenomenon was clearly distinguished from absolute and invariant pitch perception, which is regarded as lower order auditory perception. Electrical stimulation of the medial surface of the pars triangularis of the IFG in the non-dominant hemisphere, via a depth electrode in an epilepsy patient, rapidly and reproducibly elicited perception of pitch changes of auditory input. Pitches were perceived as either higher or lower than those given without stimulation, and there was no selectivity for sound type. The patient perceived sounds as higher when she had greater control over the situation, with her eyes open and self-cues, and as lower with her eyes closed and investigator-cues. Time-frequency analysis of electrocorticography signals during auditory naming demonstrated medial IFG activation, characterized by low-gamma band augmentation during her own vocal response. The overall evidence provides a neural substrate for altered perception of vocal tones according to the context of the condition.
5. Liao HI, Fujihira H, Yamagishi S, Yang YH, Furukawa S. Seeing an Auditory Object: Pupillary Light Response Reflects Covert Attention to Auditory Space and Object. J Cogn Neurosci 2023; 35:276-290. PMID: 36306257; DOI: 10.1162/jocn_a_01935.
Abstract
Attention to the relevant object and space is the brain's strategy to effectively process the information of interest in complex environments with limited neural resources. Numerous studies have documented how attention is allocated in the visual domain, whereas the nature of attention in the auditory domain has been much less explored. Here, we show that the pupillary light response can serve as a physiological index of auditory attentional shift and can also be used to probe the relationship between space-based and object-based attention. Experiments demonstrated that the pupillary response corresponds to the luminance condition where the attended auditory object (e.g., a spoken sentence) was located, regardless of whether attention was directed by a spatial (left or right) or nonspatial (e.g., the gender of the talker) cue, and regardless of whether the sound was presented via headphones or loudspeakers. These effects on the pupillary light response could not be accounted for as a consequence of small (although observable) biases in gaze position drift. The overall results imply a unified audiovisual representation of spatial attention. Auditory object-based attention contains the space representation of the attended auditory object, even when the object is oriented without explicit spatial guidance.
Affiliation(s)
- Hsin-I Liao
- NTT Communication Science Laboratories, Japan
- Haruna Fujihira
- NTT Communication Science Laboratories, Japan; Japan Society for the Promotion of Science
6. Curtis MT, Ren X, Coffman BA, Salisbury DF. Attentional M100 gain modulation localizes to auditory sensory cortex and is deficient in first-episode psychosis. Hum Brain Mapp 2022; 44:218-228. PMID: 36073535; PMCID: PMC9783396; DOI: 10.1002/hbm.26067.
Abstract
Selective attention is impaired in first-episode psychosis (FEP). Selective attention effects can be detected during auditory tasks as increased sensory activity. We previously reported that scalp-measured electroencephalography N100 enhancement is reduced in FEP. Here, we localized magnetoencephalography (MEG) M100 source activity within the auditory cortex, making novel use of the Human Connectome Project multimodal parcellation (HCP-MMP) to identify the precise auditory cortical areas involved in attention modulation and its impairment in FEP. MEG was recorded from 27 FEP and 31 matched healthy controls (HC) while individuals either ignored frequent standard and rare oddball tones while watching a silent movie, or attended to the tones, pressing a button to oddballs. Because the M100 arises mainly in the auditory cortices, MEG activity during the M100 interval was projected to the auditory sensory cortices defined by the HCP-MMP (A1, lateral belt, and parabelt parcels). FEP had less auditory sensory cortex M100 activity in both conditions. In addition, there was a significant interaction between group and attention: HC enhanced source activity with attention, but FEP did not. These results demonstrate deficits in both sensory processing and attentional modulation of the M100 in FEP. Novel use of the HCP-MMP revealed the precise cortical areas underlying attention modulation of auditory sensory activity in healthy individuals and its impairment in FEP. The sensory reduction and attention modulation impairment indicate local and systems-level pathophysiology proximal to disease onset that may be critical for etiology. Further, M100 and N100 enhancement may serve as outcome variables for targeted intervention to improve attention in early psychosis.
Affiliation(s)
- Mark T. Curtis
- Clinical Neurophysiology Research Laboratory, Western Psychiatric Hospital, Department of Psychiatry, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
- Xi Ren
- Clinical Neurophysiology Research Laboratory, Western Psychiatric Hospital, Department of Psychiatry, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
- Brian A. Coffman
- Clinical Neurophysiology Research Laboratory, Western Psychiatric Hospital, Department of Psychiatry, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
- Dean F. Salisbury
- Clinical Neurophysiology Research Laboratory, Western Psychiatric Hospital, Department of Psychiatry, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
7. Gugnowska K, Novembre G, Kohler N, Villringer A, Keller PE, Sammler D. Endogenous sources of interbrain synchrony in duetting pianists. Cereb Cortex 2022; 32:4110-4127. PMID: 35029645; PMCID: PMC9476614; DOI: 10.1093/cercor/bhab469.
Abstract
When people interact with each other, their brains synchronize. However, it remains unclear whether interbrain synchrony (IBS) is functionally relevant for social interaction or stems from exposure of individual brains to identical sensorimotor information. To disentangle these views, the current dual-EEG study investigated amplitude-based IBS in pianists jointly performing duets containing a silent pause followed by a tempo change. First, we manipulated the similarity of the anticipated tempo change and measured IBS during the pause, hence, capturing the alignment of purely endogenous, temporal plans without sound or movement. Notably, right posterior gamma IBS was higher when partners planned similar tempi, it predicted whether partners' tempi matched after the pause, and it was modulated only in real, not in surrogate pairs. Second, we manipulated the familiarity with the partner's actions and measured IBS during joint performance with sound. Although sensorimotor information was similar across conditions, gamma IBS was higher when partners were unfamiliar with each other's part and had to attend more closely to the sound of the performance. These combined findings demonstrate that IBS is not merely an epiphenomenon of shared sensorimotor information but can also hinge on endogenous, cognitive processes crucial for behavioral synchrony and successful social interaction.
Affiliation(s)
- Katarzyna Gugnowska
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main 60322, Germany
- Giacomo Novembre
- Neuroscience of Perception and Action Lab, Italian Institute of Technology (IIT), Rome 00161, Italy
- Natalie Kohler
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main 60322, Germany
- Arno Villringer
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Peter E Keller
- Department of Clinical Medicine, Center for Music in the Brain, Aarhus University, Aarhus 8000, Denmark
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW 2751, Australia
- Daniela Sammler
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main 60322, Germany
8. Gande N. Neural Phenomenon in Musicality: The Interpretation of Dual-Processing Modes in Melodic Perception. Front Hum Neurosci 2022; 16:823325. PMID: 35496061; PMCID: PMC9051476; DOI: 10.3389/fnhum.2022.823325.
Abstract
Creativity in music performance arises from performance practices and cultural motifs, from the communication of the human body with the instrument it plays, and from individual performers' perceptual, motor, and cognitive abilities, which yield varied musical interpretations of the same piece or melodic line. A player's musical and artistic execution, as well as the product of this phenomenon, can become determinants of a creative mental state. With advances in neurocognitive measures, artistic intuition and execution have become a growing focus in understanding the creative thought processes of human behavior, particularly in improvising artists. This article discusses how concurrent spontaneous (Type-1) and controlled (Type-2) processing modes may be apparent in the way non-improvising artists perceive melodic lines in music performance. Elucidating cortical-subcortical activity within the dual-process model may extend neural-correlate research to non-improvising musicians. These interactions may open new possibilities for expanding the repertoire of executive functions, creativity, and the coordinated activity of cortical-subcortical regions that regulate the free flow of artistic ideas and expressive spontaneity in future neuromusical research.
Affiliation(s)
- Nathazsha Gande
- Department of A-Levels, HELP University, Kuala Lumpur, Malaysia
9. Ylinen A, Wikman P, Leminen M, Alho K. Task-dependent cortical activations during selective attention to audiovisual speech. Brain Res 2022; 1775:147739. PMID: 34843702; DOI: 10.1016/j.brainres.2021.147739.
Abstract
Selective listening to speech depends on widespread networks of the brain, but how the involvement of different neural systems in speech processing is affected by factors such as the task performed by a listener and speech intelligibility remains poorly understood. We used functional magnetic resonance imaging to systematically examine the effects that performing different tasks has on neural activations during selective attention to continuous audiovisual speech in the presence of task-irrelevant speech. Participants viewed audiovisual dialogues and attended either to the semantic or the phonological content of speech, or ignored speech altogether and performed a visual control task. The tasks were factorially combined with good and poor auditory and visual speech qualities. Selective attention to speech engaged superior temporal regions and the left inferior frontal gyrus regardless of the task. Frontoparietal regions implicated in selective auditory attention to simple sounds (e.g., tones, syllables) were not engaged by the semantic task, suggesting that this network may not be as crucial when attending to continuous speech. The medial orbitofrontal cortex, implicated in social cognition, was most activated by the semantic task. Activity levels during the phonological task in the left prefrontal, premotor, and secondary somatosensory regions had a distinct temporal profile as well as the highest overall activity, possibly relating to the role of the dorsal speech processing stream in sub-lexical processing. Our results demonstrate that the task type influences neural activations during selective attention to speech and emphasize the importance of ecologically valid experimental designs.
Affiliation(s)
- Artturi Ylinen
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Patrik Wikman
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Department of Neuroscience, Georgetown University, Washington D.C., USA
- Miika Leminen
- Analytics and Data Services, HUS Helsinki University Hospital, Helsinki, Finland
- Kimmo Alho
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
10. Finkl T, Hahne A, Friederici AD, Gerber J, Mürbe D, Anwander A. Language Without Speech: Segregating Distinct Circuits in the Human Brain. Cereb Cortex 2021; 30:812-823. PMID: 31373629; DOI: 10.1093/cercor/bhz128.
Abstract
Language is a fundamental part of human cognition. The question of whether language is processed independently of speech, however, is still heavily discussed. The absence of speech in deaf signers offers the opportunity to disentangle language from speech in the human brain. Using probabilistic tractography, we compared brain structural connectivity of adult deaf signers who had learned sign language early in life to that of matched hearing controls. Quantitative comparison of the connectivity profiles revealed that the core language tracts did not differ between signers and controls, confirming that language is independent of speech. In contrast, pathways involved in the production and perception of speech displayed lower connectivity in deaf signers compared to hearing controls. These differences were located in tracts towards the left pre-supplementary motor area and the thalamus when seeding in Broca's area, and in ipsilateral parietal areas and the precuneus with seeds in left posterior temporal regions. Furthermore, the interhemispheric connectivity between the auditory cortices was lower in the deaf than in the hearing group, underlining the importance of the transcallosal connection for early auditory processes. The present results provide evidence for a functional segregation of the neural pathways for language and speech.
Affiliation(s)
- Theresa Finkl
- Saxonian Cochlear Implant Centre, Phoniatrics and Audiology, Faculty of Medicine, Technische Universität Dresden, Fetscherstraße 74, Dresden, Germany
- Anja Hahne
- Saxonian Cochlear Implant Centre, Phoniatrics and Audiology, Faculty of Medicine, Technische Universität Dresden, Fetscherstraße 74, Dresden, Germany
- Angela D Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Johannes Gerber
- Neuroradiology, Faculty of Medicine, Technische Universität Dresden, Dresden, Germany
- Dirk Mürbe
- Department of Audiology and Phoniatrics, Charité-Universitätsmedizin, Berlin, Germany
- Alfred Anwander
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
11. Mamashli F, Huang S, Khan S, Hämäläinen MS, Ahlfors SP, Ahveninen J. Distinct Regional Oscillatory Connectivity Patterns During Auditory Target and Novelty Processing. Brain Topogr 2020; 33:477-488. PMID: 32441009; DOI: 10.1007/s10548-020-00776-3.
Abstract
Auditory attention allows us to focus on relevant target sounds in the acoustic environment while maintaining the capability to orient to unpredictable (novel) sound changes. An open question is whether orienting to expected vs. unexpected auditory events is governed by anatomically distinct attention pathways, respectively, or by differing communication patterns within a common system. To address this question, we applied the recently developed PeSCAR analysis method to evaluate spectrotemporal functional connectivity patterns across subregions of broader cortical regions of interest (ROIs) in magnetoencephalography data obtained during a cued auditory attention task. Subjects were instructed to detect a predictable harmonic target sound embedded among standard tones in one ear and to ignore the standard tones and occasional unpredictable novel sounds presented in the opposite ear. Phase coherence of estimated source activity was calculated between subregions of superior temporal, frontal, inferior parietal, and superior parietal cortex ROIs. Functional connectivity was stronger in response to target than novel stimuli between left superior temporal and left parietal ROIs and between left frontal and right parietal ROIs, with the largest effects observed in the beta band (15-35 Hz). In contrast, functional connectivity was stronger in response to novel than target stimuli in inter-hemispheric connections between left and right frontal ROIs, observed in early time windows in the alpha band (8-12 Hz). Our findings suggest that auditory processing of expected target vs. unexpected novel sounds involves different spatially, temporally, and spectrally distributed oscillatory connectivity patterns across temporal, parietal, and frontal areas.
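The phase coherence described in this abstract is closely related to the phase-locking value (PLV) commonly computed between narrow-band signals. The sketch below is illustrative only, not the PeSCAR implementation; the signals, frequencies, and phase lag are synthetic assumptions. It shows the core computation via the Hilbert transform:

```python
# Illustrative phase-locking value between two narrow-band signals.
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value: 1 = constant phase relation, ~0 = unrelated."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * phase_diff)))

fs = 250  # Hz, assumed sampling rate
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
base = np.sin(2 * np.pi * 25 * t)           # 25 Hz beta-band oscillation
locked = np.sin(2 * np.pi * 25 * t + 0.8)   # constant phase lag: high PLV
noise = rng.standard_normal(len(t))         # unrelated signal: low PLV
print(plv(base, locked) > plv(base, noise))
```

In source-space analyses such as the one above, this quantity would be computed between band-pass-filtered source time courses of ROI subregions and compared across stimulus conditions.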
Affiliation(s)
- Fahimeh Mamashli
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, 02129, USA
- Samantha Huang
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, 02129, USA
- Sheraz Khan
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, 02129, USA
- Matti S Hämäläinen
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, 02129, USA
- Seppo P Ahlfors
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, 02129, USA
- Jyrki Ahveninen
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, 02129, USA
12. Leminen A, Verwoert M, Moisala M, Salmela V, Wikman P, Alho K. Modulation of Brain Activity by Selective Attention to Audiovisual Dialogues. Front Neurosci 2020; 14:436. [PMID: 32477054] [PMCID: PMC7235384] [DOI: 10.3389/fnins.2020.00436] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3]
Abstract
In real-life noisy situations, we can selectively attend to conversations in the presence of irrelevant voices, but the neurocognitive mechanisms at work in such natural listening situations remain largely unexplored. Previous research has shown distributed activity in the mid superior temporal gyrus (STG) and sulcus (STS) while listening to speech and human voices, in the posterior STS and fusiform gyrus when combining auditory, visual, and linguistic information, as well as in left-hemisphere temporal and frontal cortical areas during comprehension. In the present functional magnetic resonance imaging (fMRI) study, we investigated how selective attention modulates neural responses to naturalistic audiovisual dialogues. Our healthy adult participants (N = 15) selectively attended to videotaped dialogues between a man and a woman in the presence of irrelevant continuous speech in the background. We modulated the auditory quality of the dialogues with noise vocoding and their visual quality by masking speech-related facial movements. Both increased auditory quality and increased visual quality were associated with bilateral activity enhancements in the STG/STS. In addition, decreased audiovisual stimulus quality elicited enhanced fronto-parietal activity, presumably reflecting increased attentional demands. Finally, attention to the dialogues, relative to a control task in which a fixation cross was attended and the dialogue ignored, yielded enhanced activity in the left planum polare, the angular gyrus, the right temporal pole, as well as in the orbitofrontal/ventromedial prefrontal cortex and posterior cingulate gyrus. Our findings suggest that naturalistic conversations effectively engage participants and reveal brain networks related to social perception in addition to speech and semantic processing networks.
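The noise vocoding used here to degrade dialogue audio can be sketched in a minimal form. This is a hedged illustration: the band count, frequency edges, FFT brick-wall filtering, and toy input signal are assumptions for the demo, not the study's actual vocoder parameters:

```python
import numpy as np

def bandpass_fft(x, fs, lo, hi):
    """Crude FFT brick-wall band-pass filter (illustrative only)."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    spec[(freqs < lo) | (freqs >= hi)] = 0
    return np.fft.irfft(spec, n=len(x))

def noise_vocode(speech, fs, n_bands=4, f_lo=100.0, f_hi=4000.0):
    """Minimal noise vocoder: split the signal into log-spaced bands,
    extract each band's envelope by rectification and low-pass
    filtering, then use it to modulate band-limited noise."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = bandpass_fft(speech, fs, lo, hi)
        env = bandpass_fft(np.abs(band), fs, 0.0, 50.0)  # envelope < 50 Hz
        carrier = bandpass_fft(rng.standard_normal(len(speech)), fs, lo, hi)
        out += np.clip(env, 0, None) * carrier
    return out

fs = 16000
t = np.arange(fs) / fs                       # 1 s of toy "speech"
speech = np.sin(2 * np.pi * 300 * t) * (1 + np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(speech, fs)
```

The output preserves the slow amplitude envelope in each band while replacing spectral fine structure with noise; fewer bands yield poorer intelligibility.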
Affiliation(s)
- Alina Leminen
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Cognitive Science, Department of Digital Humanities, Helsinki Centre for Digital Humanities (Heldig), University of Helsinki, Helsinki, Finland
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Center for Cognition and Decision Making, Institute of Cognitive Neuroscience, National Research University – Higher School of Economics, Moscow, Russia
- Maxime Verwoert
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Mona Moisala
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Viljami Salmela
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Patrik Wikman
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Kimmo Alho
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
13. Shestopalova LB, Petropavlovskaia EA, Semenova VV, Nikitin NI. Lateralization of brain responses to auditory motion: A study using single-trial analysis. Neurosci Res 2020; 162:31-44. [PMID: 32001322] [DOI: 10.1016/j.neures.2020.01.007] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8]
Abstract
The present study investigates hemispheric asymmetry of the ERPs and low-frequency oscillatory responses evoked in both hemispheres by sound stimuli with delayed motion onset. EEG was recorded for three patterns of sound motion produced by changes in interaural time differences. Event-related spectral perturbation (ERSP) and inter-trial phase coherence (ITC) were computed from the time-frequency decomposition of the EEG signals. The participants either read books of their choice (passive listening) or indicated the perceived sound trajectories using a graphic tablet (active listening). Our goal was to find out whether the lateralization of the motion-onset response (MOR) and of the oscillatory responses to sound motion was more consistent with the right-hemispheric dominance, contralateral, or neglect model of interhemispheric asymmetry. Apparent dominance of the right hemisphere was found only in the ERSP responses. Stronger contralaterality of the left hemisphere, corresponding to the "neglect model" of asymmetry, was shown by the MOR components and by the phase coherence of the delta-alpha oscillations. Neither velocity nor attention consistently changed the interhemispheric asymmetry of the MOR or of the oscillatory responses. Our findings demonstrate how the lateralization pattern of the MOR potential was interrelated with that of the motion-related single-trial measures.
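The ERSP and ITC measures computed in this study can be sketched with a numpy-only Morlet-wavelet time-frequency decomposition. Simulated data and all parameters (sampling rate, wavelet cycles, baseline window) are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

def morlet_tf(data, fs, freqs, n_cycles=5.0):
    """Convolve (n_trials, n_times) data with complex Morlet wavelets.
    Returns complex coefficients of shape (n_trials, n_freqs, n_times)."""
    n_trials, n_times = data.shape
    out = np.empty((n_trials, len(freqs), n_times), dtype=complex)
    for fi, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)
        tw = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
        w = np.exp(2j * np.pi * f * tw) * np.exp(-tw**2 / (2 * sigma_t**2))
        w /= np.sqrt(np.sum(np.abs(w) ** 2))   # unit-energy wavelet
        for tr in range(n_trials):
            out[tr, fi] = np.convolve(data[tr], w, mode="same")
    return out

rng = np.random.default_rng(1)
fs, n_trials, n_times = 250, 60, 500           # 2 s epochs at 250 Hz
t = np.arange(n_times) / fs
# A 10 Hz response, phase-locked across trials, starting at 0.4 s:
signal = np.where(t >= 0.4, np.cos(2 * np.pi * 10.0 * t), 0.0)
data = signal + rng.standard_normal((n_trials, n_times))

tf = morlet_tf(data, fs, freqs=[10.0])
power = np.abs(tf) ** 2
baseline = power[..., :100].mean()             # 0-0.4 s pre-onset baseline
ersp = 10 * np.log10(power.mean(axis=0) / baseline)  # dB change vs. baseline
itc = np.abs(np.mean(tf / np.abs(tf), axis=0))       # in [0, 1]
```

ERSP captures trial-averaged power changes relative to baseline, while ITC captures how consistently phase aligns across trials; a phase-locked response raises both after its onset.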
Affiliation(s)
- L B Shestopalova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, St. Petersburg 199034, Russia
- E A Petropavlovskaia
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, St. Petersburg 199034, Russia
- V V Semenova
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, St. Petersburg 199034, Russia
- N I Nikitin
- Pavlov Institute of Physiology, Russian Academy of Sciences, Makarova emb. 6, St. Petersburg 199034, Russia
14. Nani A, Manuello J, Mancuso L, Liloia D, Costa T, Cauda F. The Neural Correlates of Consciousness and Attention: Two Sister Processes of the Brain. Front Neurosci 2019; 13:1169. [PMID: 31749675] [PMCID: PMC6842945] [DOI: 10.3389/fnins.2019.01169] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0]
Abstract
During the last three decades our understanding of the brain processes underlying consciousness and attention has significantly improved, mainly because of the advances in functional neuroimaging techniques. Still, caution is needed for the correct interpretation of these empirical findings, as both research and theoretical proposals are hampered by a number of conceptual difficulties. We review some of the most significant theoretical issues concerning the concepts of consciousness and attention in the neuroscientific literature, and put forward the implications of these reflections for a coherent model of the neural correlates of these brain functions. Even though consciousness and attention have an overlapping pattern of neural activity, they should be considered as essentially separate brain processes. The contents of phenomenal consciousness are supposed to be associated with the activity of multiple synchronized networks in the temporo-parietal-occipital areas. Only subsequently, attention, supported by fronto-parietal networks, enters the process of consciousness to provide focal awareness of specific features of reality.
Affiliation(s)
- Andrea Nani
- Focus Lab, Department of Psychology, University of Turin, Turin, Italy
- GCS-FMRI, Koelliker Hospital and Department of Psychology, University of Turin, Turin, Italy
- Jordi Manuello
- Focus Lab, Department of Psychology, University of Turin, Turin, Italy
- GCS-FMRI, Koelliker Hospital and Department of Psychology, University of Turin, Turin, Italy
- Lorenzo Mancuso
- Focus Lab, Department of Psychology, University of Turin, Turin, Italy
- GCS-FMRI, Koelliker Hospital and Department of Psychology, University of Turin, Turin, Italy
- Donato Liloia
- Focus Lab, Department of Psychology, University of Turin, Turin, Italy
- GCS-FMRI, Koelliker Hospital and Department of Psychology, University of Turin, Turin, Italy
- Tommaso Costa
- Focus Lab, Department of Psychology, University of Turin, Turin, Italy
- GCS-FMRI, Koelliker Hospital and Department of Psychology, University of Turin, Turin, Italy
- Neuroscience Institute of Turin, University of Turin, Turin, Italy
- Franco Cauda
- Focus Lab, Department of Psychology, University of Turin, Turin, Italy
- GCS-FMRI, Koelliker Hospital and Department of Psychology, University of Turin, Turin, Italy
- Neuroscience Institute of Turin, University of Turin, Turin, Italy
15. Ozmeral EJ, Eddins DA, Eddins AC. Electrophysiological responses to lateral shifts are not consistent with opponent-channel processing of interaural level differences. J Neurophysiol 2019; 122:737-748. [PMID: 31242052] [DOI: 10.1152/jn.00090.2019] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2]
Abstract
Cortical encoding of auditory space relies on two major peripheral cues, interaural time difference (ITD) and interaural level difference (ILD) of the sounds arriving at a listener's ears. In much of the precortical auditory pathway, ITD and ILD cues are processed independently, and it is assumed that cue integration is a higher order process. However, there remains debate on how ITDs and ILDs are encoded in the cortex and whether they share a common mechanism. The present study used electroencephalography (EEG) to measure evoked cortical potentials from narrowband noise stimuli with imposed binaural cue changes. Previous studies have similarly tested ITD shifts to demonstrate that neural populations broadly favor one spatial hemifield over the other, which is consistent with an opponent-channel model that computes the relative activity between broadly tuned neural populations. However, it is still a matter of debate whether the same coding scheme applies to ILDs and, if so, whether processing the two binaural cues is distributed across similar regions of the cortex. The results indicate that ITD and ILD cues have similar neural signatures with respect to the monotonic responses to shift magnitude; however, the direction of the shift did not elicit responses equally across cues. Specifically, ITD shifts evoked greater responses for outward than inward shifts, independently of the spatial hemifield of the shift, whereas ILD-shift responses were dependent on the hemifield in which the shift occurred. Active cortical structures showed only minor overlap between responses to cues, suggesting the two are not represented by the same pathway.
NEW & NOTEWORTHY: Interaural time differences (ITDs) and interaural level differences (ILDs) are critical to locating auditory sources in the horizontal plane. The higher order perceptual feature of auditory space is thought to be encoded together by these binaural differences, yet evidence of their integration in cortex remains elusive. Although present results show some common effects between the two cues, key differences were observed that are not consistent with an ITD-like opponent-channel process for ILD encoding.
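The opponent-channel model tested in this study reads out location from the relative activity of two broadly hemifield-tuned neural populations. A minimal sketch, in which the sigmoid tuning width (30°) is an arbitrary assumption:

```python
import numpy as np

def channel_response(azimuth_deg, preferred_sign):
    """Broadly tuned hemifield channel: sigmoid of azimuth,
    increasing toward the preferred (left or right) hemifield."""
    return 1.0 / (1.0 + np.exp(-preferred_sign * azimuth_deg / 30.0))

az = np.linspace(-90, 90, 181)       # source azimuth in degrees
left = channel_response(az, -1.0)    # left-preferring population
right = channel_response(az, +1.0)   # right-preferring population
opponent = right - left              # opponent code: relative activity

# The opponent signal is monotonic in azimuth, so it preserves the
# left-to-right ordering of source locations with only two channels.
assert np.all(np.diff(opponent) > 0)
```

The readout is zero at midline and saturates toward the lateral extremes, which is why such a code predicts systematic, hemifield-dependent responses to lateral shifts.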
Affiliation(s)
- Erol J Ozmeral
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
- David A Eddins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
- Department of Chemical and Biomedical Engineering, University of South Florida, Tampa, Florida
- Ann Clock Eddins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
16. What's what in auditory cortices? Neuroimage 2018; 176:29-40. [DOI: 10.1016/j.neuroimage.2018.04.028] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5]
17. Zhang M, Mary Ying YL, Ihlefeld A. Spatial Release From Informational Masking: Evidence From Functional Near Infrared Spectroscopy. Trends Hear 2018; 22:2331216518817464. [PMID: 30558491] [PMCID: PMC6299332] [DOI: 10.1177/2331216518817464] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3]
Abstract
Informational masking (IM) can greatly reduce speech intelligibility, but the neural mechanisms underlying IM are not understood. Binaural differences between target and masker can improve speech perception. In general, improvement in masked speech intelligibility due to provision of spatial cues is called spatial release from masking. Here, we focused on an aspect of spatial release from masking, specifically, the role of spatial attention. We hypothesized that in a situation with IM background sound (a) attention to speech recruits lateral frontal cortex (LFCx) and (b) LFCx activity varies with direction of spatial attention. Using functional near infrared spectroscopy, we assessed LFCx activity bilaterally in normal-hearing listeners. In Experiment 1, two talkers were simultaneously presented. Listeners either attended to the target talker (speech task) or they listened passively to an unintelligible, scrambled version of the acoustic mixture (control task). Target and masker differed in pitch and interaural time difference (ITD). Relative to the passive control, LFCx activity increased during attentive listening. Experiment 2 measured how LFCx activity varied with ITD, by testing listeners on the speech task in Experiment 1, except that talkers either were spatially separated by ITD or colocated. Results show that directing of auditory attention activates LFCx bilaterally. Moreover, right LFCx is recruited more strongly in the spatially separated as compared with colocated configurations. Findings hint that LFCx function contributes to spatial release from masking in situations with IM.
Affiliation(s)
- Min Zhang
- Department of Biomedical Engineering, New Jersey Institute of Technology, Newark, NJ, USA
- Graduate School of Biomedical Sciences, Rutgers University, Newark, NJ, USA
- Yu-Lan Mary Ying
- Department of Otolaryngology-Head and Neck Surgery, Rutgers New Jersey Medical School, Newark, NJ, USA
- Antje Ihlefeld
- Department of Biomedical Engineering, New Jersey Institute of Technology, Newark, NJ, USA
18. Zamorano AM, Cifre I, Montoya P, Riquelme I, Kleber B. Insula-based networks in professional musicians: Evidence for increased functional connectivity during resting state fMRI. Hum Brain Mapp 2017; 38:4834-4849. [PMID: 28737256] [DOI: 10.1002/hbm.23682] [Citation(s) in RCA: 36] [Impact Index Per Article: 5.1]
Abstract
Despite considerable research on experience-dependent neuroplasticity in professional musicians, detailed understanding of an involvement of the insula is only now beginning to emerge. We investigated the effects of musical training on intrinsic insula-based connectivity in professional classical musicians relative to nonmusicians using resting-state functional MRI. Following a tripartite scheme of insula subdivisions, coactivation profiles were analyzed for the posterior, ventral anterior, and dorsal anterior insula in both hemispheres. While whole-brain connectivity across all participants confirmed previously reported patterns, between-group comparisons revealed increased insular connectivity in musicians relative to nonmusicians. Coactivated regions encompassed constituents of large-scale networks involved in salience detection (e.g., anterior and middle cingulate cortex), affective processing (e.g., orbitofrontal cortex and temporal pole), and higher order cognition (e.g., dorsolateral prefrontal cortex and the temporoparietal junction), whereas no differences were found for the reversed group contrast. Importantly, these connectivity patterns were stronger in musicians who experienced more years of musical practice, including also sensorimotor regions involved in music performance (M1 hand area, S1, A1, and SMA). We conclude that musical training triggers significant reorganization in insula-based networks, potentially facilitating high-level cognitive and affective functions associated with the fast integration of multisensory information in the context of music performance. Hum Brain Mapp 38:4834-4849, 2017. © 2017 Wiley Periodicals, Inc.
Affiliation(s)
- Anna M Zamorano
- Research Institute of Health Sciences (IUNICS-IdISBa), University of the Balearic Islands, Palma de Mallorca, Spain
- Ignacio Cifre
- University Ramon Llull, Blanquerna, FPCEE, Barcelona, Spain
- Pedro Montoya
- Research Institute of Health Sciences (IUNICS-IdISBa), University of the Balearic Islands, Palma de Mallorca, Spain
- Inmaculada Riquelme
- Research Institute of Health Sciences (IUNICS-IdISBa), University of the Balearic Islands, Palma de Mallorca, Spain
- Department of Nursing and Physiotherapy, University of the Balearic Islands, Palma de Mallorca, Spain
- Boris Kleber
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, Denmark
- Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Tübingen, Germany
19. Saltuklaroglu T, Harkrider AW, Thornton D, Jenson D, Kittilstved T. EEG Mu (µ) rhythm spectra and oscillatory activity differentiate stuttering from non-stuttering adults. Neuroimage 2017; 153:232-245. [PMID: 28400266] [PMCID: PMC5569894] [DOI: 10.1016/j.neuroimage.2017.04.022] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9]
Abstract
Stuttering is linked to sensorimotor deficits related to internal modeling mechanisms. This study compared spectral power and oscillatory activity of EEG mu (μ) rhythms between persons who stutter (PWS) and controls in listening and auditory discrimination tasks. EEG data were analyzed from passive listening in noise and accurate (same/different) discrimination of tones or syllables in quiet and noisy backgrounds. Independent component analysis identified left and/or right μ rhythms with characteristic alpha (α) and beta (β) peaks localized to premotor/motor regions in 23 of 27 PWS and 24 of 27 controls. PWS produced μ spectra with reduced β amplitudes across conditions, suggesting reduced forward modeling capacity. Group time-frequency differences were associated with noisy conditions only. PWS showed increased μ-β desynchronization when listening to noise and early in discrimination events, suggesting heightened motor activity that might be related to forward modeling deficits. PWS also showed reduced μ-α synchronization in discrimination conditions, indicating reduced sensory gating. Together, these findings indicate that spectral and oscillatory analyses of μ rhythms are sensitive to stuttering. More specifically, they can reveal stuttering-related sensorimotor processing differences in listening and auditory discrimination that may also be influenced by basal ganglia deficits.
Affiliation(s)
- Tim Saltuklaroglu
- University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, USA
- Ashley W Harkrider
- University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, USA
- David Thornton
- University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, USA
- David Jenson
- University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, USA
- Tiffani Kittilstved
- University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, USA
20
Abstract
Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information, a phenomenon referred to as the 'cocktail party problem'. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a multitude of behavioural and cognitive processes that can shape sensory processing. Attention can be modulated by 'bottom-up' sensory-driven factors, as well as by 'top-down' task-specific goals, expectations, and learned schemas. Essentially, it acts as a selection process, or processes, that focuses both sensory and cognitive resources on the most relevant events in the soundscape, with relevance being dictated by the stimulus itself (e.g. a loud explosion) or by the task at hand (e.g. listening for announcements in a busy airport). Recent computational models of auditory attention provide key insights into its role in facilitating perception in cluttered auditory scenes. This article is part of the themed issue 'Auditory and visual scene analysis'.
Affiliation(s)
- Emine Merve Kaya
- Laboratory for Computational Audio Perception, Department of Electrical and Computer Engineering, The Johns Hopkins University, 3400 N Charles Street, Barton Hall, Baltimore, MD 21218, USA
- Mounya Elhilali
- Laboratory for Computational Audio Perception, Department of Electrical and Computer Engineering, The Johns Hopkins University, 3400 N Charles Street, Barton Hall, Baltimore, MD 21218, USA
21. Shestopalova L, Petropavlovskaia E, Vaitulevich S, Nikitin N. Hemispheric asymmetry of ERPs and MMNs evoked by slow, fast and abrupt auditory motion. Neuropsychologia 2016; 91:465-479. [DOI: 10.1016/j.neuropsychologia.2016.09.011] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4]
22. Zündorf IC, Lewald J, Karnath HO. Testing the dual-pathway model for auditory processing in human cortex. Neuroimage 2015; 124:672-681. [PMID: 26388552] [DOI: 10.1016/j.neuroimage.2015.09.026] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6]
Abstract
Analogous to the visual system, auditory information has been proposed to be processed in two largely segregated streams: an anteroventral ("what") pathway mainly subserving sound identification and a posterodorsal ("where") stream mainly subserving sound localization. Despite the popularity of this assumption, the degree of separation of spatial and non-spatial auditory information processing in cortex is still under discussion. In the present study, a statistical approach was implemented to investigate potential behavioral dissociations for spatial and non-spatial auditory processing in stroke patients, and voxel-wise lesion analyses were used to uncover their neural correlates. The results generally provided support for anatomically and functionally segregated auditory networks. However, some degree of anatomo-functional overlap between "what" and "where" aspects of processing was found in the superior pars opercularis of right inferior frontal gyrus (Brodmann area 44), suggesting the potential existence of a shared target area of both auditory streams in this region. Moreover, beyond the typically defined posterodorsal stream (i.e., posterior superior temporal gyrus, inferior parietal lobule, and superior frontal sulcus), occipital lesions were found to be associated with sound localization deficits. These results, indicating anatomically and functionally complex cortical networks for spatial and non-spatial auditory processing, are roughly consistent with the dual-pathway model of auditory processing in its original form, but argue for the need to refine and extend this widely accepted hypothesis.
Affiliation(s)
- Ida C Zündorf
- Center of Neurology, Division of Neuropsychology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Jörg Lewald
- Department of Cognitive Psychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr University Bochum, Bochum, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Hans-Otto Karnath
- Center of Neurology, Division of Neuropsychology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Department of Psychology, University of South Carolina, Columbia, SC 29208, USA
23. Amaral AA, Langers DR. Tinnitus-related abnormalities in visual and salience networks during a one-back task with distractors. Hear Res 2015; 326:15-29. [DOI: 10.1016/j.heares.2015.03.006] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1]
24. Sugiura L, Ojima S, Matsuba-Kurita H, Dan I, Tsuzuki D, Katura T, Hagiwara H. Effects of sex and proficiency in second language processing as revealed by a large-scale fNIRS study of school-aged children. Hum Brain Mapp 2015; 36:3890-3911. [PMID: 26147179] [DOI: 10.1002/hbm.22885] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2]
Abstract
Previous neuroimaging studies in adults have revealed that first and second languages (L1/L2) share similar neural substrates and that proficiency is a major determinant of the neural organization of L2 in the lexical-semantic and syntactic domains. However, little is known about the neural substrates of children in the phonological domain, or about sex differences. Here, we conducted a large-scale study (n = 484) of school-aged children using functional near-infrared spectroscopy and a word repetition task, which requires extensive phonological processing. We investigated cortical activation during word processing, emphasizing sex differences, to clarify similarities and differences between L1 and L2, and proficiency-related differences during early L2 learning. L1 and L2 shared similar neural substrates, with decreased activation in L2 compared to L1 in the posterior superior/middle temporal and angular/supramarginal gyri for both sexes. Significant sex differences were found in cortical activation within language areas during high-frequency word processing but not during low-frequency word processing. During high-frequency word processing, widely distributed areas including the angular/supramarginal gyri were activated in boys, whereas more restricted areas, excluding the angular/supramarginal gyri, were activated in girls. Significant sex differences were also found in L2 proficiency-related activation: activation increased significantly with proficiency in boys, whereas no proficiency-related differences were found in girls. Importantly, cortical sex differences emerged with proficiency. Based on previous research, the present results indicate that sex differences are acquired or enlarged during language development through different cognitive strategies between the sexes, possibly reflecting their different memory functions.
Affiliation(s)
- Lisa Sugiura
- Department of Language Sciences, Graduate School of Humanities, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji, Tokyo, 192-0397, Japan
- Research Institute of Science and Technology for Society (RISTEX), Japan Science and Technology Agency (JST), Niban-Cho, Chiyoda-Ku, Tokyo, 100-0004, Japan
- Research Center for Language, Brain and Genetics, Tokyo Metropolitan University
- Shiro Ojima
- Department of Language Sciences, Graduate School of Humanities, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji, Tokyo, 192-0397, Japan
- Research Institute of Science and Technology for Society (RISTEX), Japan Science and Technology Agency (JST), Niban-Cho, Chiyoda-Ku, Tokyo, 100-0004, Japan
- Hiroko Matsuba-Kurita
- Research Institute of Science and Technology for Society (RISTEX), Japan Science and Technology Agency (JST), Niban-Cho, Chiyoda-Ku, Tokyo, 100-0004, Japan
- Ippeita Dan
- Applied Cognitive Neuroscience Lab, Faculty of Science and Engineering, Chuo University, 1-13-27 Kasuga, Bunkyo-Ku, Tokyo, 112-8551, Japan
- Daisuke Tsuzuki
- Applied Cognitive Neuroscience Lab, Faculty of Science and Engineering, Chuo University, 1-13-27 Kasuga, Bunkyo-Ku, Tokyo, 112-8551, Japan
- Information Science and Technology Department, National Institute of Technology, Yuge College, 1000 Shimoyuge, Yuge, Kamijima-cho, Ochi-gun, Ehime, 794-2593, Japan
- Takusige Katura
- Center for Exploratory Research, Research & Development Group, Hitachi, Ltd., Hatoyama, Saitama, 350-0395, Japan
- Hiroko Hagiwara
- Department of Language Sciences, Graduate School of Humanities, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji, Tokyo, 192-0397, Japan
- Research Institute of Science and Technology for Society (RISTEX), Japan Science and Technology Agency (JST), Niban-Cho, Chiyoda-Ku, Tokyo, 100-0004, Japan
- Research Center for Language, Brain and Genetics, Tokyo Metropolitan University
25. Mock JR, Seay MJ, Charney DR, Holmes JL, Golob EJ. Rapid cortical dynamics associated with auditory spatial attention gradients. Front Neurosci 2015; 9:179. [PMID: 26082679] [PMCID: PMC4451343] [DOI: 10.3389/fnins.2015.00179] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8]
Abstract
Behavioral and EEG studies suggest spatial attention is allocated as a gradient in which processing benefits decrease away from an attended location. Yet the spatiotemporal dynamics of cortical processes that contribute to attentional gradients are unclear. We measured EEG while participants (n = 35) performed an auditory spatial attention task that required a button press to sounds at one target location on either the left or right. Distractor sounds were randomly presented at four non-target locations evenly spaced up to 180° from the target location. Attentional gradients were quantified by regressing ERP amplitudes elicited by distractors against their spatial location relative to the target. Independent component analysis was applied to each subject's scalp channel data, allowing isolation of distinct cortical sources. Results from scalp ERPs showed a tri-phasic response with gradient slope peaks at ~300 ms (frontal, positive), ~430 ms (posterior, negative), and a plateau starting at ~550 ms (frontal, positive). Corresponding to the first slope peak, a positive gradient was found within a central component when attending to both target locations and for two lateral frontal components when contralateral to the target location. Similarly, a central posterior component had a negative gradient that corresponded to the second slope peak regardless of target location. A right posterior component showed an ipsilateral gradient followed by a contralateral one. Lateral posterior clusters also had decreases in α and β oscillatory power with a negative slope and contralateral tuning. Only the left posterior component (120-200 ms) corresponded to absolute sound location. The findings indicate a rapid, temporally organized sequence of gradients thought to reflect interplay between frontal and parietal regions. We conclude that these gradients support a target-based saliency map exhibiting aspects of both right-hemisphere dominance and opponent process models.
Affiliation(s)
- Jeffrey R Mock
- Department of Psychology, Tulane University, New Orleans, LA, USA
- Michael J Seay
- Department of Psychology, Tulane University, New Orleans, LA, USA
- John L Holmes
- Department of Psychology, Tulane University, New Orleans, LA, USA
- Edward J Golob
- Department of Psychology, Tulane University, New Orleans, LA, USA; Program in Neuroscience, Tulane University, New Orleans, LA, USA; Program in Aging, Tulane University, New Orleans, LA, USA
26
Neural dynamics underlying attentional orienting to auditory representations in short-term memory. J Neurosci 2015; 35:1307-18. [PMID: 25609643] [DOI: 10.1523/jneurosci.1487-14.2015]
Abstract
Sounds are ephemeral. Thus, coherent auditory perception depends on "hearing" back in time: retrospectively attending that which was lost externally but preserved in short-term memory (STM). Current theories of auditory attention assume that sound features are integrated into a perceptual object, that multiple objects can coexist in STM, and that attention can be deployed to an object in STM. Recording electroencephalography from humans, we tested these assumptions, elucidating feature-general and feature-specific neural correlates of auditory attention to STM. Alpha/beta oscillations and frontal and posterior event-related potentials indexed feature-general top-down attentional control to one of several coexisting auditory representations in STM. Particularly, task performance during attentional orienting was correlated with alpha/low-beta desynchronization (i.e., power suppression). However, attention to one feature could occur without simultaneous processing of the second feature of the representation. Therefore, auditory attention to memory relies on both feature-specific and feature-general neural dynamics.
27
Spagna A, Mackie MA, Fan J. Supramodal executive control of attention. Front Psychol 2015; 6:65. [PMID: 25759674] [PMCID: PMC4338659] [DOI: 10.3389/fpsyg.2015.00065]
Abstract
The human attentional system can be subdivided into three functional networks of alerting, orienting, and executive control. Although these networks have been extensively studied in the visuospatial modality, whether the same mechanisms are deployed across different sensory modalities remains unclear. In this study we used the attention network test for the visuospatial modality, in addition to two auditory variants with spatial and frequency manipulations to examine cross-modal correlations between network functions. Results showed that among the visual and auditory tasks, the effects of executive control, but not effects of alerting and orienting, were significantly correlated. These findings suggest that while alerting and orienting functions rely more upon modality-specific processes, the executive control of attention coordinates complex behavior via supramodal mechanisms.
Affiliation(s)
- Alfredo Spagna
- Department of Psychology, Queens College, City University of New York, New York, NY, USA
- Melissa-Ann Mackie
- Department of Psychology, Queens College, City University of New York, New York, NY, USA; The Graduate Center, City University of New York, New York, NY, USA
- Jin Fan
- Department of Psychology, Queens College, City University of New York, New York, NY, USA; The Graduate Center, City University of New York, New York, NY, USA; Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA; Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
28
Abstract
The auditory system derives locations of sound sources from spatial cues provided by the interaction of sound with the head and external ears. Those cues are analyzed in specific brainstem pathways and then integrated as cortical representation of locations. The principal cues for horizontal localization are interaural time differences (ITDs) and interaural differences in sound level (ILDs). Vertical and front/back localization rely on spectral-shape cues derived from direction-dependent filtering properties of the external ears. The likely first sites of analysis of these cues are the medial superior olive (MSO) for ITDs, lateral superior olive (LSO) for ILDs, and dorsal cochlear nucleus (DCN) for spectral-shape cues. Localization in distance is much less accurate than that in horizontal and vertical dimensions, and interpretation of the basic cues is influenced by additional factors, including acoustics of the surroundings and familiarity of source spectra and levels. Listeners are quite sensitive to sound motion, but it remains unclear whether that reflects specific motion detection mechanisms or simply detection of changes in static location. Intact auditory cortex is essential for normal sound localization. Cortical representation of sound locations is highly distributed, with no evidence for point-to-point topography. Spatial representation is strictly contralateral in laboratory animals that have been studied, whereas humans show a prominent right-hemisphere dominance.
Affiliation(s)
- John C Middlebrooks
- Departments of Otolaryngology, Neurobiology and Behavior, Cognitive Sciences, and Biomedical Engineering, University of California at Irvine, Irvine, CA, USA.
29
Wisniewski MG, Mercado E, Church BA, Gramann K, Makeig S. Brain dynamics that correlate with effects of learning on auditory distance perception. Front Neurosci 2014; 8:396. [PMID: 25538550] [PMCID: PMC4260497] [DOI: 10.3389/fnins.2014.00396]
Abstract
Accuracy in auditory distance perception can improve with practice and varies for sounds differing in familiarity. Here, listeners were trained to judge the distances of English, Bengali, and backwards speech sources pre-recorded at near (2-m) and far (30-m) distances. Listeners' accuracy was tested before and after training. Improvements from pre-test to post-test were greater for forward speech, demonstrating a learning advantage for forward speech sounds. Independent component (IC) processes identified in electroencephalographic (EEG) data collected during pre- and post-testing revealed three clusters of ICs across subjects with stimulus-locked spectral perturbations related to learning and accuracy. One cluster exhibited a transient stimulus-locked increase in 4–8 Hz power (theta event-related synchronization; ERS) that was smaller after training and largest for backwards speech. For a left temporal cluster, 8–12 Hz decreases in power (alpha event-related desynchronization; ERD) were greatest for English speech and less prominent after training. In contrast, a cluster of IC processes centered at or near anterior portions of the medial frontal cortex showed learning-related enhancement of sustained increases in 10–16 Hz power (upper-alpha/low-beta ERS). The degree of this enhancement was positively correlated with the degree of behavioral improvements. Results suggest that neural dynamics in non-auditory cortical areas support distance judgments. Further, frontal cortical networks associated with attentional and/or working memory processes appear to play a role in perceptual learning for source distance.
Affiliation(s)
- Matthew G Wisniewski
- 711th Human Performance Wing, U.S. Air Force Research Laboratory, Wright-Patterson Air Force Base, OH, USA; Department of Psychology, University at Buffalo, The State University of New York, Buffalo, NY, USA
- Eduardo Mercado
- Department of Psychology, University at Buffalo, The State University of New York, Buffalo, NY, USA
- Barbara A Church
- Department of Psychology, University at Buffalo, The State University of New York, Buffalo, NY, USA
- Klaus Gramann
- Biological Psychology and Neuroergonomics, Berlin Institute of Technology, Berlin, Germany
- Scott Makeig
- Swartz Center for Computational Neuroscience, Institute for Neural Computation, University of California, San Diego, San Diego, CA, USA
30
Spada D, Verga L, Iadanza A, Tettamanti M, Perani D. The auditory scene: An fMRI study on melody and accompaniment in professional pianists. Neuroimage 2014; 102 Pt 2:764-75. [DOI: 10.1016/j.neuroimage.2014.08.036]
31
Fujii S, Wan CY. The Role of Rhythm in Speech and Language Rehabilitation: The SEP Hypothesis. Front Hum Neurosci 2014; 8:777. [PMID: 25352796] [PMCID: PMC4195275] [DOI: 10.3389/fnhum.2014.00777]
Abstract
For thousands of years, human beings have engaged in rhythmic activities such as drumming, dancing, and singing. Rhythm can be a powerful medium to stimulate communication and social interactions, due to the strong sensorimotor coupling. For example, the mere presence of an underlying beat or pulse can result in spontaneous motor responses such as hand clapping, foot stepping, and rhythmic vocalizations. Examining the relationship between rhythm and speech is fundamental not only to our understanding of the origins of human communication but also in the treatment of neurological disorders. In this paper, we explore whether rhythm has therapeutic potential for promoting recovery from speech and language dysfunctions. Although clinical studies are limited to date, existing experimental evidence demonstrates rich rhythmic organization in both music and language, as well as overlapping brain networks that are crucial in the design of rehabilitation approaches. Here, we propose the “SEP” hypothesis, which postulates that (1) “sound envelope processing” and (2) “synchronization and entrainment to pulse” may help stimulate brain networks that underlie human communication. Ultimately, we hope that the SEP hypothesis will provide a useful framework for facilitating rhythm-based research in various patient populations.
Affiliation(s)
- Shinya Fujii
- Heart and Stroke Foundation Canadian Partnership for Stroke Recovery, Sunnybrook Research Institute, Toronto, ON, Canada
- Catherine Y Wan
- Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
32
Dissociating Cortical Activity during Processing of Native and Non-Native Audiovisual Speech from Early to Late Infancy. Brain Sci 2014; 4:471-87. [PMID: 25116572] [PMCID: PMC4194034] [DOI: 10.3390/brainsci4030471]
Abstract
Initially, infants are capable of discriminating phonetic contrasts across the world's languages. Starting between seven and ten months of age, they gradually lose this ability through a process of perceptual narrowing. Although traditionally investigated with isolated speech sounds, such narrowing occurs in a variety of perceptual domains (e.g., faces, visual speech). Thus far, tracking the developmental trajectory of this tuning process has been focused primarily on auditory speech alone, and generally using isolated sounds. But infants learn from speech produced by people talking to them, meaning they learn from a complex audiovisual signal. Here, we use near-infrared spectroscopy to measure blood concentration changes in the bilateral temporal cortices of infants in three different age groups: 3-to-6 months, 7-to-10 months, and 11-to-14-months. Critically, all three groups of infants were tested with continuous audiovisual speech in both their native and another, unfamiliar language. We found that at each age range, infants showed different patterns of cortical activity in response to the native and non-native stimuli. Infants in the youngest group showed bilateral cortical activity that was greater overall in response to non-native relative to native speech; the oldest group showed left lateralized activity in response to native relative to non-native speech. These results highlight perceptual tuning as a dynamic process that happens across modalities and at different levels of stimulus complexity.
33
Zündorf IC, Karnath HO, Lewald J. The effect of brain lesions on sound localization in complex acoustic environments. Brain 2014; 137:1410-8. [PMID: 24618271] [DOI: 10.1093/brain/awu044]
Abstract
Localizing sound sources of interest in cluttered acoustic environments--as in the 'cocktail-party' situation--is one of the most demanding challenges to the human auditory system in everyday life. In this study, stroke patients' ability to localize acoustic targets in a single-source and in a multi-source setup in the free sound field were directly compared. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources rather than localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.
Affiliation(s)
- Ida C Zündorf
- Centre of Neurology, Division of Neuropsychology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
34
Amaral AA, Langers DRM. The relevance of task-irrelevant sounds: hemispheric lateralization and interactions with task-relevant streams. Front Neurosci 2013; 7:264. [PMID: 24409115] [PMCID: PMC3873511] [DOI: 10.3389/fnins.2013.00264]
Abstract
The effect of unattended task-irrelevant auditory stimuli in the context of an auditory task is not well understood. Using human functional magnetic resonance imaging (fMRI) we compared blood oxygenation level dependent (BOLD) signal changes resulting from monotic task-irrelevant stimulation, monotic task-relevant stimulation and dichotic stimulation with an attended task-relevant stream to one ear and an unattended task-irrelevant stream to the other ear simultaneously. We found strong bilateral BOLD signal changes in the auditory cortex (AC) resulting from monotic stimulation in a passive listening condition. Consistent with previous work, these responses were largest on the side contralateral to stimulation. AC responses to the unattended (task-irrelevant) sounds were preferentially contralateral and strongest for the most difficult condition. Stronger bilateral AC responses occurred during monotic passive-listening than to an unattended stream presented in a dichotic condition, with attention focused on one ear. Additionally, the visual cortex showed negative responses compared to the baseline in all stimulus conditions including passive listening. Our results suggest that during dichotic listening, with attention focused on one ear, (1) the contralateral and the ipsilateral auditory pathways are suppressively interacting; and (2) cross-modal inhibition occurs during purely acoustic stimulation. These findings support the existence of response suppressions within and between modalities in the presence of competing interfering stimuli.
Affiliation(s)
- Ana A Amaral
- International Neuroscience Doctoral Programme, Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, Lisbon, Portugal; Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Dave R M Langers
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; National Institute for Health Research, Nottingham Hearing Biomedical Research Unit, School of Medicine, University of Nottingham, Nottingham, UK
35
Abstract
The fundamental perceptual unit in hearing is the 'auditory object'. Similar to visual objects, auditory objects are the computational result of the auditory system's capacity to detect, extract, segregate and group spectrotemporal regularities in the acoustic environment; the multitude of acoustic stimuli around us together form the auditory scene. However, unlike the visual scene, resolving the component objects within the auditory scene crucially depends on their temporal structure. Neural correlates of auditory objects are found throughout the auditory system. However, neural responses do not become correlated with a listener's perceptual reports until the level of the cortex. The roles of different neural structures and the contribution of different cognitive states to the perception of auditory objects are not yet fully understood.
36
Huang S, Chang WT, Belliveau JW, Hämäläinen M, Ahveninen J. Lateralized parietotemporal oscillatory phase synchronization during auditory selective attention. Neuroimage 2013; 86:461-9. [PMID: 24185023] [DOI: 10.1016/j.neuroimage.2013.10.043]
Abstract
Based on the infamous left-lateralized neglect syndrome, one might hypothesize that the dominating right parietal cortex has a bilateral representation of space, whereas the left parietal cortex represents only the contralateral right hemispace. Whether this principle applies to human auditory attention is not yet fully clear. Here, we explicitly tested the differences in cross-hemispheric functional coupling between the intraparietal sulcus (IPS) and auditory cortex (AC) using combined magnetoencephalography (MEG), EEG, and functional MRI (fMRI). Inter-regional pairwise phase consistency (PPC) was analyzed from data obtained during dichotic auditory selective attention task, where subjects were in 10-s trials cued to attend to sounds presented to one ear and to ignore sounds presented in the opposite ear. Using MEG/EEG/fMRI source modeling, parietotemporal PPC patterns were (a) mapped between all AC locations vs. IPS seeds and (b) analyzed between four anatomically defined AC regions-of-interest (ROI) vs. IPS seeds. Consistent with our hypothesis, stronger cross-hemispheric PPC was observed between the right IPS and left AC for attended right-ear sounds, as compared to PPC between the left IPS and right AC for attended left-ear sounds. In the mapping analyses, these differences emerged at 7-13Hz, i.e., at the theta to alpha frequency bands, and peaked in Heschl's gyrus and lateral posterior non-primary ACs. The ROI analysis revealed similarly lateralized differences also in the beta and lower theta bands. Taken together, our results support the view that the right parietal cortex dominates auditory spatial attention.
Affiliation(s)
- Samantha Huang
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Wei-Tang Chang
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- John W Belliveau
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Harvard-MIT Division of Health Sciences and Technology, Cambridge, MA, USA
- Matti Hämäläinen
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Harvard-MIT Division of Health Sciences and Technology, Cambridge, MA, USA
- Jyrki Ahveninen
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
37
Switching auditory attention using spatial and non-spatial features recruits different cortical networks. Neuroimage 2013; 84:681-7. [PMID: 24096028] [DOI: 10.1016/j.neuroimage.2013.09.061]
Abstract
Switching attention between different stimuli of interest based on particular task demands is important in many everyday settings. In audition in particular, switching attention between different speakers of interest that are talking concurrently is often necessary for effective communication. Recently, it has been shown by multiple studies that auditory selective attention suppresses the representation of unwanted streams in auditory cortical areas in favor of the target stream of interest. However, the neural processing that guides this selective attention process is not well understood. Here we investigated the cortical mechanisms involved in switching attention based on two different types of auditory features. By combining magneto- and electro-encephalography (M-EEG) with an anatomical MRI constraint, we examined the cortical dynamics involved in switching auditory attention based on either spatial or pitch features. We designed a paradigm where listeners were cued in the beginning of each trial to switch or maintain attention halfway through the presentation of concurrent target and masker streams. By allowing listeners time to switch during a gap in the continuous target and masker stimuli, we were able to isolate the mechanisms involved in endogenous, top-down attention switching. Our results show a double dissociation between the involvement of right temporoparietal junction (RTPJ) and the left inferior parietal supramarginal part (LIPSP) in tasks requiring listeners to switch attention based on space and pitch features, respectively, suggesting that switching attention based on these features involves at least partially separate processes or behavioral strategies.
38
Abstract
The challenge of understanding how the brain processes natural signals is compounded by the fact that such signals are often tied closely to specific natural behaviors and natural environments. This added complexity is especially true for auditory communication signals that can carry information at multiple hierarchical levels, and often occur in the context of other competing communication signals. Selective attention provides a mechanism to focus processing resources on specific components of auditory signals, and simultaneously suppress responses to unwanted signals or noise. Although selective auditory attention has been well-studied behaviorally, very little is known about how selective auditory attention shapes the processing of natural auditory signals, and how the mechanisms of auditory attention are implemented in single neurons or neural circuits. Here we review the role of selective attention in modulating auditory responses to complex natural stimuli in humans. We then suggest how the current understanding can be applied to the study of selective auditory attention in the context of natural signal processing at the level of single neurons and populations in animal models amenable to invasive neuroscience techniques. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".
39
Alho K, Rinne T, Herron TJ, Woods DL. Stimulus-dependent activations and attention-related modulations in the auditory cortex: a meta-analysis of fMRI studies. Hear Res 2013; 307:29-41. [PMID: 23938208] [DOI: 10.1016/j.heares.2013.08.001]
Abstract
We meta-analyzed 115 functional magnetic resonance imaging (fMRI) studies reporting auditory-cortex (AC) coordinates for activations related to active and passive processing of pitch and spatial location of non-speech sounds, as well as to the active and passive speech and voice processing. We aimed at revealing any systematic differences between AC surface locations of these activations by statistically analyzing the activation loci using the open-source Matlab toolbox VAMCA (Visualization and Meta-analysis on Cortical Anatomy). AC activations associated with pitch processing (e.g., active or passive listening to tones with a varying vs. fixed pitch) had median loci in the middle superior temporal gyrus (STG), lateral to Heschl's gyrus. However, median loci of activations due to the processing of infrequent pitch changes in a tone stream were centered in the STG or planum temporale (PT), significantly posterior to the median loci for other types of pitch processing. Median loci of attention-related modulations due to focused attention to pitch (e.g., attending selectively to low or high tones delivered in concurrent sequences) were, in turn, centered in the STG or superior temporal sulcus (STS), posterior to median loci for passive pitch processing. Activations due to spatial processing were centered in the posterior STG or PT, significantly posterior to pitch processing loci (processing of infrequent pitch changes excluded). In the right-hemisphere AC, the median locus of spatial attention-related modulations was in the STS, significantly inferior to the median locus for passive spatial processing. Activations associated with speech processing and those associated with voice processing had indistinguishable median loci at the border of mid-STG and mid-STS. Median loci of attention-related modulations due to attention to speech were in the same mid-STG/STS region. 
Thus, while attention to the pitch or location of non-speech sounds seems to recruit AC areas less involved in passive pitch or location processing, focused attention to speech predominantly enhances activations in regions that already respond to human vocalizations during passive listening. This suggests that distinct attention mechanisms might be engaged by attention to speech and attention to more elemental auditory features such as tone pitch or location. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Affiliation(s)
- Kimmo Alho
- Helsinki Collegium for Advanced Studies, University of Helsinki, PO Box 4, FI 00014 Helsinki, Finland; Institute of Behavioural Sciences, University of Helsinki, PO Box 9, FI 00014 Helsinki, Finland.
40
Ahveninen J, Huang S, Belliveau JW, Chang WT, Hämäläinen M. Dynamic oscillatory processes governing cued orienting and allocation of auditory attention. J Cogn Neurosci 2013; 25:1926-43. [PMID: 23915050] [DOI: 10.1162/jocn_a_00452]
Abstract
In everyday listening situations, we need to constantly switch between alternative sound sources and engage attention according to cues that match our goals and expectations. The exact neuronal bases of these processes are poorly understood. We investigated oscillatory brain networks controlling auditory attention using cortically constrained fMRI-weighted magnetoencephalography/EEG source estimates. During consecutive trials, participants were instructed to shift attention based on a cue, presented in the ear where a target was likely to follow. To promote audiospatial attention effects, the targets were embedded in streams of dichotically presented standard tones. Occasionally, an unexpected novel sound occurred opposite to the cued ear to trigger involuntary orienting. According to our cortical power correlation analyses, increased frontoparietal/temporal 30-100 Hz gamma activity at 200-1400 msec after cued orienting predicted fast and accurate discrimination of subsequent targets. This sustained correlation effect, possibly reflecting voluntary engagement of attention after the initial cue-driven orienting, spread from the TPJ, anterior insula, and inferior frontal cortices to the right FEFs. Engagement of attention to one ear resulted in a significantly stronger increase of 7.5-15 Hz alpha in the ipsilateral than contralateral parieto-occipital cortices 200-600 msec after the cue onset, possibly reflecting cross-modal modulation of the dorsal visual pathway during audiospatial attention. Comparisons of cortical power patterns also revealed significant increases of sustained right medial frontal cortex theta power, right dorsolateral pFC and anterior insula/inferior frontal cortex beta power, and medial parietal cortex and posterior cingulate cortex gamma activity after cued versus novelty-triggered orienting (600-1400 msec). 
Our results reveal sustained oscillatory patterns associated with voluntary engagement of auditory spatial attention, with the frontoparietal and temporal gamma increases being best predictors of subsequent behavioral performance.
Affiliation(s)
- Jyrki Ahveninen
- Harvard Medical School-Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA
41
Huang S, Seidman LJ, Rossi S, Ahveninen J. Distinct cortical networks activated by auditory attention and working memory load. Neuroimage 2013; 83:1098-108. [PMID: 23921102 DOI: 10.1016/j.neuroimage.2013.07.074] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2013] [Revised: 07/25/2013] [Accepted: 07/28/2013] [Indexed: 02/03/2023] Open
Abstract
Auditory attention and working memory (WM) allow for selection and maintenance of relevant sound information in our minds, respectively, thus underlying goal-directed functioning in everyday acoustic environments. It is still unclear whether these two closely coupled functions are based on a common neural circuit, or whether they involve genuinely distinct subfunctions with separate neuronal substrates. In a full factorial functional MRI (fMRI) design, we independently manipulated the levels of auditory-verbal WM load and attentional interference using modified Auditory Continuous Performance Tests. Although many frontoparietal regions were jointly activated by increases of WM load and interference, there was a double dissociation between prefrontal cortex (PFC) subareas associated selectively with either auditory attention or WM. Specifically, anterior dorsolateral PFC (DLPFC) and the right anterior insula were selectively activated by increasing WM load, whereas subregions of middle lateral PFC and inferior frontal cortex (IFC) were associated with interference only. Meanwhile, a superadditive interaction between interference and load was detected in left medial superior frontal cortex, suggesting that in this area, activations are not only overlapping, but reflect a common resource pool recruited by increased attentional and WM demands. Indices of WM-specific suppression of anterolateral non-primary auditory cortices (AC) and attention-specific suppression of primary AC were also found, possibly reflecting suppression/interruption of sound-object processing of irrelevant stimuli during continuous task performance. Our results suggest a double dissociation between auditory attention and working memory in subregions of anterior DLPFC vs. middle lateral PFC/IFC in humans, respectively, in the context of substantially overlapping circuits.
Affiliation(s)
- Samantha Huang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, USA.
42
Seydell-Greenwald A, Greenberg AS, Rauschecker JP. Are you listening? Brain activation associated with sustained nonspatial auditory attention in the presence and absence of stimulation. Hum Brain Mapp 2013; 35:2233-52. [PMID: 23913818 DOI: 10.1002/hbm.22323] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2012] [Revised: 02/22/2013] [Accepted: 04/15/2013] [Indexed: 11/12/2022] Open
Abstract
Neuroimaging studies investigating the voluntary (top-down) control of attention largely agree that this process recruits several frontal and parietal brain regions. Since most studies used attention tasks requiring several higher-order cognitive functions (e.g. working memory, semantic processing, temporal integration, spatial orienting) as well as different attentional mechanisms (attention shifting, distractor filtering), it is unclear what exactly the observed frontoparietal activations reflect. The present functional magnetic resonance imaging study investigated, within the same participants, signal changes in (1) a "Simple Attention" task in which participants attended to a single melody, (2) a "Selective Attention" task in which they simultaneously ignored another melody, and (3) a "Beep Monitoring" task in which participants listened in silence for a faint beep. Compared to resting conditions with identical stimulation, all tasks produced robust activation increases in auditory cortex, cross-modal inhibition in visual and somatosensory cortex, and decreases in the default mode network, indicating that participants were indeed focusing their attention on the auditory domain. However, signal increases in frontal and parietal brain areas were only observed for tasks 1 and 2, but completely absent for task 3. These results lead to the following conclusions: under most conditions, frontoparietal activations are crucial for attention since they subserve higher-order cognitive functions inherently related to attention. However, under circumstances that minimize other demands, nonspatial auditory attention in the absence of stimulation can be maintained without concurrent frontal or parietal activations.
Affiliation(s)
- Anna Seydell-Greenwald
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington DC, 20007
43
Schmithorst VJ, Farah R, Keith RW. Left ear advantage in speech-related dichotic listening is not specific to auditory processing disorder in children: A machine-learning fMRI and DTI study. NEUROIMAGE-CLINICAL 2013; 3:8-17. [PMID: 24179844 PMCID: PMC3791276 DOI: 10.1016/j.nicl.2013.06.016] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/22/2013] [Revised: 06/24/2013] [Accepted: 06/25/2013] [Indexed: 12/13/2022]
Abstract
Dichotic listening (DL) tests are among the most frequently included in batteries for the diagnosis of auditory processing disorders (APD) in children. A finding of atypical left ear advantage (LEA) for speech-related stimuli is often taken by clinical audiologists as an indicator of APD. However, the precise etiology of ear advantage in DL tests has been a source of debate for decades. It is uncertain whether a finding of LEA is truly indicative of a sensory processing deficit such as APD, or whether attentional or other supramodal factors may also influence ear advantage. Multivariate machine learning was applied to diffusion tensor imaging (DTI) and functional MRI (fMRI) data from a cohort of children ages 7–14 referred for APD testing who showed LEA, and from typical controls with right-ear advantage (REA). Relative to the REA controls, LEA was predicted by increased axial diffusivity in the left internal capsule (sublenticular region) and by decreased functional activation in the left frontal eye fields (BA 8) for words presented diotically as compared to words presented dichotically. These results indicate that both sensory and attentional deficits may be predictive of LEA; thus a finding of LEA, while possibly due to sensory factors, is not a specific indicator of APD, as it may stem from a supramodal etiology. Highlights: left-ear advantage (LEA) in speech-related dichotic listening tests is atypical; LEA is predicted by differences in functional activation in the frontal eye fields and by differences in white-matter microstructure in the left auditory radiation; LEA is therefore not specific for auditory processing disorder (APD) in children.
Affiliation(s)
- Vincent J Schmithorst
- Pediatric Neuroimaging Research Consortium, Dept. of Radiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
44
Nolden S, Grimault S, Guimond S, Lefebvre C, Bermudez P, Jolicoeur P. The retention of simultaneous tones in auditory short-term memory: a magnetoencephalography study. Neuroimage 2013; 82:384-92. [PMID: 23751862 DOI: 10.1016/j.neuroimage.2013.06.002] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2013] [Revised: 05/28/2013] [Accepted: 06/01/2013] [Indexed: 10/26/2022] Open
Abstract
We used magnetoencephalography (MEG) to localize brain activity related to the retention of tones differing in pitch. Participants retained one or two simultaneously presented tones. After a two-second interval a test tone was presented, and the task was to determine whether that tone was in memory. We focused on brain activity during the retention interval that increased as the number of sounds retained in auditory short-term memory (ASTM) increased. Source analyses revealed that the superior temporal gyrus in both hemispheres is involved in ASTM. In the right hemisphere, the inferior temporal gyrus, the inferior frontal gyrus, and parietal structures also play a role. Our method provides good spatial and temporal resolution for investigating neuronal correlates of ASTM and, as the first MEG study to use a memory-load manipulation without sequences of tones, it allowed us to isolate brain regions that most likely reflect the simple retention of tones.
Affiliation(s)
- Sophie Nolden
- CERNEC, Université de Montréal, QC, Canada; BRAMS, Montréal, QC, Canada.
45
Zündorf IC, Lewald J, Karnath HO. Neural correlates of sound localization in complex acoustic environments. PLoS One 2013; 8:e64259. [PMID: 23691185 PMCID: PMC3653868 DOI: 10.1371/journal.pone.0064259] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2013] [Accepted: 04/09/2013] [Indexed: 12/05/2022] Open
Abstract
Listening to and understanding people in a “cocktail-party situation” is a remarkable feature of the human auditory system. Here we investigated, in healthy subjects, the neural correlates of the ability to localize a particular sound among others in an acoustically cluttered environment. In a sound localization task, five different natural sounds were presented from five virtual spatial locations during functional magnetic resonance imaging (fMRI). Activity related to auditory stream segregation was revealed in the posterior superior temporal gyrus bilaterally, the anterior insula, the supplementary motor area, and a frontoparietal network. Moreover, the results indicated critical roles of the left planum temporale in extracting the sound of interest among acoustic distracters and of the precuneus in orienting spatial attention to the target sound. We hypothesized that the left-sided lateralization of the planum temporale activation is related to the higher specialization of the left hemisphere for the analysis of spectrotemporal sound features. Furthermore, the precuneus, a brain area known to be involved in the computation of spatial coordinates across diverse frames of reference for reaching to objects, also seems to be crucial for accurately determining the locations of auditory targets in an acoustically complex scene of multiple sound sources. The precuneus thus may not only be involved in visuo-motor processes, but may also subserve related functions in the auditory modality.
Affiliation(s)
- Ida C. Zündorf
- Division of Neuropsychology, Center of Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Jörg Lewald
- Department of Cognitive Psychology, Ruhr University Bochum, Bochum, Germany
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Hans-Otto Karnath
- Division of Neuropsychology, Center of Neurology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Department of Psychology, University of South Carolina, Columbia, South Carolina, United States of America
46
Sherwin J, Gaston J. Soldiers and marksmen under fire: monitoring performance with neural correlates of small arms fire localization. Front Hum Neurosci 2013; 7:67. [PMID: 23508091 PMCID: PMC3600534 DOI: 10.3389/fnhum.2013.00067] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2012] [Accepted: 02/18/2013] [Indexed: 11/13/2022] Open
Abstract
Important decisions in the heat of battle occur rapidly and a key aptitude of a good combat soldier is the ability to determine whether he is under fire. This rapid decision requires the soldier to make a judgment in a fraction of a second, based on a barrage of multisensory cues coming from multiple modalities. The present study uses an oddball paradigm to examine listener ability to differentiate shooter locations from audio recordings of small arms fire. More importantly, we address the neural correlates involved in this rapid decision process by employing single-trial analysis of electroencephalography (EEG). In particular, we examine small arms expert listeners as they differentiate the sounds of small arms firing events recorded at different observer positions relative to a shooter. Using signal detection theory, we find clear neural signatures related to shooter firing angle by identifying the times of neural discrimination on a trial-to-trial basis. Similar to previous results in oddball experiments, we find common windows relative to the response and the stimulus when neural activity discriminates between target stimuli (forward fire: observer 0° to firing angle) vs. standards (off-axis fire: observer 90° to firing angle). We also find, using windows of maximum discrimination, that auditory target vs. standard discrimination yields neural sources in Brodmann Area 19 (BA 19), i.e., in the visual cortex. In summary, we show that single-trial analysis of EEG yields informative scalp distributions and source current localization of discriminating activity when the small arms experts discriminate between forward and off-axis fire observer positions. Furthermore, this perceptual decision implicates brain regions involved in visual processing, even though the task is purely auditory. Finally, we utilize these techniques to quantify the level of expertise in these subjects for the chosen task, having implications for human performance monitoring in combat.
Affiliation(s)
- Jason Sherwin
- Department of Biomedical Engineering, Columbia University, New York, NY, USA; Human Research and Engineering Directorate, US Army Research Laboratory, Aberdeen, MD, USA
47
Separable networks for top-down attention to auditory non-spatial and visuospatial modalities. Neuroimage 2013; 74:77-86. [PMID: 23435206 PMCID: PMC3898942 DOI: 10.1016/j.neuroimage.2013.02.023] [Citation(s) in RCA: 49] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2012] [Revised: 01/25/2013] [Accepted: 02/02/2013] [Indexed: 12/20/2022] Open
Abstract
A central question for cognitive neuroscience is whether there is a single neural system controlling the allocation of attention. A dorsal frontoparietal network of brain regions is often proposed as a mediator of top-down attention to all sensory inputs. We used functional magnetic resonance imaging in humans to show that the cortical networks supporting top-down attention are in fact modality-specific, with distinct superior frontoparietal and frontotemporal networks for visuospatial and non-spatial auditory attention, respectively. In contrast, parts of the right middle and inferior frontal gyri showed a common response to attentional control regardless of modality, providing evidence that the amodal component of attention is restricted to the anterior cortex.
48
The right planum temporale is involved in stimulus-driven, auditory attention--evidence from transcranial magnetic stimulation. PLoS One 2013; 8:e57316. [PMID: 23437367 PMCID: PMC3577729 DOI: 10.1371/journal.pone.0057316] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2012] [Accepted: 01/21/2013] [Indexed: 11/19/2022] Open
Abstract
It is well known that the planum temporale (PT) area in the posterior temporal lobe carries out spectro-temporal analysis of auditory stimuli, which is crucial for speech, for example. There are suggestions that the PT is also involved in auditory attention, specifically in the discrimination and selection of stimuli from the left and right ear. However, direct evidence has so far been missing. To examine the role of the PT in auditory attention we asked fourteen participants to complete the Bergen Dichotic Listening Test. In this test two different consonant-vowel syllables (e.g., “ba” and “da”) are presented simultaneously, one to each ear, and participants are asked to verbally report the syllable they heard best or most clearly. Attentional selection of a syllable is thus stimulus-driven. Each participant completed the test three times: after their left and right PT (located with anatomical brain scans) had been stimulated with repetitive transcranial magnetic stimulation (rTMS), which transiently interferes with normal brain functioning in the stimulated sites, and after sham stimulation, where participants were led to believe they had been stimulated but no rTMS was applied (control). After sham stimulation the typical right ear advantage emerged, that is, participants reported relatively more right than left ear syllables, reflecting a left-hemispheric dominance for language. rTMS over the right but not the left PT significantly reduced the right ear advantage. This was the result of participants reporting more left and fewer right ear syllables after right PT stimulation, suggesting a leftward shift in stimulus selection. Taken together, our findings point to a new function of the PT in addition to auditory perception: particularly the right PT is involved in stimulus selection and (stimulus-driven) auditory attention.
49
Archila-Suerte P, Zevin J, Ramos AI, Hernandez AE. The neural basis of non-native speech perception in bilingual children. Neuroimage 2013; 67:51-63. [PMID: 23123633 PMCID: PMC5942220 DOI: 10.1016/j.neuroimage.2012.10.023] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2012] [Revised: 09/17/2012] [Accepted: 10/15/2012] [Indexed: 10/27/2022] Open
Abstract
The goal of the present study is to reveal how the neural mechanisms underlying non-native speech perception change throughout childhood. In a pre-attentive listening fMRI task, English monolingual and Spanish-English bilingual children - divided into groups of younger (6–8 yrs) and older children (9–10 yrs) - were asked to watch a silent movie while several English syllable combinations played through a pair of headphones. Two additional groups of monolingual and bilingual adults were included in the analyses. Our results show that the neural mechanisms supporting speech perception throughout development differ in monolinguals and bilinguals. While monolinguals recruit perceptual areas (i.e., superior temporal gyrus) in early and late childhood to process native speech, bilinguals recruit perceptual areas (i.e., superior temporal gyrus) in early childhood and higher-order executive areas in late childhood (i.e., bilateral middle frontal gyrus and bilateral inferior parietal lobule, among others) to process non-native speech. The findings support the Perceptual Assimilation Model and the Speech Learning Model and suggest that the neural system processes phonological information differently depending on the stage of L2 speech learning.
50
Guerreiro MJS, Murphy DR, Van Gerven PWM. Making sense of age-related distractibility: the critical role of sensory modality. Acta Psychol (Amst) 2013; 142:184-94. [PMID: 23337081 DOI: 10.1016/j.actpsy.2012.11.007] [Citation(s) in RCA: 41] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2012] [Revised: 10/26/2012] [Accepted: 11/14/2012] [Indexed: 11/29/2022] Open
Abstract
Older adults are known to have reduced inhibitory control and therefore to be more distractible than young adults. Recently, we have proposed that sensory modality plays a crucial role in age-related distractibility. In this study, we examined age differences in vulnerability to unimodal and cross-modal visual and auditory distraction. A group of 24 younger (mean age=21.7 years) and 22 older adults (mean age=65.4 years) performed visual and auditory n-back tasks while ignoring visual and auditory distraction. Whereas reaction time data indicated that both young and older adults are particularly affected by unimodal distraction, accuracy data revealed that older adults, but not younger adults, are vulnerable to cross-modal visual distraction. These results support the notion that age-related distractibility is modality dependent.
Affiliation(s)
- Maria J S Guerreiro
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands.