1
Reversible Inactivation of Ferret Auditory Cortex Impairs Spatial and Nonspatial Hearing. J Neurosci 2023; 43:749-763. PMID: 36604168; PMCID: PMC9899081; DOI: 10.1523/jneurosci.1426-22.2022.
Abstract
A key question in auditory neuroscience is to what extent brain regions are functionally specialized for processing specific sound features, such as location and identity. In auditory cortex, correlations between neural activity and sounds support both the specialization of distinct cortical subfields and the encoding of multiple sound features within individual cortical areas. However, few studies have tested the contribution of auditory cortex to hearing in multiple contexts. Here we determined the role of ferret primary auditory cortex in both spatial and nonspatial hearing by reversibly inactivating the middle ectosylvian gyrus during behavior using cooling (n = 2 females) or optogenetics (n = 1 female). Optogenetic experiments used the mDLx promoter to express Channelrhodopsin-2 in GABAergic interneurons, and we confirmed both viral expression (n = 2 females) and light-driven suppression of spiking activity in auditory cortex, recorded using Neuropixels under anesthesia (n = 465 units from 2 additional untrained female ferrets). Cortical inactivation via cooling or optogenetics impaired vowel discrimination in colocated noise. Ferrets implanted with cooling loops were tested in additional conditions that revealed no deficit when identifying vowels in clean conditions, or when the temporally coincident vowel and noise were spatially separated by 180 degrees. These animals did, however, show impaired sound localization when we inactivated the same auditory cortical region implicated in vowel discrimination in noise. Our results demonstrate that, as a brain region showing mixed selectivity for spatial and nonspatial features of sound, primary auditory cortex contributes to multiple forms of hearing.

SIGNIFICANCE STATEMENT: Neurons in primary auditory cortex are often sensitive to the location and identity of sounds. Here we inactivated auditory cortex during spatial and nonspatial listening tasks using cooling or optogenetics. Auditory cortical inactivation impaired multiple behaviors, demonstrating a role in the analysis of both sound location and identity and confirming a functional contribution of the mixed selectivity observed in neural activity. Parallel optogenetic experiments in two additional untrained ferrets linked behavior to physiology by demonstrating that expression of Channelrhodopsin-2 permitted rapid light-driven suppression of auditory cortical activity recorded under anesthesia.
2
May KR, Tomlinson BJ, Ma X, Roberts P, Walker BN. Spotlights and Soundscapes. ACM Transactions on Accessible Computing 2020. DOI: 10.1145/3378576.
Abstract
For persons with visual impairment, forming cognitive maps of unfamiliar interior spaces can be challenging. Various technical developments have converged to make it feasible, without specialized equipment, to represent a variety of useful landmark objects via spatial audio, rather than solely dispensing route information. Although such systems could be key to facilitating cognitive map formation, high-density auditory environments must be crafted carefully to avoid overloading the listener. This article recounts a set of research exercises with potential users, in which the optimization of such systems was explored. In Experiment 1, a virtual reality environment was used to rapidly prototype and adjust the auditory environment in response to participant comments. In Experiment 2, three variants of the system were evaluated in terms of their effectiveness in a real-world building. This methodology revealed a variety of optimization approaches and recommendations for designing dense mixed-reality auditory environments aimed at supporting cognitive map formation by visually impaired persons.
Affiliation(s)
- Xiaomeng Ma
- Georgia Institute of Technology, Atlanta, Georgia
3
Tissieres I, Crottaz-Herbette S, Clarke S. Implicit representation of the auditory space: contribution of the left and right hemispheres. Brain Struct Funct 2019; 224:1569-1582. PMID: 30848352; DOI: 10.1007/s00429-019-01853-5.
Abstract
Spatial cues contribute to the ability to segregate sound sources and thus facilitate their detection and recognition. This implicit use of spatial cues can be preserved in cases of cortical spatial deafness, suggesting that partially distinct neural networks underlie explicit sound localization and the implicit use of spatial cues. We addressed this issue by assessing 40 patients (20 with left and 20 with right hemispheric damage) for their ability to use auditory spatial cues implicitly, in a paradigm of spatial release from masking (SRM), and explicitly, in sound localization. The anatomical correlates of their performance were determined with voxel-based lesion-symptom mapping (VLSM). During the SRM task, the target was always presented at the centre, whereas the masker was presented at the centre or at one of two lateral positions on the right or left side. The SRM effect was absent in some but not all patients; the inability to perceive the target when the masker was at one of the lateral positions correlated with lesions of the left temporo-parieto-frontal cortex or of the right inferior parietal lobule and the underlying white matter. As previously reported, sound localization depended critically on the right parietal and opercular cortex. Thus, explicit and implicit use of spatial cues depends on at least partially distinct neural networks. Our results suggest that the implicit use may rely on the left-dominant position-linked representation of sound objects, which has been demonstrated in previous EEG and fMRI studies.
Affiliation(s)
- Isabel Tissieres
- Service de neuropsychologie et de neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV), Université de Lausanne, Lausanne, Switzerland
- Sonia Crottaz-Herbette
- Service de neuropsychologie et de neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV), Université de Lausanne, Lausanne, Switzerland
- Stephanie Clarke
- Service de neuropsychologie et de neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV), Université de Lausanne, Lausanne, Switzerland.
4
Audiovisual Lexical Retrieval Deficits Following Left Hemisphere Stroke. Brain Sci 2018; 8:206. PMID: 30486517; PMCID: PMC6316523; DOI: 10.3390/brainsci8120206.
Abstract
Binding the sensory features of what we hear and see allows the formation of a coherent percept and access to semantics. Previous work on object naming has focused on visual confrontation naming, with limited research on nonverbal auditory or multisensory processing. To investigate the neural substrates and sensory effects of lexical retrieval, we evaluated healthy adults (n = 118) and left hemisphere stroke patients (LHD, n = 42) in naming manipulable objects across auditory (sound), visual (picture), and multisensory (audiovisual) conditions. LHD patients were divided into groups with cortical, cortical–subcortical, or subcortical lesions (CO, CO–SC, SC), and specific lesion locations were investigated in a predictive model. Subjects were less accurate in auditory naming than in the other conditions. Controls demonstrated greater naming accuracy and faster reaction times across all conditions compared to LHD patients. Naming across conditions was most severely impaired in CO patients. Both auditory and visual naming accuracy were affected by temporal lobe involvement, although auditory naming was sensitive to lesions extending subcortically. Only controls demonstrated significant improvement over visual naming with the addition of auditory cues (i.e., the multisensory condition). The results support overlapping neural networks for the visual and auditory modalities related to semantic integration in lexical retrieval and temporal lobe involvement, whereas multisensory integration was affected by both occipital and temporal lobe lesions. The findings support modality specificity in naming and suggest that auditory naming is mediated by a distributed cortical–subcortical network overlapping with networks mediating spatiotemporal aspects of skilled movements that produce sound.
5
Da Costa S, Clarke S, Crottaz-Herbette S. Keeping track of sound objects in space: The contribution of early-stage auditory areas. Hear Res 2018; 366:17-31. PMID: 29643021; DOI: 10.1016/j.heares.2018.03.027.
Abstract
The influential dual-stream model of auditory processing stipulates that information pertaining to the meaning and to the position of a given sound object is processed in parallel along two distinct pathways, the ventral and dorsal auditory streams. Functional independence of the two processing pathways is well documented by the conscious experience of patients with focal hemispheric lesions. On the other hand, there is growing evidence that the meaning and the position of a sound are combined early in the processing pathway, possibly already at the level of early-stage auditory areas. Here, we investigated how early auditory areas integrate sound object meaning and space (simulated by interaural time differences) using a repetition suppression fMRI paradigm at 7 T. Subjects listened passively to environmental sounds presented in blocks of repetitions of the same sound object (same category) or different sound objects (different categories), perceived either in the left or right space (no change within block) or shifted left-to-right or right-to-left halfway through the block (change within block). Environmental sounds bilaterally activated the superior temporal gyrus, middle temporal gyrus, inferior frontal gyrus, and right precentral cortex. Repetition suppression effects were measured within bilateral early-stage auditory areas in the lateral portion of Heschl's gyrus and the posterior superior temporal plane. Left lateral early-stage areas showed significant main effects of position and change, as well as Category × Initial Position and Category × Change in Position interactions, while right lateral areas showed a main effect of category and a Category × Change in Position interaction. The combined evidence from our study and from previous studies speaks in favour of a position-linked representation of sound objects, which is independent of semantic encoding within the ventral stream and of spatial encoding within the dorsal stream. We argue for a third auditory stream, which has its origin in the lateral belt areas and tracks sound objects across space.
Affiliation(s)
- Sandra Da Costa
- Centre d'Imagerie BioMédicale (CIBM), EPFL et Universités de Lausanne et de Genève, Bâtiment CH, Station 6, CH-1015 Lausanne, Switzerland.
- Stephanie Clarke
- Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Université de Lausanne, Avenue Pierre Decker 5, CH-1011 Lausanne, Switzerland
- Sonia Crottaz-Herbette
- Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Université de Lausanne, Avenue Pierre Decker 5, CH-1011 Lausanne, Switzerland
6
For Better or Worse: The Effect of Prismatic Adaptation on Auditory Neglect. Neural Plast 2017; 2017:8721240. PMID: 29138699; PMCID: PMC5613466; DOI: 10.1155/2017/8721240.
Abstract
Patients with auditory neglect attend less to auditory stimuli on their left and/or make systematic directional errors when indicating sound positions. Rightward prismatic adaptation (R-PA) has repeatedly been shown to alleviate symptoms of visuospatial neglect and, in one study, to partially restore the spatial bias in dichotic listening. It is currently unknown whether R-PA affects only this ear-related symptom or also other aspects of auditory neglect. We investigated the effect of R-PA on left ear extinction in dichotic listening, space-related inattention assessed by diotic listening, and directional errors in auditory localization in patients with auditory neglect. The most striking effect of R-PA was the alleviation of left ear extinction in dichotic listening, which occurred in half of the patients with an initial deficit. In contrast to nonresponders, their lesions spared the right dorsal attentional system and posterior temporal cortex. The beneficial effect of R-PA on ear-related performance contrasted with detrimental effects on diotic listening and auditory localization. The former can be parsimoniously explained by the SHD-VAS model (shift in hemispheric dominance within the ventral attentional system; Clarke and Crottaz-Herbette 2016), which is based on the R-PA-induced shift of the right-dominant ventral attentional system to the left hemisphere. The negative effects in space-related tasks may be due to the complex nature of auditory space encoding at the cortical level.
7
Holmes E, Kitterick PT, Summerfield AQ. EEG activity evoked in preparation for multi-talker listening by adults and children. Hear Res 2016; 336:83-100. PMID: 27178442; DOI: 10.1016/j.heares.2016.04.007.
Abstract
Selective attention is critical for successful speech perception because speech is often encountered in the presence of other sounds, including the voices of competing talkers. Faced with the need to attend selectively, listeners perceive speech more accurately when they know characteristics of upcoming talkers before they begin to speak. However, the neural processes that underlie the preparation of selective attention for voices are not fully understood. The current experiments used electroencephalography (EEG) to investigate the time course of brain activity during preparation for an upcoming talker in young adults aged 18-27 years with normal hearing (Experiments 1 and 2) and in typically-developing children aged 7-13 years (Experiment 3). Participants reported key words spoken by a target talker when an opposite-gender distractor talker spoke simultaneously. The two talkers were presented from different spatial locations (±30° azimuth). Before the talkers began to speak, a visual cue indicated either the location (left/right) or the gender (male/female) of the target talker. Adults evoked preparatory EEG activity that started shortly after (<50 ms) the visual cue was presented and was sustained until the talkers began to speak. The location cue evoked similar preparatory activity in Experiments 1 and 2 with different samples of participants. The gender cue did not evoke preparatory activity when it predicted gender only (Experiment 1) but did evoke preparatory activity when it predicted the identity of a specific talker with greater certainty (Experiment 2). Location cues evoked significant preparatory EEG activity in children but gender cues did not. The results provide converging evidence that listeners evoke consistent preparatory brain activity for selecting a talker by their location (regardless of their gender or identity), but not by their gender alone.
Affiliation(s)
- Emma Holmes
- Department of Psychology, University of York, UK.
- Padraig T Kitterick
- NIHR Nottingham Hearing Biomedical Research Unit, UK; Division of Clinical Neuroscience, School of Medicine, University of Nottingham, UK
- A Quentin Summerfield
- Department of Psychology, University of York, UK; Hull York Medical School, University of York, UK
8
Zündorf IC, Lewald J, Karnath HO. Testing the dual-pathway model for auditory processing in human cortex. Neuroimage 2015; 124:672-681. PMID: 26388552; DOI: 10.1016/j.neuroimage.2015.09.026.
Abstract
Analogous to the visual system, the auditory system has been proposed to process information in two largely segregated streams: an anteroventral ("what") pathway mainly subserving sound identification and a posterodorsal ("where") stream mainly subserving sound localization. Despite the popularity of this assumption, the degree of separation of spatial and non-spatial auditory information processing in cortex is still under discussion. In the present study, a statistical approach was implemented to investigate potential behavioral dissociations between spatial and non-spatial auditory processing in stroke patients, and voxel-wise lesion analyses were used to uncover their neural correlates. The results generally provided support for anatomically and functionally segregated auditory networks. However, some degree of anatomo-functional overlap between "what" and "where" aspects of processing was found in the superior pars opercularis of the right inferior frontal gyrus (Brodmann area 44), suggesting the potential existence of a shared target area of both auditory streams in this region. Moreover, beyond the typically defined posterodorsal stream (i.e., posterior superior temporal gyrus, inferior parietal lobule, and superior frontal sulcus), occipital lesions were found to be associated with sound localization deficits. These results, indicating anatomically and functionally complex cortical networks for spatial and non-spatial auditory processing, are roughly consistent with the dual-pathway model of auditory processing in its original form, but argue for the need to refine and extend this widely accepted hypothesis.
Affiliation(s)
- Ida C Zündorf
- Center of Neurology, Division of Neuropsychology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Jörg Lewald
- Department of Cognitive Psychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr University Bochum, Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Hans-Otto Karnath
- Center of Neurology, Division of Neuropsychology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany; Department of Psychology, University of South Carolina, Columbia, SC 29208, USA.
9
Da Costa S, Bourquin NMP, Knebel JF, Saenz M, van der Zwaag W, Clarke S. Representation of Sound Objects within Early-Stage Auditory Areas: A Repetition Effect Study Using 7T fMRI. PLoS One 2015; 10:e0124072. PMID: 25938430; PMCID: PMC4418571; DOI: 10.1371/journal.pone.0124072.
Abstract
Environmental sounds are highly complex stimuli whose recognition depends on the interaction of top-down and bottom-up processes in the brain. Their semantic representations were shown to yield repetition suppression effects, i.e., a decrease in activity during exposure to a sound that is perceived as belonging to the same source as a preceding sound. Making use of the high spatial resolution of 7T fMRI, we investigated the representations of sound objects within early-stage auditory areas on the supratemporal plane. The primary auditory cortex was identified by means of tonotopic mapping and the non-primary areas by comparison with previous histological studies. Repeated presentations of different exemplars of the same sound source, as compared to the presentation of different sound sources, yielded significant repetition suppression effects within a subset of early-stage areas. This effect was found within the right hemisphere in primary areas A1 and R as well as in two non-primary areas on the antero-medial part of the planum temporale, and within the left hemisphere in A1 and a non-primary area on the medial part of Heschl’s gyrus. Thus several, but not all, early-stage auditory areas encode the meaning of environmental sounds.
Affiliation(s)
- Sandra Da Costa
- Service de Neuropsychologie et de Neuroréhabilitation, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
- Nathalie M.-P. Bourquin
- Service de Neuropsychologie et de Neuroréhabilitation, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
- Jean-François Knebel
- National Center of Competence in Research, SYNAPSY—The Synaptic Bases of Mental Diseases, Service de Neuropsychologie et de Neuroréhabilitation, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
- Melissa Saenz
- Laboratoire de Recherche en Neuroimagerie, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
- Wietske van der Zwaag
- Centre d’Imagerie BioMédicale, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Stephanie Clarke
- Service de Neuropsychologie et de Neuroréhabilitation, Département des Neurosciences Cliniques, Centre Hospitalier Universitaire Vaudois, Université de Lausanne, Lausanne, Switzerland
10
Clarke S, Bindschaedler C, Crottaz-Herbette S. Impact of Cognitive Neuroscience on Stroke Rehabilitation. Stroke 2015; 46:1408-1413. DOI: 10.1161/strokeaha.115.007435.
Affiliation(s)
- Stephanie Clarke
- From the Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Lausanne, Switzerland
- Claire Bindschaedler
- From the Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Lausanne, Switzerland
- Sonia Crottaz-Herbette
- From the Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Lausanne, Switzerland
11
Abstract
Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as pure word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition.
Affiliation(s)
- L Robert Slevc
- Department of Psychology, University of Maryland, College Park, MD, USA.
- Alison R Shell
- Department of Psychology, University of Maryland, College Park, MD, USA
12
Ahveninen J, Huang S, Nummenmaa A, Belliveau JW, Hung AY, Jääskeläinen IP, Rauschecker JP, Rossi S, Tiitinen H, Raij T. Evidence for distinct human auditory cortex regions for sound location versus identity processing. Nat Commun 2014; 4:2585. PMID: 24121634; PMCID: PMC3932554; DOI: 10.1038/ncomms3585.
Abstract
Neurophysiological animal models suggest that anterior auditory cortex (AC) areas process sound-identity information, whereas posterior ACs specialize in sound location processing. In humans, inconsistent neuroimaging results and insufficient causal evidence have challenged the existence of such parallel AC organization. Here we transiently inhibit bilateral anterior or posterior AC areas using MRI-guided paired-pulse transcranial magnetic stimulation (TMS) while subjects listen to Reference/Probe sound pairs and perform either sound location or identity discrimination tasks. The targeting of TMS pulses, delivered 55–145 ms after Probes, is confirmed with individual-level cortical electric-field estimates. Our data show that TMS to posterior AC regions delays reaction times (RT) significantly more during sound location than identity discrimination, whereas TMS to anterior AC regions delays RTs significantly more during sound identity than location discrimination. This double dissociation provides direct causal support for parallel processing of sound identity features in anterior AC and sound location in posterior AC.
Affiliation(s)
- Jyrki Ahveninen
- Harvard Medical School-Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Building 149, 13th Street, Charlestown, Massachusetts 02129, USA
13
Auditory-cortex short-term plasticity induced by selective attention. Neural Plast 2014; 2014:216731. PMID: 24551458; PMCID: PMC3914570; DOI: 10.1155/2014/216731.
Abstract
The ability to concentrate on relevant sounds in the acoustic environment is crucial for everyday function and communication. Converging lines of evidence suggest that transient functional changes in auditory-cortex neurons, “short-term plasticity”, might underlie this fundamental function. Under conditions of strongly focused attention, enhanced processing of attended sounds can take place at very early latencies (~50 ms from sound onset) in primary auditory cortex and possibly even earlier in subcortical structures. More robust selective-attention short-term plasticity is manifested as modulation of responses peaking at ~100 ms from sound onset in functionally specialized nonprimary auditory-cortical areas, by way of stimulus-specific reshaping of neuronal receptive fields that supports filtering of selectively attended sound features from task-irrelevant ones. Such effects have been shown to emerge within seconds of a shift in attentional focus. There are findings suggesting that the reshaping of neuronal receptive fields is even stronger at longer auditory-cortex response latencies (~300 ms from sound onset). These longer-latency short-term plasticity effects seem to build up more gradually, within tens of seconds after shifting the focus of attention. Importantly, some of the auditory-cortical short-term plasticity effects observed during selective attention predict enhancements in behaviorally measured sound discrimination performance.
14
Du Y, He Y, Arnott SR, Ross B, Wu X, Li L, Alain C. Rapid tuning of auditory "what" and "where" pathways by training. Cereb Cortex 2013; 25:496-506. PMID: 24042339; DOI: 10.1093/cercor/bht251.
Abstract
Behavioral improvement within the first hour of training is commonly explained as procedural learning (i.e., strategy changes resulting from task familiarization). However, it may additionally reflect a rapid adjustment of the perceptual and/or attentional system in a goal-directed task. In support of this latter hypothesis, we show feature-specific gains in performance for groups of participants briefly trained to use either a spectral or spatial difference between 2 vowels presented simultaneously during a vowel identification task. In both groups, the neuromagnetic activity measured during the vowel identification task following training revealed source activity in auditory cortices, prefrontal, inferior parietal, and motor areas. More importantly, the contrast between the 2 groups revealed a striking double dissociation in which listeners trained on spectral or spatial cues showed higher source activity in ventral ("what") and dorsal ("where") brain areas, respectively. These feature-specific effects indicate that brief training can implicitly bias top-down processing to a trained acoustic cue and induce a rapid recalibration of the ventral and dorsal auditory streams during speech segregation and identification.
Affiliation(s)
- Yi Du
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1 Department of Psychology, Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China and
- Yu He
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1
- Stephen R Arnott
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1
- Bernhard Ross
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1
| | - Xihong Wu
- Department of Psychology, Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China and
| | - Liang Li
- Department of Psychology, Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China and
| | - Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1 Department of Psychology, University of Toronto, Ontario, Canada M8V 2S4
| |
15
Ahveninen J, Kopčo N, Jääskeläinen IP. Psychophysics and neuronal bases of sound localization in humans. Hear Res 2013; 307:86-97. [PMID: 23886698 DOI: 10.1016/j.heares.2013.07.008] [Citation(s) in RCA: 56] [Impact Index Per Article: 5.1] [Received: 05/15/2013] [Revised: 07/02/2013] [Accepted: 07/10/2013] [Indexed: 10/26/2022]
Abstract
Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question of how different acoustic features supporting spatial hearing are represented in the central nervous system remains open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory "where" pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization, and some of the competing models of representation of auditory space in humans. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Affiliation(s)
- Jyrki Ahveninen
- Harvard Medical School - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA.
16
Poremba A, Bigelow J, Rossi B. Processing of communication sounds: contributions of learning, memory, and experience. Hear Res 2013; 305:31-44. [PMID: 23792078 DOI: 10.1016/j.heares.2013.06.005] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.9] [Received: 12/19/2012] [Revised: 05/09/2013] [Accepted: 06/10/2013] [Indexed: 11/17/2022]
Abstract
Abundant evidence from both field and lab studies has established that conspecific vocalizations (CVs) are of critical ecological significance for a wide variety of species, including humans, non-human primates, rodents, and other mammals and birds. Correspondingly, a number of experiments have demonstrated behavioral processing advantages for CVs, such as in discrimination and memory tasks. Further, a wide range of experiments have described brain regions in many species that appear to be specialized for processing CVs. For example, several neural regions have been described in both mammals and birds wherein greater neural responses are elicited by CVs than by comparison stimuli such as heterospecific vocalizations, nonvocal complex sounds, and artificial stimuli. These observations raise the question of whether these regions reflect domain-specific neural mechanisms dedicated to processing CVs, or alternatively, if these regions reflect domain-general neural mechanisms for representing complex sounds of learned significance. Inasmuch as CVs can be viewed as complex combinations of basic spectrotemporal features, the plausibility of the latter position is supported by a large body of literature describing modulated cortical and subcortical representation of a variety of acoustic features that have been experimentally associated with stimuli of natural behavioral significance (such as food rewards). Herein, we review a relatively small body of existing literature describing the roles of experience, learning, and memory in the emergence of species-typical neural representations of CVs and auditory system plasticity. In both songbirds and mammals, manipulations of auditory experience as well as specific learning paradigms are shown to modulate neural responses evoked by CVs, either in terms of overall firing rate or temporal firing patterns. 
In some cases, CV-sensitive neural regions gradually acquire representation of non-CV stimuli with which subjects have training and experience. These results parallel literature in humans describing modulation of responses in face-sensitive neural regions through learning and experience. Thus, although many questions remain, the available evidence is consistent with the notion that CVs may acquire distinct neural representation through domain-general mechanisms for representing complex auditory objects that are of learned importance to the animal. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".
Affiliation(s)
- Amy Poremba
- University of Iowa, Dept. of Psychology, Div. Behavioral & Cognitive Neuroscience, E11 SSH, Iowa City, IA 52242, USA; University of Iowa, Neuroscience Program, Iowa City, IA 52242, USA.
17
Sangai NP, Verma RJ, Trivedi MH. Testing the efficacy of quercetin in mitigating bisphenol A toxicity in liver and kidney of mice. Toxicol Ind Health 2012; 30:581-97. [PMID: 23024108 DOI: 10.1177/0748233712457438] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.1] [Indexed: 01/19/2023]
Abstract
Quercetin (3,5,7,3',4'-pentahydroxy flavone) is a potent antioxidant found in various fruits and vegetables. The present investigation was an attempt to evaluate the mitigatory effect of quercetin on the damage caused by bisphenol A (BPA; 2,2-bis (4-hydroxyphenyl) propane), a well-known xenoestrogen, on liver and kidney of mice. Swiss strain adult male albino mice were orally administered with 120 and 240 mg/kg body weight (bw)/day BPA with or without quercetin (60 mg/kg bw/day) for 30 days. On the completion of the treatment period, animals were killed; organs were isolated and used for the study. Results revealed that oral administration of BPA for 30 days caused a significant and dose-dependent decrease in body weight. Absolute and relative organ weights, total lipid and cholesterol contents were significantly increased in liver and kidney of mice when compared with vehicle control. BPA treatment also caused, when compared with vehicle control, statistically significant reductions in the activities of catalase, superoxide dismutase, glutathione peroxidase, glutathione reductase and glutathione-S-transferase as well as in glutathione and total ascorbic acid contents; however, a significant increase was found in malondialdehyde (MDA) levels. Histopathological studies revealed hepatocellular necrosis, cytoplasmic vacuolization and decreased hepatocellular compactness in the liver, as well as distortion of the tubules, increased vacuolization, necrosis and disorganization of the glomerulus in the kidney of BPA-treated mice. All these effects were dose-dependent. Co-treatment with quercetin (60 mg/kg bw) and BPA (low or high dose) alleviated the changes in body weight as well as the absolute and relative organ weights of mice. It also ameliorated the oxidative stress created by BPA by lowering MDA levels and by increasing enzymatic and nonenzymatic antioxidants, and restored the normal histoarchitecture of liver and kidney of mice.
The present results revealed that graded doses of BPA caused oxidative damage in liver and kidney of mice, which is mitigated by quercetin.
Affiliation(s)
- Neha P Sangai
- Department of Zoology, University School of Sciences, Gujarat University, Ahmedabad, India
- Ramtej J Verma
- Department of Zoology, University School of Sciences, Gujarat University, Ahmedabad, India
- Mrugesh H Trivedi
- Department of Earth and Environmental Sciences, K.S.K.V. Kachch University, Mundra Road, Bhuj, India
18
Manuel AL, Radman N, Mesot D, Chouiter L, Clarke S, Annoni JM, Spierer L. Inter- and Intrahemispheric Dissociations in Ideomotor Apraxia: A Large-Scale Lesion–Symptom Mapping Study in Subacute Brain-Damaged Patients. Cereb Cortex 2012; 23:2781-9. [DOI: 10.1093/cercor/bhs280] [Citation(s) in RCA: 50] [Impact Index Per Article: 4.2] [Indexed: 11/14/2022]
19
Abstract
Neuronal mechanisms of auditory distance perception are poorly understood, largely because contributions of intensity and distance processing are difficult to differentiate. Typically, the received intensity increases when sound sources approach us. However, we can also distinguish between soft-but-nearby and loud-but-distant sounds, indicating that distance processing can also be based on intensity-independent cues. Here, we combined behavioral experiments, fMRI measurements, and computational analyses to identify the neural representation of distance independent of intensity. In a virtual reverberant environment, we simulated sound sources at varying distances (15-100 cm) along the right-side interaural axis. Our acoustic analysis suggested that, of the individual intensity-independent depth cues available for these stimuli, direct-to-reverberant ratio (D/R) is more reliable and robust than interaural level difference (ILD). However, on the basis of our behavioral results, subjects' discrimination performance was more consistent with complex intensity-independent distance representations, combining both available cues, than with representations on the basis of either D/R or ILD individually. fMRI activations to sounds varying in distance (containing all cues, including intensity), compared with activations to sounds varying in intensity only, were significantly increased in the planum temporale and posterior superior temporal gyrus contralateral to the direction of stimulation. This fMRI result suggests that neurons in posterior nonprimary auditory cortices, in or near the areas processing other auditory spatial features, are sensitive to intensity-independent sound properties relevant for auditory distance perception.
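The direct-to-reverberant ratio (D/R) cue discussed in this abstract has a simple standard definition that can be made concrete with a small numerical sketch. This is an illustrative convention, not the study's analysis pipeline: the function name and the direct-sound window length (`direct_ms`) are assumptions.

```python
import numpy as np

def direct_to_reverberant_ratio(ir, fs, direct_ms=2.5):
    """Direct-to-reverberant energy ratio (D/R, in dB) of a room impulse
    response: energy in a short window around the main peak (the direct
    sound) versus energy in the remaining tail (the reverberation)."""
    ir = np.asarray(ir, dtype=float)
    peak = int(np.argmax(np.abs(ir)))
    half = int(direct_ms * 1e-3 * fs)  # window half-width in samples
    direct = np.sum(ir[max(0, peak - half):peak + half + 1] ** 2)
    reverb = np.sum(ir[peak + half + 1:] ** 2)
    return 10.0 * np.log10(direct / reverb)

# Toy impulse response: a unit direct impulse plus a weak late tail.
fs = 48000
ir = np.zeros(4800)
ir[100] = 1.0        # direct sound
ir[400:1400] = 0.01  # reverberant tail, outside the direct-sound window
```

Because reverberant energy in a room stays roughly constant while direct energy falls with source distance, D/R decreases as a source moves away, which is what makes it an intensity-independent distance cue.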
20
Duffour-Nikolov C, Tardif E, Maeder P, Thiran AB, Bloch J, Frischknecht R, Clarke S. Auditory spatial deficits following hemispheric lesions: dissociation of explicit and implicit processing. Neuropsychol Rehabil 2012; 22:674-96. [PMID: 22672110 DOI: 10.1080/09602011.2012.686818] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.2] [Indexed: 10/28/2022]
Abstract
Auditory spatial deficits occur frequently after hemispheric damage; a previous case report suggested that the explicit awareness of sound positions, as in sound localisation, can be impaired while the implicit use of auditory cues for the segregation of sound objects in noisy environments remains preserved. By assessing systematically patients with a first hemispheric lesion, we have shown that (1) explicit and/or implicit use can be disturbed; (2) impaired explicit vs. preserved implicit use dissociations occur rather frequently; and (3) different types of sound localisation deficits can be associated with preserved implicit use. Conceptually, the dissociation between the explicit and implicit use may reflect the dual-stream dichotomy of auditory processing. Our results speak in favour of systematic assessments of auditory spatial functions in clinical settings, especially when adaptation to auditory environment is at stake. Further, systematic studies are needed to link deficits of explicit vs. implicit use to disability in everyday activities, to design appropriate rehabilitation strategies, and to ascertain how far the explicit and implicit use of spatial cues can be retrained following brain damage.
21
Nodal FR, Bajo VM, King AJ. Plasticity of spatial hearing: behavioural effects of cortical inactivation. J Physiol 2012; 590:3965-86. [PMID: 22547635 PMCID: PMC3464400 DOI: 10.1113/jphysiol.2011.222828] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.6] [Indexed: 11/16/2022]
Abstract
The contribution of auditory cortex to spatial information processing was explored behaviourally in adult ferrets by reversibly deactivating different cortical areas by subdural placement of a polymer that released the GABAA agonist muscimol over a period of weeks. The spatial extent and time course of cortical inactivation were determined electrophysiologically. Muscimol-Elvax was placed bilaterally over the anterior (AEG), middle (MEG) or posterior ectosylvian gyrus (PEG), so that different regions of the auditory cortex could be deactivated in different cases. Sound localization accuracy in the horizontal plane was assessed by measuring both the initial head orienting and approach-to-target responses made by the animals. Head orienting behaviour was unaffected by silencing any region of the auditory cortex, whereas the accuracy of approach-to-target responses to brief sounds (40 ms noise bursts) was reduced by muscimol-Elvax but not by drug-free implants. Modest but significant localization impairments were observed after deactivating the MEG, AEG or PEG, although the largest deficits were produced in animals in which the MEG, where the primary auditory fields are located, was silenced. We also examined experience-induced spatial plasticity by reversibly plugging one ear. In control animals, localization accuracy for both approach-to-target and head orienting responses was initially impaired by monaural occlusion, but recovered with training over the next few days. Deactivating any part of the auditory cortex resulted in less complete recovery than in controls, with the largest deficits observed after silencing the higher-level cortical areas in the AEG and PEG. Although suggesting that each region of auditory cortex contributes to spatial learning, differences in the localization deficits and degree of adaptation between groups imply a regional specialization in the processing of spatial information across the auditory cortex.
Affiliation(s)
- Fernando R Nodal
- Department of Physiology, Anatomy and Genetics, Sherrington Building, University of Oxford, Parks Road, Oxford OX1 3PT, UK.
22
Tillein J, Hubka P, Kral A. Sensitivity to interaural time differences with binaural implants: is it in the brain? Cochlear Implants Int 2011; 12 Suppl 1:S44-50. [PMID: 21756472 DOI: 10.1179/146701011x13001035753344] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Indexed: 10/31/2022]
23
Bizley JK, King AJ. What Can Multisensory Processing Tell Us about the Functional Organization of Auditory Cortex? Front Neurosci 2011. [DOI: 10.1201/b11092-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Indexed: 11/11/2022]
24
Bizley JK, King AJ. What Can Multisensory Processing Tell Us about the Functional Organization of Auditory Cortex? Front Neurosci 2011. [DOI: 10.1201/9781439812174-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Indexed: 11/11/2022]
25
Alain C, Shen D, Yu H, Grady C. Dissociable memory- and response-related activity in parietal cortex during auditory spatial working memory. Front Psychol 2010; 1:202. [PMID: 21833258 PMCID: PMC3153808 DOI: 10.3389/fpsyg.2010.00202] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.7] [Received: 06/15/2010] [Accepted: 10/27/2010] [Indexed: 11/16/2022]
Abstract
Attending and responding to sound location generates increased activity in parietal cortex which may index auditory spatial working memory and/or goal-directed action. Here, we used an n-back task (Experiment 1) and an adaptation paradigm (Experiment 2) to distinguish memory-related activity from that associated with goal-directed action. In Experiment 1, participants indicated, in separate blocks of trials, whether the incoming stimulus was presented at the same location as in the previous trial (1-back) or two trials ago (2-back). Prior to a block of trials, participants were told to use their left or right index finger. Accuracy and reaction times were worse for the 2-back than for the 1-back condition. The analysis of functional magnetic resonance imaging data revealed greater sustained task-related activity in the inferior parietal lobule (IPL) and superior frontal sulcus during 2-back than 1-back after accounting for response-related activity elicited by the targets. Target detection and response execution were also associated with enhanced activity in the IPL bilaterally, though the activation was anterior to that associated with sustained task-related activity. In Experiment 2, we used an event-related design in which participants listened (no response required) to trials that comprised four sounds presented either at the same location or at four different locations. We found larger IPL activation for changes in sound location than for sounds presented at the same location. The IPL activation overlapped with that observed during the auditory spatial working memory task. Together, these results provide converging evidence supporting the role of parietal cortex in auditory spatial working memory which can be dissociated from response selection and execution.
Affiliation(s)
- Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care Toronto, ON, Canada
26
Bizley JK, Walker KMM. Sensitivity and selectivity of neurons in auditory cortex to the pitch, timbre, and location of sounds. Neuroscientist 2010; 16:453-69. [PMID: 20530254 DOI: 10.1177/1073858410371009] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.2] [Indexed: 11/15/2022]
Abstract
We are able to rapidly recognize and localize the many sounds in our environment. We can describe any of these sounds in terms of various independent "features" such as their loudness, pitch, or position in space. However, we still know surprisingly little about how neurons in the auditory brain, specifically the auditory cortex, might form representations of these perceptual characteristics from the information that the ear provides about sound acoustics. In this article, the authors examine evidence that the auditory cortex is necessary for processing the pitch, timbre, and location of sounds, and document how neurons across multiple auditory cortical fields might represent these as trains of action potentials. They conclude by asking whether neurons in different regions of the auditory cortex might be not simply sensitive to each of these three sound features but selective for one of them. The few studies that have examined neural sensitivity to multiple sound attributes provide only limited support for neural selectivity within auditory cortex. Explaining the neural basis of feature invariance thus remains one of the major challenges for sensory neuroscience in reaching the ultimate goal of understanding how neural firing patterns in the brain give rise to perception.
Affiliation(s)
- Jennifer K Bizley
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom.
27
Robertson JVG, Hoellinger T, Lindberg P, Bensmail D, Hanneton S, Roby-Brami A. Effect of auditory feedback differs according to side of hemiparesis: a comparative pilot study. J Neuroeng Rehabil 2009; 6:45. [PMID: 20017935 PMCID: PMC2804659 DOI: 10.1186/1743-0003-6-45] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.5] [Received: 02/24/2009] [Accepted: 12/17/2009] [Indexed: 11/21/2022]
Abstract
Background Following stroke, patients frequently demonstrate loss of motor control and function and altered kinematic parameters of reaching movements. Feedback is an essential component of rehabilitation and auditory feedback of kinematic parameters may be a useful tool for rehabilitation of reaching movements at the impairment level. The aim of this study was to investigate the effect of 2 types of auditory feedback on the kinematics of reaching movements in hemiparetic stroke patients and to compare differences between patients with right (RHD) and left hemisphere damage (LHD). Methods 10 healthy controls, 8 stroke patients with LHD and 8 with RHD were included. Patient groups had similar levels of upper limb function. Two types of auditory feedback (spatial and simple) were developed and provided online during reaching movements to 9 targets in the workspace. Kinematics of the upper limb were recorded with an electromagnetic system. Kinematics were compared between groups (Mann Whitney test) and the effect of auditory feedback on kinematics was tested within each patient group (Friedman test). Results In the patient groups, peak hand velocity was lower, the number of velocity peaks was higher and movements were more curved than in the healthy group. Despite having a similar clinical level, kinematics differed between LHD and RHD groups. Peak velocity was similar but LHD patients had fewer velocity peaks and less curved movements than RHD patients. The addition of auditory feedback improved the curvature index in patients with RHD but worsened peak velocity, the number of velocity peaks and the curvature index in LHD patients. No difference between types of feedback was found in either patient group. Conclusion In stroke patients, side of lesion should be considered when examining arm reaching kinematics. Further studies are necessary to evaluate differences in responses to auditory feedback between patients with lesions in opposite cerebral hemispheres.
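Two of the kinematic measures reported in this abstract, the curvature index and the number of velocity peaks, have common textbook definitions; a minimal sketch under those conventions (path length over straight-line distance, and local maxima of the speed profile), which may differ in detail from the paper's exact computations:

```python
import numpy as np

def curvature_index(path):
    """Ratio of travelled path length to the straight-line distance between
    start and end points; 1.0 corresponds to a perfectly straight reach."""
    path = np.asarray(path, dtype=float)
    segment_lengths = np.linalg.norm(np.diff(path, axis=0), axis=1)
    straight = np.linalg.norm(path[-1] - path[0])
    return segment_lengths.sum() / straight

def count_velocity_peaks(speed):
    """Number of local maxima in a speed profile; smooth, well-controlled
    reaches typically show a single bell-shaped velocity peak."""
    s = np.asarray(speed, dtype=float)
    return int(np.sum((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:])))
```

On these definitions, more velocity peaks and a higher curvature index both indicate more fragmented, less direct reaching, which is why the two measures move together in the patient groups.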
Affiliation(s)
- Johanna V G Robertson
- Laboratoire de Neurophysique et Physiologie, Université Paris Descartes, CNRS UMR 8119, Paris, France.
28
Tillein J, Hubka P, Syed E, Hartmann R, Engel A, Kral A. Cortical Representation of Interaural Time Difference in Congenital Deafness. Cereb Cortex 2009; 20:492-506. [DOI: 10.1093/cercor/bhp222] [Citation(s) in RCA: 60] [Impact Index Per Article: 4.0] [Indexed: 11/12/2022]
29
Goll JC, Crutch SJ, Loo JHY, Rohrer JD, Frost C, Bamiou DE, Warren JD. Non-verbal sound processing in the primary progressive aphasias. Brain 2009; 133:272-85. [PMID: 19797352 PMCID: PMC2801322 DOI: 10.1093/brain/awp235] [Citation(s) in RCA: 97] [Impact Index Per Article: 6.5] [Indexed: 12/01/2022]
Abstract
Little is known about the processing of non-verbal sounds in the primary progressive aphasias. Here, we investigated the processing of complex non-verbal sounds in detail, in a consecutive series of 20 patients with primary progressive aphasia (12 with progressive non-fluent aphasia; 8 with semantic dementia). We designed a novel experimental neuropsychological battery to probe complex sound processing at early perceptual, apperceptive and semantic levels, using within-modality response procedures that minimized other cognitive demands and matching tests in the visual modality. Patients with primary progressive aphasia had deficits of non-verbal sound analysis compared with healthy age-matched individuals. Deficits of auditory early perceptual analysis were more common in progressive non-fluent aphasia, deficits of apperceptive processing occurred in both progressive non-fluent aphasia and semantic dementia, and deficits of semantic processing also occurred in both syndromes, but were relatively modality specific in progressive non-fluent aphasia and part of a more severe generic semantic deficit in semantic dementia. Patients with progressive non-fluent aphasia were more likely to show severe auditory than visual deficits as compared to patients with semantic dementia. These findings argue for the existence of core disorders of complex non-verbal sound perception and recognition in primary progressive aphasia and specific disorders at perceptual and semantic levels of cortical auditory processing in progressive non-fluent aphasia and semantic dementia, respectively.
Affiliation(s)
- Johanna C Goll
- Dementia Research Centre, Institute of Neurology, University College London, Queen Square, London WC1N 3BG, UK
30
Staeren N, Renvall H, De Martino F, Goebel R, Formisano E. Sound categories are represented as distributed patterns in the human auditory cortex. Curr Biol 2009; 19:498-502. [PMID: 19268594 DOI: 10.1016/j.cub.2009.01.066] [Citation(s) in RCA: 105] [Impact Index Per Article: 7.0] [Received: 07/11/2008] [Revised: 12/12/2008] [Accepted: 01/27/2009] [Indexed: 10/21/2022]
Abstract
The ability to recognize sounds allows humans and animals to efficiently detect behaviorally relevant events, even in the absence of visual information. Sound recognition in the human brain has been assumed to proceed through several functionally specialized areas, culminating in cortical modules where category-specific processing is carried out. In the present high-resolution fMRI experiment, we challenged this model by using well-controlled natural auditory stimuli and by employing an advanced analysis strategy based on an iterative machine-learning algorithm that allows modeling of spatially distributed, as well as localized, response patterns. Sounds of cats, female singers, acoustic guitars, and tones were controlled for their time-varying spectral characteristics and presented to subjects at three different pitch levels. Sound category information, which was not detectable with conventional contrast-based analysis methods, could be detected with multivoxel pattern analyses and attributed to spatially distributed areas over the supratemporal cortices. A more localized pattern was observed for processing of pitch, lateral to primary auditory areas. Our findings indicate that distributed neuronal populations within the human auditory cortices, including areas conventionally associated with lower-level auditory processing, entail categorical representations of sounds beyond their physical properties.
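The multivoxel pattern analysis logic described here, decoding category from the joint pattern across voxels rather than from any single voxel's contrast, can be sketched with a toy nearest-centroid decoder. This stands in for the iterative machine-learning algorithm the authors used, whose details are not reproduced here; the function name and the synthetic data are illustrative.

```python
import numpy as np

def nearest_centroid_decode(train_X, train_y, test_X):
    """Classify each held-out activation pattern (row of test_X) by the
    nearest class-mean training pattern, i.e. decode from the distributed
    multivoxel pattern rather than from any single voxel."""
    train_X = np.asarray(train_X, dtype=float)
    test_X = np.asarray(test_X, dtype=float)
    train_y = np.asarray(train_y)
    classes = np.unique(train_y)
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(test_X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]

# Synthetic two-"voxel" activation patterns for two sound categories.
train_X = [[1.0, 0.0], [1.1, 0.1], [0.0, 1.0], [0.1, 1.1]]
train_y = [0, 0, 1, 1]
test_X = [[0.9, 0.2], [0.2, 0.9]]
```

Real MVPA pipelines use many voxels, cross-validation, and stronger classifiers, but the decoding principle, comparing whole activation patterns rather than voxelwise contrasts, is the same.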
Affiliation(s)
- Noël Staeren
- Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, University of Maastricht, 6200 MD Maastricht, The Netherlands
31
Alain C, McDonald KL, Kovacevic N, McIntosh AR. Spatiotemporal Analysis of Auditory "What" and "Where" Working Memory. Cereb Cortex 2008; 19:305-14. [PMID: 18534993 DOI: 10.1093/cercor/bhn082] [Citation(s) in RCA: 37] [Impact Index Per Article: 2.3] [Indexed: 11/14/2022]
Affiliation(s)
- Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON M6A 2E1, Canada.
32
Interactions between auditory 'what' and 'where' pathways revealed by enhanced near-threshold discrimination of frequency and position. Neuropsychologia 2007; 46:958-66. [PMID: 18191423 DOI: 10.1016/j.neuropsychologia.2007.11.016] [Citation(s) in RCA: 16] [Impact Index Per Article: 0.9] [Received: 06/25/2007] [Revised: 11/09/2007] [Accepted: 11/23/2007] [Indexed: 11/21/2022]
Abstract
Partially segregated neuronal pathways ("what" and "where" pathways, respectively) are thought to mediate sound recognition and localization. Less studied are interactions between these pathways. In two experiments, we investigated whether near-threshold pitch discrimination sensitivity (d') is altered by supra-threshold task-irrelevant position differences and likewise whether near-threshold position discrimination sensitivity is altered by supra-threshold task-irrelevant pitch differences. Each experiment followed a 2 x 2 within-subjects design regarding changes/no change in the task-relevant and task-irrelevant stimulus dimensions. In Experiment 1, subjects discriminated between 750 Hz and 752 Hz pure tones, and d' for this near-threshold pitch change significantly increased by a factor of 1.09 when accompanied by a task-irrelevant position change of 65 micros interaural time difference (ITD). No response bias was induced by the task-irrelevant position change. In Experiment 2, subjects discriminated between 385 micros and 431 micros ITDs, and d' for this near-threshold position change significantly increased by a factor of 0.73 when accompanied by task-irrelevant pitch changes (6 Hz). In contrast to Experiment 1, task-irrelevant pitch changes induced a response criterion bias toward responding that the two stimuli differed. The collective results are indicative of facilitative interactions between "what" and "where" pathways. By demonstrating how these pathways may cooperate under impoverished listening conditions, our results bear implications for possible neuro-rehabilitation strategies. We discuss our results in terms of the dual-pathway model of auditory processing.
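The discrimination sensitivity measure d' used in both experiments comes from equal-variance signal detection theory: d' = z(hit rate) − z(false-alarm rate), where z is the inverse of the standard normal CDF. A minimal sketch of the textbook formula (the function name is ours, not the authors'):

```python
from statistics import NormalDist

def dprime(hit_rate, fa_rate):
    """Equal-variance signal detection theory sensitivity index:
    d' = z(hit rate) - z(false-alarm rate), with z the inverse of
    the standard normal cumulative distribution function."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

Because d' combines hits and false alarms, it separates sensitivity from response bias, which is what lets the authors report a criterion shift in Experiment 2 independently of the change in d'.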
33
Deouell LY, Heller AS, Malach R, D'Esposito M, Knight RT. Cerebral responses to change in spatial location of unattended sounds. Neuron 2007; 55:985-96. [PMID: 17880900 DOI: 10.1016/j.neuron.2007.08.019] [Citation(s) in RCA: 86] [Impact Index Per Article: 5.1] [Received: 11/11/2006] [Revised: 07/01/2007] [Accepted: 08/20/2007] [Indexed: 11/20/2022]
Abstract
The neural basis of spatial processing in the auditory cortex has been controversial. Human fMRI studies suggest that a part of the planum temporale (PT) is involved in auditory spatial processing, but it was recently argued that this region is active only when the task requires voluntary spatial localization. If this is the case, then this region cannot harbor an ongoing spatial representation of the acoustic environment. In contrast, we show in three fMRI experiments that a region in the human medial PT is sensitive to background auditory spatial changes, even when subjects are not engaged in a spatial localization task, and in fact attend the visual modality. During such times, this area responded to rare location shifts, and even more so when spatial variation increased, consistent with spatially selective adaptation. Thus, acoustic space is represented in the human PT even when sound processing is not required by the ongoing task.
Affiliation(s)
- Leon Y Deouell
- Department of Psychology and the Interdisciplinary Center for Neural Computation, The Hebrew University of Jerusalem, Jerusalem 91905, Israel.
34. Lehnert G, Zimmer HD. Modality and domain specific components in auditory and visual working memory tasks. Cogn Process 2007; 9:53-61. PMID: 17891428. DOI: 10.1007/s10339-007-0187-6.
Abstract
In the tripartite model of working memory (WM), it is postulated that a dedicated subsystem, the visuo-spatial sketchpad (VSSP), processes non-verbal content. Based on behavioral and neurophysiological findings, the VSSP was later subdivided into visual object and visual spatial processing, the former representing objects' appearance and the latter spatial information. This distinction is well supported. However, a challenge to this model is the question of how spatial information from non-visual sensory modalities, for example audition, is processed. Only a few studies so far have directly compared visual and auditory spatial WM. They suggest that the distinction of two processing domains (one for object and one for spatial information) also holds true for auditory WM, but that only some of the processes are modality specific. We propose that processing in the object domain (the item's appearance) is modality specific, while spatial WM, as well as object-location binding, relies on modality-general processes.
Affiliation(s)
- Günther Lehnert
- Brain and Cognition Unit, Department of Psychology, Saarland University, Saarbrücken, Germany.
35. Ischebeck AK, Friederici AD, Alter K. Processing prosodic boundaries in natural and hummed speech: an fMRI study. Cereb Cortex 2007; 18:541-52. PMID: 17591598. DOI: 10.1093/cercor/bhm083.
Abstract
Speech contains prosodic cues such as pauses between different phrases of a sentence. These intonational phrase boundaries (IPBs) elicit a specific component in event-related brain potential studies, the so-called closure positive shift. The aim of the present functional magnetic resonance imaging study is to identify the neural correlates of this prosody-related component in sentences containing segmental and prosodic information (natural speech) and hummed sentences only containing prosodic information. Sentences with 2 IPBs both in normal and hummed speech activated the middle superior temporal gyrus, the rolandic operculum, and Heschl's gyrus more strongly than sentences with 1 IPB. The results from a region of interest analysis of auditory cortex and auditory association areas suggest that the posterior rolandic operculum, in particular, supports the processing of prosodic information. A comparison of natural speech and hummed sentences revealed a number of left-hemispheric areas within the temporal lobe as well as in the frontal and parietal lobe that were activated more strongly for natural speech than for hummed sentences. These areas constitute the neural network for the processing of natural speech. The finding that no area was activated more strongly for hummed sentences compared with natural speech suggests that prosody is an integrated part of natural speech.
Affiliation(s)
- Anja K Ischebeck
- Clinical Department of Neurology, Innsbruck Medical University, 6020 Innsbruck, Austria.
36. Meyer M, Baumann S, Marchina S, Jancke L. Hemodynamic responses in human multisensory and auditory association cortex to purely visual stimulation. BMC Neurosci 2007; 8:14. PMID: 17284307. PMCID: PMC1800857. DOI: 10.1186/1471-2202-8-14.
Abstract
Background: Recent findings of a tight coupling between visual and auditory association cortices during multisensory perception in monkeys and humans raise the question whether consistent paired presentation of simple visual and auditory stimuli prompts conditioned responses in unimodal auditory regions or multimodal association cortex once visual stimuli are presented in isolation in a post-conditioning run. To address this issue, fifteen healthy participants took part in a "silent" sparse temporal event-related fMRI study. In the first (visual control) habituation phase they were presented with brief red flashing visual stimuli. In the second (auditory control) habituation phase they heard brief telephone ringing. In the third (conditioning) phase we presented the visual stimulus (CS) coincident with the auditory stimulus (UCS). In the fourth phase participants either viewed flashes paired with the auditory stimulus (maintenance, CS-) or viewed the visual stimulus in isolation (extinction, CS+) according to a 5:10 partial reinforcement schedule. The participants had no task other than attending to the stimuli and indicating the end of each trial by pressing a button.
Results: During unpaired visual presentations (preceding and following the paired presentation) we observed significant brain responses beyond primary visual cortex in the bilateral posterior auditory association cortex (planum temporale, planum parietale) and in the right superior temporal sulcus, whereas the primary auditory regions were not involved. By contrast, the activity in auditory core regions was markedly larger when participants were presented with auditory stimuli.
Conclusion: These results demonstrate involvement of multisensory and auditory association areas in the perception of unimodal visual stimulation, which may reflect the instantaneous forming of multisensory associations and cannot be attributed to sensation of an auditory event. More importantly, we show that brain responses in multisensory cortices do not necessarily emerge from associative learning but can even occur spontaneously in response to simple visual stimulation.
Affiliation(s)
- Martin Meyer
- Institute of Neuroradiology, University Hospital of Zurich, Switzerland
- Department of Neuropsychology, University of Zurich, Switzerland
- Simon Baumann
- Department of Neuropsychology, University of Zurich, Switzerland
- School of Neurology, Neurobiology and Psychiatry, Newcastle University, UK
- School of Psychology, Brain & Behaviour, Newcastle University, UK
- Sarah Marchina
- Department of Neuropsychology, University of Zurich, Switzerland
- Lutz Jancke
- Department of Neuropsychology, University of Zurich, Switzerland
37. Meyer M, Baumann S, Jancke L. Electrical brain imaging reveals spatio-temporal dynamics of timbre perception in humans. Neuroimage 2006; 32:1510-23. PMID: 16798014. DOI: 10.1016/j.neuroimage.2006.04.193.
Abstract
Timbre is a major attribute of sound perception and a key feature for the identification of sound quality. Here, we present event-related brain potentials (ERPs) obtained from sixteen healthy individuals while they discriminated complex instrumental tones (piano, trumpet, and violin) or simple sine wave tones that lack the principal features of timbre. Data analysis yielded enhanced N1 and P2 responses to instrumental tones relative to sine wave tones. Furthermore, we applied an electrical brain imaging approach using low-resolution electromagnetic tomography (LORETA) to estimate the neural sources of N1/P2 responses. Separate significance tests of instrumental vs. sine wave tones for N1 and P2 revealed distinct regions as principally governing timbre perception. In an initial stage (N1), timbre perception recruits left and right (peri-)auditory fields with an activity maximum over the right posterior Sylvian fissure (SF) and the posterior cingulate (PCC) territory. In the subsequent stage (P2), we uncovered enhanced activity in the vicinity of the entire cingulate gyrus. The involvement of extra-auditory areas in timbre perception may imply the presence of a highly associative processing level which might be generally related to musical sensations and integrates widespread medial areas of the human cortex. In summary, our results demonstrate spatio-temporally distinct stages in timbre perception which not only involve bilateral parts of the peri-auditory cortex but also medially situated regions of the human brain associated with emotional and auditory imagery functions.
Affiliation(s)
- Martin Meyer
- Department of Neuropsychology, University of Zurich, Treichlerstrasse 10, CH-8032 Zurich, Switzerland.
38. Ahveninen J, Jääskeläinen IP, Raij T, Bonmassar G, Devore S, Hämäläinen M, Levänen S, Lin FH, Sams M, Shinn-Cunningham BG, Witzel T, Belliveau JW. Task-modulated "what" and "where" pathways in human auditory cortex. Proc Natl Acad Sci U S A 2006; 103:14608-13. PMID: 16983092. PMCID: PMC1600007. DOI: 10.1073/pnas.0510480103.
Abstract
Human neuroimaging studies suggest that localization and identification of relevant auditory objects are accomplished via parallel parietal-to-lateral-prefrontal "where" and anterior-temporal-to-inferior-frontal "what" pathways, respectively. Using combined hemodynamic (functional MRI) and electromagnetic (magnetoencephalography) measurements, we investigated whether such dual pathways exist already in the human nonprimary auditory cortex, as suggested by animal models, and whether selective attention facilitates sound localization and identification by modulating these pathways in a feature-specific fashion. We found a double dissociation in response adaptation to sound pairs with phonetic vs. spatial sound changes, demonstrating that the human nonprimary auditory cortex indeed processes speech-sound identity and location in parallel anterior "what" (in anterolateral Heschl's gyrus, anterior superior temporal gyrus, and posterior planum polare) and posterior "where" (in planum temporale and posterior superior temporal gyrus) pathways as early as approximately 70-150 ms from stimulus onset. Our data further show that the "where" pathway is activated approximately 30 ms earlier than the "what" pathway, possibly enabling the brain to use top-down spatial information in auditory object perception. Notably, selectively attending to phonetic content modulated response adaptation in the "what" pathway, whereas attending to sound location produced analogous effects in the "where" pathway. This finding suggests that selective-attention effects are feature-specific in the human nonprimary auditory cortex and that they arise from enhanced tuning of receptive fields of task-relevant neuronal populations.
Affiliation(s)
- Jyrki Ahveninen
- Harvard Medical School-Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, CNY 149 13th Street, Charlestown, MA 02129, USA.
39. Escabí MA, Read HL. Neural mechanisms for spectral analysis in the auditory midbrain, thalamus, and cortex. Int Rev Neurobiol 2006; 70:207-52. PMID: 16472636. DOI: 10.1016/s0074-7742(05)70007-6.
Affiliation(s)
- Monty A Escabí
- Department of Electrical Engineering, University of Connecticut, Storrs, Connecticut 06269, USA
40. Engelien A, Tüscher O, Hermans W, Isenberg N, Eidelberg D, Frith C, Stern E, Silbersweig D. Functional neuroanatomy of non-verbal semantic sound processing in humans. J Neural Transm (Vienna) 2005; 113:599-608. PMID: 16075182. DOI: 10.1007/s00702-005-0342-0.
Abstract
Environmental sounds convey specific meanings and the neural circuitry for their recognition may have preceded language. To dissociate semantic mnemonic from sensory perceptual processing of non-verbal sound stimuli we systematically altered the inherent semantic properties of non-verbal sounds from natural and man-made sources while keeping their acoustic characteristics closely matched. We hypothesized that acoustic analysis of complex non-verbal sounds would be right lateralized in auditory cortex regardless of meaning content and that left hemisphere regions would be engaged when meaningful concept could be extracted. Using H(2) (15)O-PET imaging and SPM data analysis, we demonstrated that activation of the left superior temporal and left parahippocampal gyrus along with left inferior frontal regions was specifically associated with listening to meaningful sounds. In contrast, for both types of sounds, acoustic analysis was associated with activation of right auditory cortices. We conclude that left hemisphere brain regions are engaged when sounds are meaningful or intelligible.
Affiliation(s)
- A Engelien
- Functional Neuroimaging Laboratory, Department of Psychiatry, Weill Medical College, Cornell University, New York, NY, USA.
41. Arnott SR, Grady CL, Hevenor SJ, Graham S, Alain C. The functional organization of auditory working memory as revealed by fMRI. J Cogn Neurosci 2005; 17:819-31. PMID: 15904548. DOI: 10.1162/0898929053747612.
Abstract
Spatial and nonspatial auditory tasks preferentially recruit dorsal and ventral brain areas, respectively. However, the extent to which these auditory differences reflect specific aspects of mental processing has not been directly studied. In the present functional magnetic resonance imaging experiment, participants encoded and maintained either the location or the identity of a sound for a delay period of several seconds and then subsequently compared that information with a second sound. Relative to sound localization, sound identification was associated with greater hemodynamic activity in the left rostral superior temporal gyrus. In contrast, localizing sounds recruited greater activity in the parietal cortex, posterior temporal lobe, and superior frontal sulcus. The identification differences were most prominent during the early stage of the trial, whereas the location differences were most evident during the late (i.e., comparison) stage. Accordingly, our results suggest that auditory spatial and identity dissociations as revealed by functional imaging may be dependent to some degree on the type of processing being carried out. In addition, dorsolateral prefrontal and lateral superior parietal areas showed greater activity during the comparison as opposed to the earlier stage of the trial, regardless of the type of auditory task, consistent with results from visual working memory studies.
42. Ducommun CY, Michel CM, Clarke S, Adriani M, Seeck M, Landis T, Blanke O. Cortical motion deafness. Neuron 2004; 43:765-77. PMID: 15363389. DOI: 10.1016/j.neuron.2004.08.020.
Abstract
The extent to which the auditory system, like the visual system, processes spatial stimulus characteristics such as location and motion in separate specialized neuronal modules or in one homogeneously distributed network is unresolved. Here we present a patient with a selective deficit for the perception and discrimination of auditory motion following resection of the right anterior temporal lobe and the right posterior superior temporal gyrus (STG). Analysis of stimulus identity and location within the auditory scene remained intact. In addition, intracranial auditory evoked potentials, recorded preoperatively, revealed motion-specific responses selectively over the resected right posterior STG, and electrical cortical stimulation of this region was experienced by the patient as incoming moving sounds. Collectively, these data present a patient with cortical motion deafness, providing evidence that cortical processing of auditory motion is performed in a specialized module within the posterior STG.
Affiliation(s)
- Christine Y Ducommun
- Functional Brain Mapping Laboratory, University Hospital, 1211 Geneva, Switzerland.