52. Da Costa S, Clarke S, Crottaz-Herbette S. Keeping track of sound objects in space: The contribution of early-stage auditory areas. Hear Res 2018;366:17-31. PMID: 29643021. DOI: 10.1016/j.heares.2018.03.027.
Abstract
The influential dual-stream model of auditory processing stipulates that information pertaining to the meaning and to the position of a given sound object is processed in parallel along two distinct pathways, the ventral and dorsal auditory streams. Functional independence of the two processing pathways is well documented by the conscious experience of patients with focal hemispheric lesions. On the other hand, there is growing evidence that the meaning and the position of a sound are combined early in the processing pathway, possibly already at the level of early-stage auditory areas. Here, we investigated how early auditory areas integrate sound object meaning and space (simulated by interaural time differences) using a repetition suppression fMRI paradigm at 7 T. Subjects listened passively to environmental sounds presented in blocks of repetitions of the same sound object (same category) or of different sound objects (different categories), perceived either in the left or right space (no change within block) or shifted left-to-right or right-to-left halfway through the block (change within block). Environmental sounds activated bilaterally the superior temporal gyrus, middle temporal gyrus, inferior frontal gyrus, and right precentral cortex. Repetition suppression effects were measured within bilateral early-stage auditory areas in the lateral portion of Heschl's gyrus and the posterior superior temporal plane. Left lateral early-stage areas showed significant main effects of position and of change in position, as well as Category x Initial Position and Category x Change in Position interactions, while right lateral areas showed a main effect of category and a Category x Change in Position interaction. The combined evidence from our study and from previous studies speaks in favour of a position-linked representation of sound objects, which is independent of semantic encoding within the ventral stream and of spatial encoding within the dorsal stream. We argue for a third auditory stream, which has its origin in the lateral belt areas and tracks sound objects across space.
Affiliation(s)
- Sandra Da Costa: Centre d'Imagerie BioMédicale (CIBM), EPFL et Universités de Lausanne et de Genève, Bâtiment CH, Station 6, CH-1015 Lausanne, Switzerland
- Stephanie Clarke: Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Université de Lausanne, Avenue Pierre Decker 5, CH-1011 Lausanne, Switzerland
- Sonia Crottaz-Herbette: Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Université de Lausanne, Avenue Pierre Decker 5, CH-1011 Lausanne, Switzerland
53. Rinne T, Muers RS, Salo E, Slater H, Petkov CI. Functional Imaging of Audio-Visual Selective Attention in Monkeys and Humans: How do Lapses in Monkey Performance Affect Cross-Species Correspondences? Cereb Cortex 2018;27:3471-3484. PMID: 28419201. PMCID: PMC5654311. DOI: 10.1093/cercor/bhx092.
Abstract
The cross-species correspondences and differences in how attention modulates brain responses in humans and animal models are poorly understood. We trained 2 monkeys to perform an audio-visual selective attention task during functional magnetic resonance imaging (fMRI), rewarding them for attending to stimuli in one modality while ignoring those in the other. Monkey fMRI identified regions strongly modulated by auditory or visual attention. Surprisingly, auditory attention-related modulations were much more restricted in monkeys than in humans performing the same tasks during fMRI. Further analyses ruled out trivial explanations, suggesting that labile selective-attention performance was associated with inhomogeneous modulations in wide cortical regions in the monkeys. The findings provide initial insights into how audio-visual selective attention modulates the primate brain, identify sources for "lost" attention effects in monkeys, and carry implications for modeling the neurobiology of human cognition with nonhuman animals.
Affiliation(s)
- Teemu Rinne: Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto University School of Science, Espoo, Finland
- Ross S Muers: Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK; Centre for Behaviour and Evolution, Newcastle University, Newcastle upon Tyne, UK
- Emma Salo: Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Heather Slater: Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK; Centre for Behaviour and Evolution, Newcastle University, Newcastle upon Tyne, UK
- Christopher I Petkov: Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK; Centre for Behaviour and Evolution, Newcastle University, Newcastle upon Tyne, UK
54. Yellamsetty A, Bidelman GM. Low- and high-frequency cortical brain oscillations reflect dissociable mechanisms of concurrent speech segregation in noise. Hear Res 2018;361:92-102. PMID: 29398142. DOI: 10.1016/j.heares.2018.01.006.
Abstract
Parsing simultaneous speech requires that listeners use pitch-guided segregation, which can be affected by the signal-to-noise ratio (SNR) in the auditory scene. The interaction of these two cues may occur at multiple levels within the cortex. The aims of the current study were to assess the correspondence between oscillatory brain rhythms and concurrent speech segregation, and to determine how listeners exploit pitch and SNR cues to successfully segregate concurrent speech. We recorded electrical brain activity while participants heard double-vowel stimuli whose fundamental frequencies (F0s) differed by zero or four semitones (STs), presented in either clean or noise-degraded (+5 dB SNR) conditions. We found that behavioral identification was more accurate for vowel mixtures with larger pitch separations, but the F0 benefit interacted with noise. Time-frequency analysis decomposed the EEG into different spectrotemporal frequency bands. Low-frequency (θ, β) responses were elevated when speech did not contain pitch cues (0ST > 4ST) or was noisy, suggesting a correlate of increased listening effort and/or memory demands. In contrast, γ power increments were observed for changes in both pitch (0ST > 4ST) and SNR (clean > noise), suggesting that high-frequency bands carry information related to acoustic features and the quality of speech representations. Brain-behavior associations corroborated these effects; modulations in low-frequency rhythms predicted the speed of listeners' perceptual decisions, whereas higher bands predicted identification accuracy. Results are consistent with the notion that neural oscillations reflect both automatic (pre-perceptual) and controlled (post-perceptual) mechanisms of speech processing that are largely divisible into the high- and low-frequency bands of human brain rhythms.
Affiliation(s)
- Anusha Yellamsetty: School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
- Gavin M Bidelman: School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA
55. Rauschecker JP. Where, When, and How: Are they all sensorimotor? Towards a unified view of the dorsal pathway in vision and audition. Cortex 2018;98:262-268. PMID: 29183630. PMCID: PMC5771843. DOI: 10.1016/j.cortex.2017.10.020.
Abstract
Dual processing streams in sensory systems have been postulated for a long time. Much experimental evidence has accumulated from behavioral, neuropsychological, electrophysiological, neuroanatomical and neuroimaging work supporting the existence of largely segregated cortical pathways in both vision and audition. More recently, debate has returned to the question of overlap between these pathways and whether there are in fact more than two processing streams. The present piece defends the dual-system view. Focusing on the functions of the dorsal stream in the auditory and language system, I try to reconcile the various models of Where, How and When into one coherent concept of sensorimotor integration. This framework incorporates principles of internal models in feedback control systems and is applicable to the visual system as well.
Affiliation(s)
- Josef P Rauschecker: Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA; Institute for Advanced Study, Technische Universität München, Garching bei München, Germany
56. Tissieres I, Fornari E, Clarke S, Crottaz-Herbette S. Supramodal effect of rightward prismatic adaptation on spatial representations within the ventral attentional system. Brain Struct Funct 2017;223:1459-1471. PMID: 29151115. DOI: 10.1007/s00429-017-1572-2.
Abstract
Rightward prismatic adaptation (R-PA) has been shown to alleviate not only visuo-spatial but also auditory symptoms in neglect. The neural mechanisms underlying the effect of R-PA have previously been investigated in visual tasks, demonstrating a shift of hemispheric dominance for visuo-spatial attention from the right to the left hemisphere, both in normal subjects and in patients. We investigated whether the same neural mechanisms underlie the supramodal effect of R-PA on auditory attention. Normal subjects underwent a brief session of R-PA, which was preceded and followed by an fMRI evaluation during which subjects detected targets within the left, central and right space in the auditory or visual modality. R-PA-related changes in activation patterns were found bilaterally in the inferior parietal lobule (IPL). In either modality, the representation of the left, central and right space increased in the left IPL, whereas the representation of the right space decreased in the right IPL. Thus, a brief exposure to R-PA modulated the representation of auditory and visual space within the ventral attentional system. This shift in hemispheric dominance for auditory spatial attention offers a parsimonious explanation for the previously reported effects of R-PA on auditory symptoms in neglect.
Affiliation(s)
- Isabel Tissieres: Neuropsychology and Neurorehabilitation Service, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Av. Pierre-Decker 5, 1011 Lausanne, Switzerland
- Eleonora Fornari: CIBM (Centre d'Imagerie Biomédicale), Department of Radiology, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, 1011 Lausanne, Switzerland
- Stephanie Clarke: Neuropsychology and Neurorehabilitation Service, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Av. Pierre-Decker 5, 1011 Lausanne, Switzerland
- Sonia Crottaz-Herbette: Neuropsychology and Neurorehabilitation Service, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Av. Pierre-Decker 5, 1011 Lausanne, Switzerland
57. Involvement of ordinary what and where auditory cortical areas during illusory perception. Brain Struct Funct 2017;223:965-979. PMID: 29071383. DOI: 10.1007/s00429-017-1538-4.
Abstract
The focus of the present study is on the relationships between illusory and non-illusory auditory perception analyzed at a biological level. To this aim, we investigated the neural mechanisms underlying Deutsch's illusion, a condition in which both sound identity ("what") and origin ("where") are deceptively perceived. We recorded the magnetoencephalogram from healthy subjects in three conditions: (a) listening to the acoustic sequence eliciting the illusion (ILL), (b) listening to a monaural acoustic sequence mimicking the illusory percept (MON), and (c) listening to an acoustic sequence similar to (a) but not eliciting the illusion (NIL). Results show that the areas involved in the illusion were Heschl's gyrus, the insular cortex, the inferior frontal gyrus, and the medial frontal gyrus bilaterally, together with the left inferior parietal lobe. These areas belong to the two main auditory streams known as the what and where pathways. The neural responses observed there indicate that the sound sequence eliciting the illusion is associated with larger activity at early and middle latencies and with a dynamic lateralization pattern whose net balance favors the left hemisphere. The present findings extend the well-known what-where auditory processing mechanism to illusory perception, especially as regards late-latency activity.
58. Kim SG, Knösche TR. On the Perceptual Subprocess of Absolute Pitch. Front Neurosci 2017;11:557. PMID: 29085275. PMCID: PMC5649255. DOI: 10.3389/fnins.2017.00557.
Abstract
Absolute pitch (AP) is the rare ability of musicians to identify the pitch of a tonal sound without external reference. While there have been behavioral and neuroimaging studies on the characteristics of AP, how AP is implemented in the human brain remains largely unknown. AP can be viewed as comprising two subprocesses: a perceptual one (processing auditory input to extract a pitch chroma) and an associative one (linking an auditory representation of pitch chroma with a verbal/non-verbal label). In this review, we focus on the nature of the perceptual subprocess of AP. Two different models of how the perceptual subprocess works have been proposed: it operates either via absolute pitch categorization (APC) or based on absolute pitch memory (APM). A major distinction between the two views is whether AP relies on unique auditory processing (i.e., APC) that exists only in musicians with AP, or is rooted in a common phenomenon (i.e., APM), only with heightened efficiency. We review relevant behavioral and neuroimaging evidence that supports each notion. Lastly, we list open questions and potential ideas to address them.
Affiliation(s)
- Seung-Goo Kim: Research Group for MEG and EEG-Cortical Networks and Cognitive Functions, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Thomas R Knösche: Research Group for MEG and EEG-Cortical Networks and Cognitive Functions, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
59. Brefczynski-Lewis JA, Lewis JW. Auditory object perception: A neurobiological model and prospective review. Neuropsychologia 2017;105:223-242. PMID: 28467888. PMCID: PMC5662485. DOI: 10.1016/j.neuropsychologia.2017.04.034.
Abstract
Interaction with the world is a multisensory experience, but most of what is known about the neural correlates of perception comes from studying vision. Auditory input enters the cortex with its own set of unique qualities and lends itself to use in oral communication, speech, music, and the understanding of the emotional and intentional states of others, all of which are central to the human experience. To better understand how the auditory system develops, recovers after injury, and may have transitioned in its functions over the course of hominin evolution, advances are needed in models of how the human brain is organized to process real-world natural sounds and "auditory objects". This review presents a simple, fundamental neurobiological model of hearing perception at a category level that incorporates principles of bottom-up signal processing together with top-down constraints of grounded cognition theories of knowledge representation. Though mostly derived from the human neuroimaging literature, this theoretical framework highlights rudimentary principles of real-world sound processing that may apply to most if not all mammalian species with hearing and acoustic communication abilities. The model encompasses three basic categories of sound source: (1) action sounds (non-vocalizations) produced by 'living things', with human (conspecific) and non-human animal sources representing two subcategories; (2) action sounds produced by 'non-living things', including environmental sources and human-made machinery; and (3) vocalizations ('living things'), with human versus non-human animals as two subcategories therein. The model is presented in the context of cognitive architectures relating to multisensory, sensory-motor, and spoken language organizations. The model's predictive value is further discussed in the context of anthropological theories of oral communication evolution and the neurodevelopment of spoken language proto-networks in infants/toddlers. These phylogenetic and ontogenetic frameworks both entail cortical network maturations that are proposed to be organized, at least in part, around a number of universal acoustic-semantic signal attributes of natural sounds, which are addressed herein.
Affiliation(s)
- Julie A Brefczynski-Lewis: Blanchette Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA; Department of Physiology, Pharmacology, & Neuroscience, West Virginia University, PO Box 9229, Morgantown, WV 26506, USA
- James W Lewis: Blanchette Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA; Department of Physiology, Pharmacology, & Neuroscience, West Virginia University, PO Box 9229, Morgantown, WV 26506, USA
60. For Better or Worse: The Effect of Prismatic Adaptation on Auditory Neglect. Neural Plast 2017;2017:8721240. PMID: 29138699. PMCID: PMC5613466. DOI: 10.1155/2017/8721240.
Abstract
Patients with auditory neglect attend less to auditory stimuli on their left and/or make systematic directional errors when indicating sound positions. Rightward prismatic adaptation (R-PA) has repeatedly been shown to alleviate symptoms of visuospatial neglect and, in one study, to partially restore spatial bias in dichotic listening. It is currently unknown whether R-PA affects only this ear-related symptom or also other aspects of auditory neglect. We investigated the effect of R-PA on left-ear extinction in dichotic listening, space-related inattention assessed by diotic listening, and directional errors in auditory localization in patients with auditory neglect. The most striking effect of R-PA was the alleviation of left-ear extinction in dichotic listening, which occurred in half of the patients with an initial deficit. In contrast to nonresponders, their lesions spared the right dorsal attentional system and posterior temporal cortex. The beneficial effect of R-PA on ear-related performance contrasted with detrimental effects on diotic listening and auditory localization. The former can be parsimoniously explained by the SHD-VAS model (shift in hemispheric dominance within the ventral attentional system; Clarke and Crottaz-Herbette 2016), which is based on the R-PA-induced shift of the right-dominant ventral attentional system to the left hemisphere. The negative effects in space-related tasks may be due to the complex nature of auditory space encoding at the cortical level.
61. The role of auditory cortex in the spatial ventriloquism aftereffect. Neuroimage 2017;162:257-268. PMID: 28889003. DOI: 10.1016/j.neuroimage.2017.09.002.
Abstract
Cross-modal recalibration allows the brain to maintain coherent sensory representations of the world. Using functional magnetic resonance imaging (fMRI), the present study aimed at identifying the neural mechanisms underlying recalibration in an audiovisual ventriloquism aftereffect paradigm. Participants performed a unimodal sound localization task, before and after they were exposed to adaptation blocks, in which sounds were paired with spatially disparate visual stimuli offset by 14° to the right. Behavioral results showed a significant rightward shift in sound localization following adaptation, indicating a ventriloquism aftereffect. Regarding fMRI results, left and right planum temporale (lPT/rPT) were found to respond more to contralateral sounds than to central sounds at pretest. Contrasting posttest with pretest blocks revealed significantly enhanced fMRI-signals in space-sensitive lPT after adaptation, matching the behavioral rightward shift in sound localization. Moreover, a region-of-interest analysis in lPT/rPT revealed that the lPT activity correlated positively with the localization shift for right-side sounds, whereas rPT activity correlated negatively with the localization shift for left-side and central sounds. Finally, using functional connectivity analysis, we observed enhanced coupling of the lPT with left and right inferior parietal areas as well as left motor regions following adaptation and a decoupling of lPT/rPT with contralateral auditory cortex, which scaled with participants' degree of adaptation. Together, the fMRI results suggest that cross-modal spatial recalibration is accomplished by an adjustment of unisensory representations in low-level auditory cortex. Such persistent adjustments of low-level sensory representations seem to be mediated by the interplay with higher-level spatial representations in parietal cortex.
62. Kopel R, Emmert K, Scharnowski F, Haller S, Van De Ville D. Distributed Patterns of Brain Activity Underlying Real-Time fMRI Neurofeedback Training. IEEE Trans Biomed Eng 2017;64:1228-1237. DOI: 10.1109/tbme.2016.2598818.
63. Sensory neural pathways revisited to unravel the temporal dynamics of the Simon effect: A model-based cognitive neuroscience approach. Neurosci Biobehav Rev 2017;77:48-57. DOI: 10.1016/j.neubiorev.2017.02.023.
64. Neural correlates of distraction and conflict resolution for nonverbal auditory events. Sci Rep 2017;7:1595. PMID: 28487563. PMCID: PMC5431653. DOI: 10.1038/s41598-017-00811-7.
Abstract
In everyday situations, auditory selective attention requires listeners to suppress task-irrelevant stimuli and to resolve conflicting information in order to make appropriate goal-directed decisions. Traditionally, these two processes (i.e., distractor suppression and conflict resolution) have been studied separately. In the present study, we measured neuroelectric activity while participants performed a new paradigm in which both processes are quantified. In separate blocks of trials, participants indicated whether two sequential tones shared the same pitch or location, depending on the block's instruction. For the distraction measure, a positive component peaking at ~250 ms was found, a distraction positivity. Brain electrical source analysis of this component suggests different generators when listeners attended to frequency and to location, with the distraction by location localized more posteriorly than the distraction by frequency, providing support for the dual-pathway theory. For the conflict resolution measure, a negative frontocentral component (270-450 ms) was found, which showed similarities with those of prior studies on auditory and visual conflict resolution tasks. The timing and distribution are consistent with two distinct neural processes, with suppression of task-irrelevant information occurring before conflict resolution. This new paradigm may prove useful in clinical populations to assess impairments in filtering out task-irrelevant information and/or resolving conflicting information.
65. Kim SG, Knösche TR. Resting state functional connectivity of the ventral auditory pathway in musicians with absolute pitch. Hum Brain Mapp 2017;38:3899-3916. PMID: 28481006. DOI: 10.1002/hbm.23637.
Abstract
Absolute pitch (AP) is the ability to recognize the pitch chroma of a tonal sound without external references, providing a unique model of the human auditory system (Zatorre: Nat Neurosci 6 () 692-695). In a previous study (Kim and Knösche: Hum Brain Mapp () 3486-3501), we identified enhanced intracortical myelination in the right planum polare (PP) in musicians with AP, which could be a potential site for perceptual processing of pitch chroma information. We speculated that this area, which initiates the ventral auditory pathway, might be crucially involved in the perceptual stage of the AP process in the context of the "dual pathway hypothesis", which suggests a role of the ventral pathway in processing nonspatial information related to the identity of an auditory object (Rauschecker: Eur J Neurosci 41 () 579-585). To test our conjecture on the ventral pathway, we investigated resting state functional connectivity (RSFC) using functional magnetic resonance imaging (fMRI) in musicians with varying degrees of AP. Should our hypothesis be correct, RSFC via the ventral pathway is expected to be stronger in musicians with AP, whereas no such group effect is predicted in the RSFC via the dorsal pathway. In the current data, we found greater RSFC between the right PP and bilateral anteroventral auditory cortices in musicians with AP. In contrast, we did not find any group difference in the RSFC of the planum temporale (PT) between musicians with and without AP. We believe that these findings support our conjecture on the critical role of the ventral pathway in AP recognition.
Affiliation(s)
- Seung-Goo Kim: Research Group for MEG and EEG - Cortical Networks and Cognitive Functions, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Thomas R Knösche: Research Group for MEG and EEG - Cortical Networks and Cognitive Functions, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
66. Lappe C, Bodeck S, Lappe M, Pantev C. Shared Neural Mechanisms for the Prediction of Own and Partner Musical Sequences after Short-term Piano Duet Training. Front Neurosci 2017;11:165. PMID: 28420951. PMCID: PMC5378800. DOI: 10.3389/fnins.2017.00165.
Abstract
Predictive mechanisms in the human brain can be investigated using markers of prediction violations such as the mismatch negativity (MMN). Short-term piano training increases the MMN for melodic and rhythmic deviations in the training material. This increase occurs only when the material is actually played, not when it is only perceived through listening, suggesting that learned predictions about upcoming musical events are derived from motor involvement. However, music is often performed in concert with others. In this case, predictions about a partner's upcoming actions are a crucial part of the performance. In the present experiment, we used magnetoencephalography (MEG) to measure MMNs to deviations in one's own and a partner's musical material after both partners engaged in musical duet training. Event-related field (ERF) results revealed that the MMN increased significantly for both own and partner material, suggesting a neural representation of the partner's part in a duet situation. Source analysis using beamforming revealed common activations in auditory, inferior frontal, and parietal areas, similar to previous results for single players, but also a pronounced contribution from the cerebellum. In addition, activation of the precuneus and the medial frontal cortex was observed, presumably related to the need to distinguish between own and partner material.
Affiliation(s)
- Claudia Lappe: Department of Medicine, Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
- Sabine Bodeck: Department of Medicine, Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
- Markus Lappe: Department of Psychology, University of Münster, Münster, Germany
- Christo Pantev: Department of Medicine, Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
67. Salo E, Salmela V, Salmi J, Numminen J, Alho K. Brain activity associated with selective attention, divided attention and distraction. Brain Res 2017;1664:25-36. PMID: 28363436. DOI: 10.1016/j.brainres.2017.03.021.
Abstract
Top-down controlled selective or divided attention to sounds and visual objects, as well as bottom-up triggered attention to auditory and visual distractors, has been widely investigated. However, no study has systematically compared brain activations related to all these types of attention. To this end, we used functional magnetic resonance imaging (fMRI) to measure brain activity in participants performing a tone pitch or a foveal grating orientation discrimination task, or both, distracted by novel sounds not sharing frequencies with the tones or by extrafoveal visual textures. To force focusing of attention on the tones or gratings, or both, task difficulty was kept constantly high with an adaptive staircase method. A whole-brain analysis of variance (ANOVA) revealed fronto-parietal attention networks for both selective auditory and visual attention. A subsequent conjunction analysis indicated partial overlaps of these networks. However, consistent with some previous studies, the present results also suggest segregation of the prefrontal areas involved in the control of auditory and visual attention. The ANOVA also suggested, and another conjunction analysis confirmed, an additional activity enhancement in the left middle frontal gyrus related to divided attention, supporting the role of this area in the top-down integration of dual-task performance. Distractors expectedly disrupted task performance. However, contrary to our expectations, activations specifically related to the distractors were found only in the auditory and visual cortices. This suggests gating of the distractors from further processing, perhaps due to strictly focused attention in the current demanding discrimination tasks.
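The abstract only states that an adaptive staircase held difficulty constant; the specific rule is not given. A common choice is the 2-down/1-up rule, which converges near 70.7% correct. A sketch under that assumption (rule, step size and starting level are illustrative):

```python
class TwoDownOneUp:
    """2-down/1-up adaptive staircase (converges near 70.7% correct).

    The cited study only reports using "an adaptive staircase method";
    this particular rule and its parameters are assumptions.
    """
    def __init__(self, level=10.0, step=1.0, min_level=0.0):
        self.level = level          # e.g. pitch difference in Hz
        self.step = step
        self.min_level = min_level
        self._correct_run = 0

    def update(self, correct):
        if correct:
            self._correct_run += 1
            if self._correct_run == 2:      # two in a row -> make it harder
                self.level = max(self.min_level, self.level - self.step)
                self._correct_run = 0
        else:                               # one miss -> make it easier
            self.level += self.step
            self._correct_run = 0
        return self.level

stair = TwoDownOneUp(level=10.0, step=1.0)
stair.update(True); stair.update(True)   # two correct: level drops to 9.0
stair.update(False)                      # one incorrect: level rises to 10.0
```

With such a rule the tracked level hovers around a fixed performance point, which is what keeps difficulty "constantly high" across participants.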
Affiliation(s)
- Emma Salo
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto Neuroimaging, Aalto University School of Science and Technology, Espoo, Finland
- Viljami Salmela
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto Neuroimaging, Aalto University School of Science and Technology, Espoo, Finland
- Juha Salmi
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto Neuroimaging, Aalto University School of Science and Technology, Espoo, Finland; Faculty of Arts, Psychology and Theology, Åbo Akademi University, Turku, Finland
- Jussi Numminen
- Helsinki Medical Imaging Centre, Helsinki University Hospital, Helsinki, Finland
- Kimmo Alho
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto Neuroimaging, Aalto University School of Science and Technology, Espoo, Finland
68
McLachlan NM, Wilson SJ. The Contribution of Brainstem and Cerebellar Pathways to Auditory Recognition. Front Psychol 2017; 8:265. [PMID: 28373850] [PMCID: PMC5357638] [DOI: 10.3389/fpsyg.2017.00265]
Abstract
The cerebellum has been known to play an important role in motor functions for many years. More recently its role has been expanded to include a range of cognitive and sensory-motor processes, and substantial neuroimaging and clinical evidence now points to cerebellar involvement in most auditory processing tasks. In particular, an increase in the size of the cerebellum over recent human evolution has been attributed in part to the development of speech. Despite this, the auditory cognition literature has largely overlooked afferent auditory connections to the cerebellum that have been implicated in acoustically conditioned reflexes in animals, and could subserve speech and other auditory processing in humans. This review expands our understanding of auditory processing by incorporating cerebellar pathways into the anatomy and functions of the human auditory system. We reason that plasticity in the cerebellar pathways underpins implicit learning of the spectrotemporal information necessary for sound and speech recognition. Once learnt, this information supports automatic recognition of incoming auditory signals and prediction of likely subsequent input based on previous experience. Since sound recognition processes involving the brainstem and cerebellum initiate early in auditory processing, learnt information stored in cerebellar memory templates could then support a range of auditory processing functions such as streaming, habituation, the integration of auditory feature information such as pitch, and the recognition of vocal communications.
Affiliation(s)
- Neil M. McLachlan
- Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, VIC, Australia
69
Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation. eNeuro 2017; 4:ENEURO.0007-17.2017. [PMID: 28451630] [PMCID: PMC5394928] [DOI: 10.1523/eneuro.0007-17.2017]
Abstract
Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals.
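The stimulus construction described above, a source sound convolved with a monaural room impulse response (RIR), is an ordinary linear convolution. A self-contained sketch with a toy RIR (the direct-path-plus-decaying-tail shape is an assumption, not the RIRs used in the study):

```python
import numpy as np

def reverberate(dry, rir):
    """Simulate a reverberant space by convolving a dry source with a
    monaural room impulse response (full linear convolution)."""
    return np.convolve(dry, rir)

fs = 16000
t = np.arange(int(0.05 * fs)) / fs
dry = np.sin(2 * np.pi * 440 * t)          # 50 ms tone burst as the "source"
# Toy RIR: unit direct path plus an exponentially decaying noise tail.
rng = np.random.default_rng(0)
rir = np.zeros(int(0.2 * fs))
rir[0] = 1.0
rir[1:] = (0.3 * rng.standard_normal(rir.size - 1)
           * np.exp(-np.arange(1, rir.size) / (0.05 * fs)))
wet = reverberate(dry, rir)
# The reverberant signal outlasts the source by the RIR tail:
# len(wet) == len(dry) + len(rir) - 1.
```

The same source convolved with different RIRs yields the source-invariant "space" variation the MEG decoding exploits.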
70
Dimitrijevic A, Smith ML, Kadis DS, Moore DR. Cortical Alpha Oscillations Predict Speech Intelligibility. Front Hum Neurosci 2017; 11:88. [PMID: 28286478] [PMCID: PMC5323373] [DOI: 10.3389/fnhum.2017.00088]
Abstract
Understanding speech in noise (SiN) is a complex task involving sensory encoding and cognitive resources including working memory and attention. Previous work has shown that brain oscillations, particularly alpha rhythms (8–12 Hz), play important roles in sensory processes involving working memory and attention. However, no previous study has examined brain oscillations during performance of a continuous speech perception test. The aim of this study was to measure cortical alpha during attentive listening in a commonly used SiN task (digits-in-noise, DiN) to better understand the neural processes associated with “top-down” cognitive processing in adverse listening environments. We recruited 14 normal-hearing (NH) young adults. DiN speech reception threshold (SRT) was measured in an initial behavioral experiment. EEG activity was then collected: (i) while performing the DiN near SRT; and (ii) while attending to a silent, close-captioned video during presentation of identical digit stimuli that the participant was instructed to ignore. Three main results were obtained: (1) during attentive (“active”) listening to the DiN, a number of distinct neural oscillations were observed (mainly alpha, with some beta; 15–30 Hz). No oscillations were observed during attention to the video (“passive” listening); (2) overall, alpha event-related synchronization (ERS) of central/parietal sources was observed during active listening when data were grand-averaged across all participants. In some participants, a smaller-magnitude alpha event-related desynchronization (ERD), originating in temporal regions, was observed; and (3) when individual EEG trials were sorted according to correct and incorrect digit identification, the temporal alpha ERD was consistently greater on correctly identified trials. No such consistency was observed with the central/parietal alpha ERS. These data demonstrate that changes in alpha activity are specific to listening conditions. To our knowledge, this is the first report of almost no brain oscillatory changes during a passive task compared to an active task in any sensory modality. Temporal alpha ERD was related to correct digit identification.
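The ERD/ERS measures above are conventionally expressed as percent band-power change relative to a reference interval (Pfurtscheller's formulation): negative values are ERD, positive values ERS. A minimal sketch with synthetic 10-Hz signals (signal shapes and sampling rate are illustrative assumptions):

```python
import numpy as np

def band_power(x, fs, band=(8.0, 12.0)):
    """Mean spectral power of x within a frequency band (via rFFT)."""
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2 / x.size
    sel = (freqs >= band[0]) & (freqs <= band[1])
    return power[sel].mean()

def erd_ers(active, reference, fs, band=(8.0, 12.0)):
    """Percent alpha-power change, (A - R) / R * 100:
    negative = event-related desynchronization (ERD),
    positive = event-related synchronization (ERS)."""
    a = band_power(active, fs, band)
    r = band_power(reference, fs, band)
    return (a - r) / r * 100.0

fs = 250
t = np.arange(fs) / fs                     # 1-s epochs
reference = np.sin(2 * np.pi * 10 * t)     # strong 10-Hz alpha at rest
active = 0.5 * np.sin(2 * np.pi * 10 * t)  # attenuated alpha during the task
print(erd_ers(active, reference, fs))      # approximately -75.0: a clear ERD
```

Halving the alpha amplitude quarters its power, hence the -75% ERD in this toy case.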
Affiliation(s)
- Andrew Dimitrijevic
- Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, ON, Canada; Hurvitz Brain Sciences, Evaluative Clinical Sciences, Sunnybrook Research Institute, Toronto, ON, Canada; Faculty of Medicine, Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, ON, Canada
- Michael L Smith
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- Darren S Kadis
- Pediatric Neuroimaging Research Consortium, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati, College of Medicine, Cincinnati, OH, USA
- David R Moore
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Otolaryngology, University of Cincinnati, Cincinnati, OH, USA
71
Bednar A, Boland FM, Lalor EC. Different spatio-temporal electroencephalography features drive the successful decoding of binaural and monaural cues for sound localization. Eur J Neurosci 2017; 45:679-689. [DOI: 10.1111/ejn.13524]
Affiliation(s)
- Adam Bednar
- Trinity Centre for Bioengineering and Trinity College Institute of Neuroscience, School of Engineering, Trinity College Dublin, University of Dublin, Dublin, Ireland; Department of Biomedical Engineering and Department of Neuroscience, University of Rochester, 500 Joseph C. Wilson Blvd., Box 270168, Rochester, NY 14611, USA
- Francis M. Boland
- Electronic & Electrical Engineering, School of Engineering, Trinity College Dublin, Dublin, Ireland
- Edmund C. Lalor
- Trinity Centre for Bioengineering and Trinity College Institute of Neuroscience, School of Engineering, Trinity College Dublin, University of Dublin, Dublin, Ireland; Department of Biomedical Engineering and Department of Neuroscience, University of Rochester, 500 Joseph C. Wilson Blvd., Box 270168, Rochester, NY 14611, USA
72
Araneda R, Renier L, Ebner-Karestinos D, Dricot L, De Volder AG. Hearing, feeling or seeing a beat recruits a supramodal network in the auditory dorsal stream. Eur J Neurosci 2016; 45:1439-1450. [PMID: 27471102] [DOI: 10.1111/ejn.13349]
Abstract
Hearing a beat recruits a wide neural network that involves the auditory cortex and motor planning regions. Perceiving a beat can potentially be achieved via vision or even touch, but it is currently not clear whether a common neural network underlies beat processing. Here, we used functional magnetic resonance imaging (fMRI) to test to what extent the neural network involved in beat processing is supramodal, that is, is the same in the different sensory modalities. Brain activity changes in 27 healthy volunteers were monitored while they were attending to the same rhythmic sequences (with and without a beat) in audition, vision and the vibrotactile modality. We found a common neural network for beat detection in the three modalities that involved parts of the auditory dorsal pathway. Within this network, only the putamen and the supplementary motor area (SMA) showed specificity to the beat, while brain activity in the putamen covaried with beat detection speed. These results highlighted the implication of the auditory dorsal stream in beat detection, confirmed the important role played by the putamen in beat detection and indicated that the neural network for beat detection is mostly supramodal. This constitutes a new example of convergence of the same functional attributes into one centralized representation in the brain.
Affiliation(s)
- Rodrigo Araneda
- Université catholique de Louvain, 54 Avenue Hippocrate UCL B1.54.09, 1200 Brussels, Belgium
- Laurent Renier
- Université catholique de Louvain, 54 Avenue Hippocrate UCL B1.54.09, 1200 Brussels, Belgium
- Laurence Dricot
- Université catholique de Louvain, 54 Avenue Hippocrate UCL B1.54.09, 1200 Brussels, Belgium
- Anne G De Volder
- Université catholique de Louvain, 54 Avenue Hippocrate UCL B1.54.09, 1200 Brussels, Belgium
73
Brosowsky NP, Mondor TA. Multistable perception of ambiguous melodies and the role of musical expertise. J Acoust Soc Am 2016; 140:866. [PMID: 27586718] [DOI: 10.1121/1.4960450]
Abstract
Whereas visual demonstrations of multistability are ubiquitous, there are few auditory examples. The purpose of the current study was to determine whether simultaneously presented melodies, such as underlie the scale illusion [Deutsch (1975). J. Acoust. Soc. Am. 57(5), 1156-1160], can elicit multiple mutually exclusive percepts, and whether reported perceptions are mediated by musical expertise. Participants listened to target melodies and reported whether the target was embedded in subsequent test melodies. Target sequences were created such that they would only be heard if the listener interpreted the test melody according to various perceptual cues. Critically, and in contrast with previous examinations of the scale illusion, an objective measure of target detection was obtained by including target-absent test melodies. As a result, listeners could reliably identify target sequences from different perceptual organizations when presented with the same test melody on different trials. This result demonstrates an ability to alternate between mutually exclusive percepts of an unchanged stimulus. However, only perceptual organizations consistent with frequency and spatial cues were available and musical expertise did mediate target detection, limiting the organizations available to non-musicians. The current study provides the first known demonstration of auditory multistability using simultaneously presented melodies and provides a unique experimental method for measuring auditory perceptual competition.
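Including target-absent test melodies, as described above, makes a bias-free sensitivity measure possible; the standard signal-detection statistic for such hit/false-alarm data is d' = z(hit rate) - z(false-alarm rate). A stdlib-only sketch (the log-linear correction and the trial counts are assumed conventions, not the paper's reported analysis):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d' = z(H) - z(FA).

    Uses the log-linear correction (add 0.5 to each cell) so that
    perfect or zero rates stay finite; a common convention, assumed
    here rather than taken from the cited study.
    """
    h = (hits + 0.5) / (hits + misses + 1.0)
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(h) - z(fa)

# Hypothetical counts: 18/20 hits on target-present melodies,
# 3/20 false alarms on target-absent melodies.
print(round(d_prime(18, 2, 3, 17), 2))
```

Comparing d' across perceptual-organization conditions (or between musicians and non-musicians) quantifies detection independently of response bias.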
Affiliation(s)
- Nicholaus P Brosowsky
- Department of Psychology, The Graduate Center of the City University of New York, 365 5th Avenue, New York, New York 10016, USA
- Todd A Mondor
- University of Manitoba, Winnipeg, Manitoba, R3T 2N2, Canada
74
Sood MR, Sereno MI. Areas activated during naturalistic reading comprehension overlap topological visual, auditory, and somatomotor maps. Hum Brain Mapp 2016; 37:2784-810. [PMID: 27061771] [PMCID: PMC4949687] [DOI: 10.1002/hbm.23208]
Abstract
Cortical mapping techniques using fMRI have been instrumental in identifying the boundaries of topological (neighbor-preserving) maps in early sensory areas. The presence of topological maps beyond early sensory areas raises the possibility that they might play a significant role in other cognitive systems, and that topological mapping might help to delineate areas involved in higher cognitive processes. In this study, we combine surface-based visual, auditory, and somatomotor mapping methods with a naturalistic reading comprehension task in the same group of subjects to provide a qualitative and quantitative assessment of the cortical overlap between sensory-motor maps in all major sensory modalities, and reading processing regions. Our results suggest that cortical activation during naturalistic reading comprehension overlaps more extensively with topological sensory-motor maps than has been heretofore appreciated. Reading activation in regions adjacent to occipital lobe and inferior parietal lobe almost completely overlaps visual maps, whereas a significant portion of frontal activation for reading in dorsolateral and ventral prefrontal cortex overlaps both visual and auditory maps. Even classical language regions in superior temporal cortex are partially overlapped by topological visual and auditory maps. By contrast, the main overlap with somatomotor maps is restricted to a small region on the anterior bank of the central sulcus near the border between the face and hand representations of M-I.
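One simple way to put a number on this kind of cortical overlap is the Dice coefficient between suprathreshold voxel (or vertex) masks; an illustrative metric choice, not necessarily the quantification used in the paper:

```python
import numpy as np

def dice_overlap(mask_a, mask_b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two boolean
    masks (1.0 = identical, 0.0 = disjoint)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 0.0
    return 2.0 * np.logical_and(a, b).sum() / denom

# Hypothetical flattened "maps": reading activation vs. a visual field map.
reading = np.array([1, 1, 1, 1, 0, 0], dtype=bool)
visual = np.array([0, 1, 1, 1, 1, 0], dtype=bool)
print(dice_overlap(reading, visual))  # 0.75
```

Computed per mapped area, such a score would make statements like "almost completely overlaps visual maps" directly comparable across modalities.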
Affiliation(s)
- Mariam R. Sood
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, United Kingdom
- Martin I. Sereno
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, United Kingdom; Experimental Psychology, Division of Psychology and Language Sciences, 26 Bedford Way, London WC1H 0AP, United Kingdom
75
Sathian K. Analysis of haptic information in the cerebral cortex. J Neurophysiol 2016; 116:1795-1806. [PMID: 27440247] [DOI: 10.1152/jn.00546.2015]
Abstract
Haptic sensing of objects acquires information about a number of properties. This review summarizes current understanding about how these properties are processed in the cerebral cortex of macaques and humans. Nonnoxious somatosensory inputs, after initial processing in primary somatosensory cortex, are partially segregated into different pathways. A ventrally directed pathway carries information about surface texture into parietal opercular cortex and thence to medial occipital cortex. A dorsally directed pathway transmits information regarding the location of features on objects to the intraparietal sulcus and frontal eye fields. Shape processing occurs mainly in the intraparietal sulcus and lateral occipital complex, while orientation processing is distributed across primary somatosensory cortex, the parietal operculum, the anterior intraparietal sulcus, and a parieto-occipital region. For each of these properties, the respective areas outside primary somatosensory cortex also process corresponding visual information and are thus multisensory. Consistent with the distributed neural processing of haptic object properties, tactile spatial acuity depends on interaction between bottom-up tactile inputs and top-down attentional signals in a distributed neural network. Future work should clarify the roles of the various brain regions and how they interact at the network level.
Affiliation(s)
- K Sathian
- Departments of Neurology, Rehabilitation Medicine and Psychology, Emory University, Atlanta, Georgia; Center for Visual and Neurocognitive Rehabilitation, Atlanta Department of Veterans Affairs Medical Center, Decatur, Georgia
76
Zimmer U, Höfler M, Koschutnig K, Ischebeck A. Neuronal interactions in areas of spatial attention reflect avoidance of disgust, but orienting to danger. Neuroimage 2016; 134:94-104. [PMID: 27039145] [DOI: 10.1016/j.neuroimage.2016.03.050]
Abstract
For survival, it is necessary to attend quickly towards dangerous objects, but to turn away from something that is disgusting. We tested whether fear and disgust sounds direct spatial attention differently. Using fMRI, a sound cue (disgust, fear or neutral) was presented to the left or right ear. The cue was followed by a visual target (a small arrow) which was located on the same (valid) or opposite (invalid) side as the cue. Participants were required to decide whether the arrow pointed up- or downwards while ignoring the sound cue. Behaviorally, responses were faster for invalid compared to valid targets when cued by disgust, whereas the opposite pattern was observed for targets after fearful and neutral sound cues. During target presentation, activity in the visual cortex and IPL increased for targets invalidly cued with disgust, but for targets validly cued with fear, indicating a general modulation of activation due to attention. For the TPJ, an interaction in the opposite direction was observed, consistent with its role in detecting targets at unattended positions and in relocating attention. As a whole, our results indicate that a disgusting sound directs spatial attention away from its location, in contrast to fearful and neutral sounds.
Affiliation(s)
- Ulrike Zimmer
- Department of Psychology, University of Graz, Austria; Biotechmed Graz, Austria
- Margit Höfler
- Department of Psychology, University of Graz, Austria
- Karl Koschutnig
- Department of Psychology, University of Graz, Austria; Biotechmed Graz, Austria
- Anja Ischebeck
- Department of Psychology, University of Graz, Austria; Biotechmed Graz, Austria
77
Lewald J, Hanenberg C, Getzmann S. Brain correlates of the orientation of auditory spatial attention onto speaker location in a “cocktail-party” situation. Psychophysiology 2016; 53:1484-95. [DOI: 10.1111/psyp.12692]
Affiliation(s)
- Jörg Lewald
- Department of Cognitive Psychology, Faculty of Psychology, Ruhr University Bochum, Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Christina Hanenberg
- Department of Cognitive Psychology, Faculty of Psychology, Ruhr University Bochum, Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
78
Kim SG, Knösche TR. Intracortical myelination in musicians with absolute pitch: Quantitative morphometry using 7-T MRI. Hum Brain Mapp 2016; 37:3486-501. [PMID: 27160707] [PMCID: PMC5084814] [DOI: 10.1002/hbm.23254]
Abstract
Absolute pitch (AP) is known as the ability to recognize and label the pitch chroma of a given tone without external reference. Known brain structures and functions related to AP are mainly of macroscopic aspects. To shed light on the underlying neural mechanism of AP, we investigated the intracortical myeloarchitecture in musicians with and without AP using the quantitative mapping of the longitudinal relaxation rates with ultra-high-field magnetic resonance imaging at 7 T. We found greater intracortical myelination for AP musicians in the anterior region of the supratemporal plane, particularly the medial region of the right planum polare (PP). In the same region of the right PP, we also found a positive correlation with a behavioral index of AP performance. In addition, we found a positive correlation with a frequency discrimination threshold in the anterolateral Heschl's gyrus in the right hemisphere, demonstrating distinctive neural processes of absolute recognition and relative discrimination of pitch. Regarding possible effects of local myelination in the cortex and the known importance of the anterior superior temporal gyrus/sulcus for the identification of auditory objects, we argue that pitch chroma may be processed as an identifiable object property in AP musicians.
Affiliation(s)
- Seung-Goo Kim
- Research Group for MEG and EEG-Cortical Networks and Cognitive Functions, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Thomas R Knösche
- Research Group for MEG and EEG-Cortical Networks and Cognitive Functions, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
79
Eliminating dual-task costs by minimizing crosstalk between tasks: The role of modality and feature pairings. Cognition 2016; 150:92-108. [DOI: 10.1016/j.cognition.2016.02.003]
80
Lewald J. Modulation of human auditory spatial scene analysis by transcranial direct current stimulation. Neuropsychologia 2016; 84:282-93. [PMID: 26825012] [DOI: 10.1016/j.neuropsychologia.2016.01.030]
Abstract
Localizing and selectively attending to the source of a sound of interest in a complex auditory environment is an important capacity of the human auditory system. The underlying neural mechanisms have, however, still not been clarified in detail. This issue was addressed by using bilateral bipolar-balanced transcranial direct current stimulation (tDCS) in combination with a task demanding free-field sound localization in the presence of multiple sound sources, thus providing a realistic simulation of the so-called "cocktail-party" situation. With left-anode/right-cathode, but not with right-anode/left-cathode, montage of bilateral electrodes, tDCS over superior temporal gyrus, including planum temporale and auditory cortices, was found to improve the accuracy of target localization in left hemispace. No effects were found for tDCS over inferior parietal lobule or with off-target active stimulation over somatosensory-motor cortex that was used to control for non-specific effects. Also, the absolute error in localization remained unaffected by tDCS, thus suggesting that general response precision was not modulated by brain polarization. This finding can be explained in the framework of a model assuming that brain polarization modulated the suppression of irrelevant sound sources, thus resulting in more effective spatial separation of the target from the interfering sound in the complex auditory scene.
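The dissociation drawn above between improved localization accuracy and an unchanged absolute error corresponds to two different summary statistics over the same responses: the mean signed error (directional bias) and the mean absolute error (overall deviation). A sketch with hypothetical azimuth data:

```python
import numpy as np

def localization_errors(responses, targets):
    """Return (mean signed error, mean absolute error) in degrees.

    Signed error captures directional bias; absolute error captures
    overall deviation regardless of direction.
    """
    err = np.asarray(responses, dtype=float) - np.asarray(targets, dtype=float)
    return err.mean(), np.abs(err).mean()

# Hypothetical azimuth judgments (degrees; negative = left hemispace).
targets = [-60, -30, 0, 30, 60]
responses = [-52, -26, 2, 24, 66]
signed, absolute = localization_errors(responses, targets)
# A bias (nonzero signed error) can shrink after stimulation while the
# mean absolute error stays essentially the same size.
```

With these toy numbers the signed error (2.8°) and absolute error (5.2°) clearly dissociate, mirroring how tDCS could improve accuracy without changing response precision.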
Affiliation(s)
- Jörg Lewald
- Auditory Cognitive Neuroscience Laboratory, Department of Cognitive Psychology, Ruhr University Bochum, D-44780 Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Ardeystraße 67, D-44139 Dortmund, Germany
81
Lappe C, Lappe M, Pantev C. Differential processing of melodic, rhythmic and simple tone deviations in musicians - an MEG study. Neuroimage 2016; 124:898-905. [DOI: 10.1016/j.neuroimage.2015.09.059]
82
Functional neuroanatomy of spatial sound processing in Alzheimer's disease. Neurobiol Aging 2015; 39:154-64. [PMID: 26923412] [PMCID: PMC4782736] [DOI: 10.1016/j.neurobiolaging.2015.12.006]
Abstract
Deficits of auditory scene analysis accompany Alzheimer's disease (AD). However, the functional neuroanatomy of spatial sound processing has not been defined in AD. We addressed this using a “sparse” fMRI virtual auditory spatial paradigm in 14 patients with typical AD in relation to 16 healthy age-matched individuals. Sound stimulus sequences discretely varied perceived spatial location and pitch of the sound source in a factorial design. AD was associated with loss of differentiated cortical profiles of auditory location and pitch processing at the prescribed threshold, and significant group differences were identified for processing auditory spatial variation in posterior cingulate cortex (controls > AD) and the interaction of pitch and spatial variation in posterior insula (AD > controls). These findings build on emerging evidence for altered brain mechanisms of auditory scene analysis and suggest complex dysfunction of network hubs governing the interface of internal milieu and external environment in AD. Auditory spatial processing may be a sensitive probe of this interface and contribute to characterization of brain network failure in AD and other neurodegenerative syndromes.
83
Peters B, Bledowski C, Rieder M, Kaiser J. Recurrence of task set-related MEG signal patterns during auditory working memory. Brain Res 2015; 1640:232-42. [PMID: 26683086] [DOI: 10.1016/j.brainres.2015.12.006]
Abstract
Processing of auditory spatial and non-spatial information in working memory has been shown to rely on separate cortical systems. While previous studies have demonstrated differences in spatial versus non-spatial processing from the encoding of to-be-remembered stimuli onwards, here we investigated whether such differences would be detectable already prior to presentation of the sample stimulus. We analyzed broad-band magnetoencephalography data from 15 healthy adults during an auditory working memory paradigm starting with a visual cue indicating the task-relevant stimulus feature for a given trial (lateralization or pitch) and a subsequent 1.5-s pre-encoding phase. This was followed by a sample sound (0.2 s), the delay phase (0.8 s) and a test stimulus (0.2 s), after which participants made a match/non-match decision. Linear discriminant functions were trained to decode task-specific signal patterns throughout the task, and temporal generalization was used to assess whether the neural codes discriminating between the tasks during the pre-encoding phase would recur during later task periods. The spatial versus non-spatial tasks could indeed be discriminated from the onset of the cue onwards, and decoders trained during the pre-encoding phase successfully discriminated the tasks both during sample stimulus encoding and during the delay phase. This demonstrates that task-specific neural codes are established already before the memorandum is presented and that the same patterns are reestablished during stimulus encoding and maintenance. This article is part of a Special Issue entitled SI: Auditory working memory.
Affiliation(s)
- Benjamin Peters
- Institute of Medical Psychology, Goethe University, Heinrich-Hoffmann-Str. 10, 60528 Frankfurt am Main, Germany
- Christoph Bledowski
- Institute of Medical Psychology, Goethe University, Heinrich-Hoffmann-Str. 10, 60528 Frankfurt am Main, Germany
- Maria Rieder
- Institute of Medical Psychology, Goethe University, Heinrich-Hoffmann-Str. 10, 60528 Frankfurt am Main, Germany
- Jochen Kaiser
- Institute of Medical Psychology, Goethe University, Heinrich-Hoffmann-Str. 10, 60528 Frankfurt am Main, Germany
|
84
|
Zimmermann JF, Moscovitch M, Alain C. Attending to auditory memory. Brain Res 2015; 1640:208-21. [PMID: 26638836 DOI: 10.1016/j.brainres.2015.11.032]
Abstract
Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli, with only a few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison of incoming and stored information. Objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention-to-memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory.
Affiliation(s)
- Jacqueline F Zimmermann
- University of Toronto, Department of Psychology, Sidney Smith Hall, 100 St. George Street, Toronto, Ontario, Canada M5S 3G3; Rotman Research Institute, Baycrest Hospital, 3560 Bathurst Street, Toronto, Ontario, Canada M6A 2E1
- Morris Moscovitch
- University of Toronto, Department of Psychology, Sidney Smith Hall, 100 St. George Street, Toronto, Ontario, Canada M5S 3G3; Rotman Research Institute, Baycrest Hospital, 3560 Bathurst Street, Toronto, Ontario, Canada M6A 2E1
- Claude Alain
- University of Toronto, Department of Psychology, Sidney Smith Hall, 100 St. George Street, Toronto, Ontario, Canada M5S 3G3; Rotman Research Institute, Baycrest Hospital, 3560 Bathurst Street, Toronto, Ontario, Canada M6A 2E1; Institute of Medical Sciences, University of Toronto, Toronto, Ontario, Canada
|
85
|
Stewart HJ, Amitay S. Modality-specificity of Selective Attention Networks. Front Psychol 2015; 6:1826. [PMID: 26635709 PMCID: PMC4658445 DOI: 10.3389/fpsyg.2015.01826]
Abstract
Objective: To establish the modality specificity and generality of selective attention networks. Method: Forty-eight young adults completed a battery of four auditory and visual selective attention tests based upon the Attention Network framework: the visual and auditory Attention Network Tests (vANT, aANT), the Test of Everyday Attention (TEA), and the Test of Attention in Listening (TAiL). These provided independent measures for auditory and visual alerting, orienting, and conflict resolution networks. The measures were subjected to an exploratory factor analysis to assess underlying attention constructs. Results: The analysis yielded a four-component solution. The first component comprised a range of measures from the TEA and was labeled “general attention.” The third component was labeled “auditory attention,” as it contained only measures from the TAiL using pitch as the attended stimulus feature. The second and fourth components were labeled “spatial orienting” and “spatial conflict,” respectively; they comprised orienting and conflict resolution measures from the vANT, aANT, and TAiL attend-location task, all of which are based upon spatial judgments (e.g., the direction of a target arrow or sound location). Conclusions: These results do not support our a priori hypothesis that attention networks are either modality specific or supramodal. Auditory attention separated into selective attention to spatial and non-spatial features, with auditory spatial attention loading onto the same factor as visual spatial attention, suggesting that spatial attention is supramodal. However, since our study did not include a non-spatial measure of visual attention, further research will be required to ascertain whether non-spatial attention is modality-specific.
Affiliation(s)
- Hannah J Stewart
- Medical Research Council Institute of Hearing Research, Nottingham, UK
- Sygal Amitay
- Medical Research Council Institute of Hearing Research, Nottingham, UK
|
86
|
Häkkinen S, Ovaska N, Rinne T. Processing of pitch and location in human auditory cortex during visual and auditory tasks. Front Psychol 2015; 6:1678. [PMID: 26594185 PMCID: PMC4635202 DOI: 10.3389/fpsyg.2015.01678]
Abstract
The relationship between stimulus-dependent and task-dependent activations in human auditory cortex (AC) during pitch and location processing is not well understood. In the present functional magnetic resonance imaging study, we investigated the processing of task-irrelevant and task-relevant pitch and location during discrimination, n-back, and visual tasks. We tested three hypotheses: (1) According to prevailing auditory models, stimulus-dependent processing of pitch and location should be associated with enhanced activations in distinct areas of the anterior and posterior superior temporal gyrus (STG), respectively. (2) Based on our previous studies, task-dependent activation patterns during discrimination and n-back tasks should be similar when these tasks are performed on sounds varying in pitch or location. (3) Previous studies in humans and animals suggest that pitch and location tasks should enhance activations especially in those areas that also show activation enhancements associated with stimulus-dependent pitch and location processing, respectively. Consistent with our hypotheses, we found stimulus-dependent sensitivity to pitch and location in anterolateral STG and anterior planum temporale (PT), respectively, in line with the view that these features are processed in separate parallel pathways. Further, task-dependent activations during discrimination and n-back tasks were associated with enhanced activations in anterior/posterior STG and posterior STG/inferior parietal lobule (IPL) irrespective of stimulus features. However, direct comparisons between pitch and location tasks performed on identical sounds revealed no significant activation differences. These results suggest that activations during pitch and location tasks are not strongly affected by enhanced stimulus-dependent activations to pitch or location. We also found that activations in PT were strongly modulated by task requirements and that areas in the IPL showed task-dependent activation modulations, but no systematic activations to pitch or location. Based on these results, we argue that activations during pitch and location tasks cannot be explained by enhanced stimulus-specific processing alone, but rather that activations in human AC depend in a complex manner on the requirements of the task at hand.
Affiliation(s)
- Suvi Häkkinen
- Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Noora Ovaska
- Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Teemu Rinne
- Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto University School of Science, Espoo, Finland
|
87
|
Fiehler K, Schütz I, Meller T, Thaler L. Neural Correlates of Human Echolocation of Path Direction During Walking. Multisens Res 2015; 28:195-226. [PMID: 26152058 DOI: 10.1163/22134808-00002491]
Abstract
Echolocation can be used by blind and sighted humans to navigate their environment. The current study investigated the neural activity underlying processing of path direction during walking. Brain activity was measured with fMRI in three blind echolocation experts, and three blind and three sighted novices. During scanning, participants listened to binaural recordings that had been made prior to scanning while echolocation experts had echolocated during walking along a corridor which could continue to the left, right, or straight ahead. Participants also listened to control sounds that contained ambient sounds and clicks, but no echoes. The task was to decide if the corridor in the recording continued to the left, right, or straight ahead, or if they were listening to a control sound. All participants successfully dissociated echo from no-echo sounds; however, echolocation experts were superior at direction detection. We found brain activations associated with processing of path direction (contrast: echo vs. no echo) in the superior parietal lobule (SPL) and inferior frontal cortex (IFC) in each group. In sighted novices, additional activation occurred in the inferior parietal lobule (IPL) and middle and superior frontal areas. Within the framework of the dorso-dorsal and ventro-dorsal pathways proposed by Rizzolatti and Matelli (2003), our results suggest that blind participants may automatically assign directional meaning to the echoes, while sighted participants may apply more conscious, high-level spatial processes. The high similarity of SPL and IFC activations across all three groups, in combination with previous research, also suggests that all participants recruited a multimodal spatial processing system for action (here: locomotion).
|
88
|
Zündorf IC, Lewald J, Karnath HO. Testing the dual-pathway model for auditory processing in human cortex. Neuroimage 2015; 124:672-681. [PMID: 26388552 DOI: 10.1016/j.neuroimage.2015.09.026]
Abstract
Analogous to the visual system, auditory information has been proposed to be processed in two largely segregated streams: an anteroventral ("what") pathway mainly subserving sound identification and a posterodorsal ("where") stream mainly subserving sound localization. Despite the popularity of this assumption, the degree of separation of spatial and non-spatial auditory information processing in cortex is still under discussion. In the present study, a statistical approach was implemented to investigate potential behavioral dissociations for spatial and non-spatial auditory processing in stroke patients, and voxel-wise lesion analyses were used to uncover their neural correlates. The results generally provided support for anatomically and functionally segregated auditory networks. However, some degree of anatomo-functional overlap between "what" and "where" aspects of processing was found in the superior pars opercularis of right inferior frontal gyrus (Brodmann area 44), suggesting the potential existence of a shared target area of both auditory streams in this region. Moreover, beyond the typically defined posterodorsal stream (i.e., posterior superior temporal gyrus, inferior parietal lobule, and superior frontal sulcus), occipital lesions were found to be associated with sound localization deficits. These results, indicating anatomically and functionally complex cortical networks for spatial and non-spatial auditory processing, are roughly consistent with the dual-pathway model of auditory processing in its original form, but argue for the need to refine and extend this widely accepted hypothesis.
Affiliation(s)
- Ida C Zündorf
- Center of Neurology, Division of Neuropsychology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Jörg Lewald
- Department of Cognitive Psychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr University Bochum, Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Hans-Otto Karnath
- Center of Neurology, Division of Neuropsychology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany; Department of Psychology, University of South Carolina, Columbia, SC 29208, USA
|
89
|
From bird to sparrow: Learning-induced modulations in fine-grained semantic discrimination. Neuroimage 2015; 118:163-73. [DOI: 10.1016/j.neuroimage.2015.05.091]
|
90
|
Roaring lions and chirruping lemurs: How the brain encodes sound objects in space. Neuropsychologia 2015; 75:304-13. [DOI: 10.1016/j.neuropsychologia.2015.06.012]
|
91
|
Lewald J, Getzmann S. Electrophysiological correlates of cocktail-party listening. Behav Brain Res 2015; 292:157-66. [PMID: 26092714 DOI: 10.1016/j.bbr.2015.06.025]
Abstract
Detecting, localizing, and selectively attending to a particular sound source of interest in complex auditory scenes composed of multiple competing sources is a remarkable capacity of the human auditory system. The neural basis of this so-called "cocktail-party effect" has remained largely unknown. Here, we studied the cortical network engaged in solving the "cocktail-party" problem, using event-related potentials (ERPs) in combination with two tasks demanding horizontal localization of a naturalistic target sound presented either in silence or in the presence of multiple competing sound sources. Presentation of multiple sound sources, as compared to single sources, induced an increased P1 amplitude, a reduction in N1, and a strong N2 component, resulting in a pronounced negativity in the ERP difference waveform (N2d) around 260 ms after stimulus onset. About 100 ms later, the anterior contralateral N2 subcomponent (N2ac) occurred in the multiple-sources condition, as computed from the amplitude difference for targets in the left minus right hemispaces. Cortical source analyses of the ERP modulation, resulting from the contrast of multiple vs. single sources, generally revealed an initial enhancement of electrical activity in right temporo-parietal areas, including auditory cortex, by multiple sources (at P1) that is followed by a reduction, with the primary sources shifting from right inferior parietal lobule (at N1) to left dorso-frontal cortex (at N2d). Thus, cocktail-party listening, as compared to single-source localization, appears to be based on a complex chronology of successive electrical activities within a specific cortical network involved in spatial hearing in complex situations.
Affiliation(s)
- Jörg Lewald
- Auditory Cognitive Neuroscience Laboratory, Department of Cognitive Psychology, Ruhr University Bochum, D-44780 Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Ardeystraße 67, D-44139 Dortmund, Germany
- Stephan Getzmann
- Auditory Cognitive Neuroscience Laboratory, Department of Cognitive Psychology, Ruhr University Bochum, D-44780 Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Ardeystraße 67, D-44139 Dortmund, Germany
|
92
|
Salminen NH, Takanen M, Santala O, Alku P, Pulkki V. Neural realignment of spatially separated sound components. J Acoust Soc Am 2015; 137:3356-3365. [PMID: 26093425 DOI: 10.1121/1.4921605]
Abstract
Natural auditory scenes often consist of several sound sources overlapping in time, but separated in space. Yet, location is not fully exploited in auditory grouping: spatially separated sounds can get perceptually fused into a single auditory object and this leads to difficulties in the identification and localization of concurrent sounds. Here, the brain mechanisms responsible for grouping across spatial locations were explored in magnetoencephalography (MEG) recordings. The results show that the cortical representation of a vowel spatially separated into two locations reflects the perceived location of the speech sound rather than the physical locations of the individual components. In other words, the auditory scene is neurally rearranged to bring components into spatial alignment when they were deemed to belong to the same object. This renders the original spatial information unavailable at the level of the auditory cortex and may contribute to difficulties in concurrent sound segregation.
Affiliation(s)
- Nelli H Salminen
- Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science, Aalto University School of Science, P.O. Box 12200, Aalto, FI-00076, Finland
- Marko Takanen
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, Aalto, FI-00076, Finland
- Olli Santala
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, Aalto, FI-00076, Finland
- Paavo Alku
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, Aalto, FI-00076, Finland
- Ville Pulkki
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, Aalto, FI-00076, Finland
|
93
|
Castro-Camacho W, Peñaloza-López Y, Pérez-Ruiz SJ, García-Pedroza F, Padilla-Ortiz AL, Poblano A, Villarruel-Rivas C, Romero-Díaz A, Careaga-Olvera A. Sound localization and word discrimination in reverberant environment in children with developmental dyslexia. Arq Neuropsiquiatr 2015; 73:314-20. [PMID: 25992522 DOI: 10.1590/0004-282x20150005]
Abstract
OBJECTIVE To compare whether sound localization and word discrimination in a reverberant environment differ between children with dyslexia and controls. METHOD We studied 30 children with dyslexia and 30 controls. Sound and word localization and discrimination were assessed at five angles across the left-to-right auditory field (-90°, -45°, 0°, +45°, +90°), under reverberant and non-reverberant conditions, and correct answers were compared. RESULTS Spatial localization of words in the non-reverberant test was deficient in children with dyslexia at 0° and +90°. Spatial localization in the reverberant test was altered in children with dyslexia at all angles except -90°. Word discrimination in the non-reverberant test showed poor performance in children with dyslexia at left angles. In the reverberant test, children with dyslexia exhibited deficiencies at the -45°, -90°, and +45° angles. CONCLUSION Children with dyslexia may have problems localizing sounds and discriminating words at extreme locations of the horizontal plane in classrooms with reverberation.
Affiliation(s)
- Wendy Castro-Camacho
- Laboratory of Central Auditory Alterations Research, National Institute of Rehabilitation, Mexico City, Mexico
- Yolanda Peñaloza-López
- Laboratory of Central Auditory Alterations Research, National Institute of Rehabilitation, Mexico City, Mexico
- Santiago J Pérez-Ruiz
- Center of Applied Sciences and Technological Development, National University of Mexico, Mexico City, Mexico
- Felipe García-Pedroza
- Department of Familial Medicine, School of Medicine, National University of Mexico, Mexico City, Mexico
- Ana L Padilla-Ortiz
- Center of Applied Sciences and Technological Development, National University of Mexico, Mexico City, Mexico
- Adrián Poblano
- Laboratory of Central Auditory Alterations Research, National Institute of Rehabilitation, Mexico City, Mexico
- Alfredo Romero-Díaz
- Laboratory of Central Auditory Alterations Research, National Institute of Rehabilitation, Mexico City, Mexico
- Aidé Careaga-Olvera
- Department of Psychology, National Institute of Rehabilitation, Mexico City, Mexico
|
94
|
Musso M, Weiller C, Horn A, Glauche V, Umarova R, Hennig J, Schneider A, Rijntjes M. A single dual-stream framework for syntactic computations in music and language. Neuroimage 2015; 117:267-83. [PMID: 25998957 DOI: 10.1016/j.neuroimage.2015.05.020]
Abstract
This study is the first to compare in the same subjects the specific spatial distribution and the functional and anatomical connectivity of the neuronal resources that activate and integrate syntactic representations during music and language processing. Combining functional magnetic resonance imaging with functional connectivity and diffusion tensor imaging-based probabilistic tractography, we examined the brain network involved in the recognition and integration of words and chords that were not hierarchically related to the preceding syntax; that is, those deviating from the universal principles of grammar and tonal relatedness. This kind of syntactic processing in both domains was found to rely on a shared network in the left hemisphere centered on the inferior part of the inferior frontal gyrus (IFG), including pars opercularis and pars triangularis, and on dorsal and ventral long association tracts connecting this brain area with temporo-parietal regions. Language processing utilized some adjacent left hemispheric IFG and middle temporal regions more than music processing, and music processing also involved right hemisphere regions not activated in language processing. Our data indicate that a dual-stream system with dorsal and ventral long association tracts centered on a functionally and structurally highly differentiated left IFG is pivotal for domain-general syntactic competence over a broad range of elements including words and chords.
Affiliation(s)
- Mariacristina Musso
- Freiburg Brain Imaging, University Hospital Freiburg, Germany; Department of Neurology, University Hospital Freiburg, Germany
- Cornelius Weiller
- Freiburg Brain Imaging, University Hospital Freiburg, Germany; Department of Neurology, University Hospital Freiburg, Germany
- Andreas Horn
- Freiburg Brain Imaging, University Hospital Freiburg, Germany; Department of Neurology, University Hospital Freiburg, Germany
- Volkmar Glauche
- Freiburg Brain Imaging, University Hospital Freiburg, Germany; Department of Neurology, University Hospital Freiburg, Germany
- Roza Umarova
- Freiburg Brain Imaging, University Hospital Freiburg, Germany; Department of Neurology, University Hospital Freiburg, Germany
- Jürgen Hennig
- Department of Radiology, Medical Physics, University Hospital Freiburg, Germany
- Michel Rijntjes
- Freiburg Brain Imaging, University Hospital Freiburg, Germany; Department of Neurology, University Hospital Freiburg, Germany
|
95
|
Clarke S, Bindschaedler C, Crottaz-Herbette S. Impact of Cognitive Neuroscience on Stroke Rehabilitation. Stroke 2015; 46:1408-13. [DOI: 10.1161/strokeaha.115.007435]
Affiliation(s)
- Stephanie Clarke
- From the Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Lausanne, Switzerland
- Claire Bindschaedler
- From the Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Lausanne, Switzerland
- Sonia Crottaz-Herbette
- From the Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Lausanne, Switzerland
|
96
|
Ortiz-Rios M, Kuśmierek P, DeWitt I, Archakov D, Azevedo FAC, Sams M, Jääskeläinen IP, Keliris GA, Rauschecker JP. Functional MRI of the vocalization-processing network in the macaque brain. Front Neurosci 2015; 9:113. [PMID: 25883546 PMCID: PMC4381638 DOI: 10.3389/fnins.2015.00113]
Abstract
Using functional magnetic resonance imaging in awake behaving monkeys we investigated how species-specific vocalizations are represented in auditory and auditory-related regions of the macaque brain. We found clusters of active voxels along the ascending auditory pathway that responded to various types of complex sounds: inferior colliculus (IC), medial geniculate nucleus (MGN), auditory core, belt, and parabelt cortex, and other parts of the superior temporal gyrus (STG) and sulcus (STS). Regions sensitive to monkey calls were most prevalent in the anterior STG, but some clusters were also found in frontal and parietal cortex on the basis of comparisons between responses to calls and environmental sounds. Surprisingly, we found that spectrotemporal control sounds derived from the monkey calls (“scrambled calls”) also activated the parietal and frontal regions. Taken together, our results demonstrate that species-specific vocalizations in rhesus monkeys activate preferentially the auditory ventral stream, and in particular areas of the antero-lateral belt and parabelt.
Affiliation(s)
- Michael Ortiz-Rios
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA; Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; IMPRS for Cognitive and Systems Neuroscience, Tübingen, Germany
- Paweł Kuśmierek
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
- Iain DeWitt
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
- Denis Archakov
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA; Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Aalto, Finland
- Frederico A C Azevedo
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; IMPRS for Cognitive and Systems Neuroscience, Tübingen, Germany
- Mikko Sams
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Aalto, Finland
- Iiro P Jääskeläinen
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Aalto, Finland
- Georgios A Keliris
- Department of Physiology of Cognitive Processes, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Bernstein Centre for Computational Neuroscience, Tübingen, Germany; Department of Biomedical Sciences, University of Antwerp, Wilrijk, Belgium
- Josef P Rauschecker
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA; Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Aalto, Finland; Institute for Advanced Study and Department of Neurology, Klinikum Rechts der Isar, Technische Universität München, München, Germany
|
97
|
Fernandino L, Binder JR, Desai RH, Pendl SL, Humphries CJ, Gross WL, Conant LL, Seidenberg MS. Concept Representation Reflects Multimodal Abstraction: A Framework for Embodied Semantics. Cereb Cortex 2015; 26:2018-34. [PMID: 25750259 DOI: 10.1093/cercor/bhv020]
Abstract
Recent research indicates that sensory and motor cortical areas play a significant role in the neural representation of concepts. However, little is known about the overall architecture of this representational system, including the role played by higher-level areas that integrate different types of sensory and motor information. The present study addressed this issue by investigating the simultaneous contributions of multiple sensory-motor modalities to semantic word processing. With a multivariate fMRI design, we examined activation associated with five sensory-motor attributes (color, shape, visual motion, sound, and manipulation) for 900 words. Regions responsive to each attribute were identified using independent ratings of the attributes' relevance to the meaning of each word. The results indicate that these aspects of conceptual knowledge are encoded in multimodal and higher-level unimodal areas involved in processing the corresponding types of information during perception and action, in agreement with embodied theories of semantics. They also reveal a hierarchical system of abstracted sensory-motor representations incorporating a major division between object-interaction and object-perception processes.
Affiliation(s)
- Rutvik H Desai
- Department of Psychology, University of South Carolina, Columbia, SC, USA
- William L Gross
- Department of Anesthesiology, Medical College of Wisconsin, Milwaukee, WI, USA
98
Golden HL, Agustus JL, Goll JC, Downey LE, Mummery CJ, Schott JM, Crutch SJ, Warren JD. Functional neuroanatomy of auditory scene analysis in Alzheimer's disease. Neuroimage Clin 2015; 7:699-708. [PMID: 26029629] [PMCID: PMC4446369] [DOI: 10.1016/j.nicl.2015.02.019]
Abstract
Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but is vulnerable to the neurodegenerative pathology of Alzheimer's disease (AD). Here we assessed the functional neuroanatomy of auditory scene analysis in AD using the well-known 'cocktail party effect' as a model paradigm, whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory 'foreground' and 'background'. Patients with typical amnestic AD (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3 T MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with, or superimposed on, multi-talker babble, together with spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with their spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (the interaction of own-name identification with auditory object segregation) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced relative to the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in AD that may constitute a novel computational signature of this neurodegenerative pathology.
Affiliation(s)
- Hannah L Golden
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- Jennifer L Agustus
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- Johanna C Goll
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- Laura E Downey
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- Catherine J Mummery
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- Jonathan M Schott
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- Sebastian J Crutch
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- Jason D Warren
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
99
Spagna A, Mackie MA, Fan J. Supramodal executive control of attention. Front Psychol 2015; 6:65. [PMID: 25759674] [PMCID: PMC4338659] [DOI: 10.3389/fpsyg.2015.00065]
Abstract
The human attentional system can be subdivided into three functional networks of alerting, orienting, and executive control. Although these networks have been extensively studied in the visuospatial modality, whether the same mechanisms are deployed across different sensory modalities remains unclear. In this study we used the attention network test for the visuospatial modality, in addition to two auditory variants with spatial and frequency manipulations, to examine cross-modal correlations between network functions. Results showed that among the visual and auditory tasks, the effects of executive control, but not the effects of alerting and orienting, were significantly correlated. These findings suggest that while alerting and orienting functions rely more upon modality-specific processes, the executive control of attention coordinates complex behavior via supramodal mechanisms.
Affiliation(s)
- Alfredo Spagna
- Department of Psychology, Queens College, City University of New York, New York, NY, USA
- Melissa-Ann Mackie
- Department of Psychology, Queens College, City University of New York, New York, NY, USA; The Graduate Center, City University of New York, New York, NY, USA
- Jin Fan
- Department of Psychology, Queens College, City University of New York, New York, NY, USA; The Graduate Center, City University of New York, New York, NY, USA; Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA; Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
100
Golden HL, Nicholas JM, Yong KXX, Downey LE, Schott JM, Mummery CJ, Crutch SJ, Warren JD. Auditory spatial processing in Alzheimer's disease. Brain 2015; 138:189-202. [PMID: 25468732] [PMCID: PMC4285196] [DOI: 10.1093/brain/awu337]
Abstract
The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer's disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer's disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12), in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination, and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in the detection of auditory motion and in stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and for the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer's disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer's disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer's disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer's disease syndromic spectrum.
Affiliation(s)
- Hannah L Golden
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, WC1N 3BG, UK
- Jennifer M Nicholas
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, WC1N 3BG, UK; Department of Medical Statistics, London School of Hygiene and Tropical Medicine, London, WC1E 7HT, UK
- Keir X X Yong
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, WC1N 3BG, UK
- Laura E Downey
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, WC1N 3BG, UK
- Jonathan M Schott
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, WC1N 3BG, UK
- Catherine J Mummery
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, WC1N 3BG, UK
- Sebastian J Crutch
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, WC1N 3BG, UK
- Jason D Warren
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, WC1N 3BG, UK