1
Baths V, Jartarkar M, Sood S, Lewis AG, Ostarek M, Huettig F. Testing the involvement of low-level visual representations during spoken word processing with non-Western students and meditators practicing Sudarshan Kriya Yoga. Brain Res 2024:148993. PMID: 38729334. DOI: 10.1016/j.brainres.2024.148993.
Abstract
Previous studies, using the Continuous Flash Suppression (CFS) paradigm, observed that (Western) university students are better able to detect otherwise invisible pictures of objects when they are presented with the corresponding spoken word shortly before the picture appears. Here we attempted to replicate this effect with non-Western university students in Goa (India). A second aim was to explore the performance of (non-Western) meditators practicing Sudarshan Kriya Yoga in Goa in the same task. Some previous literature suggests that meditators may excel in some tasks that tap visual attention, for example by exercising better endogenous and exogenous control of visual awareness than non-meditators. The present study replicated the finding that congruent spoken cue words lead to significantly higher detection sensitivity than incongruent cue words in non-Western university students. Our exploratory meditator group also showed this detection effect but both frequentist and Bayesian analyses suggest that the practice of meditation did not modulate it. Overall, our results provide further support for the notion that spoken words can activate low-level category-specific visual features that boost the basic capacity to detect the presence of a visual stimulus that has those features. Further research is required to conclusively test whether meditation can modulate visual detection abilities in CFS and similar tasks.
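The detection-sensitivity measure central to this CFS paradigm is typically d' from signal detection theory. As a minimal sketch of how per-condition sensitivity could be computed (the trial counts below are invented for illustration, not taken from the study):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Detection sensitivity d' = z(hit rate) - z(false-alarm rate),
    with a log-linear correction so rates of 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Illustrative counts only: congruent spoken cues yield more hits at the
# same false-alarm rate, hence higher detection sensitivity.
congruent = d_prime(hits=70, misses=30, false_alarms=20, correct_rejections=80)
incongruent = d_prime(hits=55, misses=45, false_alarms=20, correct_rejections=80)
```

Comparing `congruent` and `incongruent` d' per participant, with frequentist and Bayesian tests, is the kind of analysis the abstract describes.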
Affiliation(s)
- Veeky Baths
- Cognitive Neuroscience Lab, BITS Pilani, K K Birla Goa Campus, Goa, India
- Mayur Jartarkar
- Cognitive Neuroscience Lab, BITS Pilani, K K Birla Goa Campus, Goa, India
- Shagun Sood
- Cognitive Neuroscience Lab, BITS Pilani, K K Birla Goa Campus, Goa, India
- Ashley G Lewis
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- Markus Ostarek
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Falk Huettig
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; University of Kaiserslautern-Landau, Center for Cognitive Science, Kaiserslautern, Germany; University of Lisbon, Faculty of Psychology, Lisbon, Portugal
2
Giglio L, Ostarek M, Sharoh D, Hagoort P. Diverging neural dynamics for syntactic structure building in naturalistic speaking and listening. Proc Natl Acad Sci U S A 2024; 121:e2310766121. PMID: 38442171. PMCID: PMC10945772. DOI: 10.1073/pnas.2310766121.
Abstract
The neural correlates of sentence production are typically studied using task paradigms that differ considerably from the experience of speaking outside of an experimental setting. In this fMRI study, we aimed to gain a better understanding of syntactic processing in spontaneous production versus naturalistic comprehension in three regions of interest (BA44, BA45, and left posterior middle temporal gyrus). A group of participants (n = 16) was asked to speak about the events of an episode of a TV series in the scanner. Another group of participants (n = 36) listened to the spoken recall of a participant from the first group. To model syntactic processing, we extracted word-by-word metrics of phrase-structure building with a top-down and a bottom-up parser that make different hypotheses about the timing of structure building. While the top-down parser anticipates syntactic structure, sometimes before it is obvious to the listener, the bottom-up parser builds syntactic structure in an integratory way after all of the evidence has been presented. In comprehension, neural activity was found to be better modeled by the bottom-up parser, while in production, it was better modeled by the top-down parser. We additionally modeled structure building in production with two strategies that were developed here to make different predictions about the incrementality of structure building during speaking. We found evidence for highly incremental and anticipatory structure building in production, which was confirmed by a converging analysis of the pausing patterns in speech. Overall, this study shows the feasibility of studying the neural dynamics of spontaneous language production.
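The word-by-word structure-building metrics described above can be illustrated with a toy sketch (my own simplification, not the formal parsers used in the study): given a bracketed parse, a top-down count logs how many phrase nodes open just before each word, while a bottom-up count logs how many close just after it.

```python
import re

def structure_counts(bracketed):
    """Word-by-word node counts from a bracketed parse string.
    top_down[i]:  nonterminals opened right before word i (anticipatory).
    bottom_up[i]: nonterminals closed right after word i (integratory)."""
    tokens = re.findall(r"[()]|[^()\s]+", bracketed)
    words, top_down, bottom_up = [], [], []
    opened, prev = 0, None
    for tok in tokens:
        if tok == "(":
            opened += 1
        elif tok == ")":
            if bottom_up:
                bottom_up[-1] += 1
        elif prev != "(":  # a terminal word (nonterminal labels follow "(")
            words.append(tok)
            top_down.append(opened)
            bottom_up.append(0)
            opened = 0
        prev = tok
    return words, top_down, bottom_up

words, td, bu = structure_counts("(S (NP the dog) (VP chased (NP the cat)))")
# td front-loads structure at phrase onsets; bu back-loads it at phrase ends.
```

On this toy sentence the top-down counts peak at phrase onsets while the bottom-up counts peak at phrase ends, which is the timing contrast the regressors in the study exploit.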
Affiliation(s)
- Laura Giglio
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, 6525 EN Nijmegen, The Netherlands
- Markus Ostarek
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
- Daniel Sharoh
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, 6525 EN Nijmegen, The Netherlands
- Peter Hagoort
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, 6525 EN Nijmegen, The Netherlands
3
Montero-Melis G, van Paridon J, Ostarek M, Bylund E. No evidence for embodiment: The motor system is not needed to keep action verbs in working memory. Cortex 2022; 150:108-125. DOI: 10.1016/j.cortex.2022.02.006.
4
Giglio L, Ostarek M, Weber K, Hagoort P. Commonalities and Asymmetries in the Neurobiological Infrastructure for Language Production and Comprehension. Cereb Cortex 2021; 32:1405-1418. PMID: 34491301. PMCID: PMC8971077. DOI: 10.1093/cercor/bhab287.
Abstract
The neurobiology of sentence production has been largely understudied compared to the neurobiology of sentence comprehension, due to difficulties with experimental control and motion-related artifacts in neuroimaging. We studied the neural response to constituents of increasing size and specifically focused on the similarities and differences in the production and comprehension of the same stimuli. Participants had to either produce or listen to stimuli in a gradient of constituent size based on a visual prompt. Larger constituent sizes engaged the left inferior frontal gyrus (LIFG) and middle temporal gyrus (LMTG) extending to inferior parietal areas in both production and comprehension, confirming that the neural resources for syntactic encoding and decoding are largely overlapping. An ROI analysis in LIFG and LMTG also showed that production elicited larger responses to constituent size than comprehension and that the LMTG was more engaged in comprehension than production, while the LIFG was more engaged in production than comprehension. Finally, increasing constituent size was characterized by later BOLD peaks in comprehension but earlier peaks in production. These results show that syntactic encoding and parsing engage overlapping areas, but there are asymmetries in the engagement of the language network due to the specific requirements of production and comprehension.
Affiliation(s)
- Laura Giglio
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 AJ Nijmegen, The Netherlands
- Markus Ostarek
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 AJ Nijmegen, The Netherlands
- Kirsten Weber
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 AJ Nijmegen, The Netherlands
- Peter Hagoort
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 AJ Nijmegen, The Netherlands
5
Abstract
What are the mental processes that allow us to understand the meaning of words? A large body of evidence suggests that when we process speech, we engage a process of perceptual simulation whereby sensorimotor states are activated as a source of semantic information. But does the same process take place when words are expressed with the hands and perceived through the eyes? To date, it is not known whether perceptual simulation is also observed in sign languages, the manual-visual languages of deaf communities. Continuous flash suppression is a method that addresses this question by measuring the effect of language on detection sensitivity to images that are suppressed from awareness. In spoken languages, it has been reported that listening to a word (e.g., "bottle") activates visual features of an object (e.g., the shape of a bottle), and this in turn facilitates image detection. An interesting but untested question is whether the same process takes place when deaf signers see signs. We found that processing signs boosted the detection of congruent images, making otherwise invisible pictures visible. A boost of visual processing was observed only for signers but not for hearing nonsigners, suggesting that the penetration of the visual system through signs requires a fully fledged manual language. Iconicity did not modulate the effect of signs on detection, neither in signers nor in hearing nonsigners. This suggests that visual simulation during language processing occurs regardless of language modality (sign vs. speech) or iconicity, pointing to a foundational role of simulation for language comprehension. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
6
van Paridon J, Ostarek M, Arunkumar M, Huettig F. Does Neuronal Recycling Result in Destructive Competition? The Influence of Learning to Read on the Recognition of Faces. Psychol Sci 2021; 32:459-465. PMID: 33631074. DOI: 10.1177/0956797620971652.
Abstract
Written language, a human cultural invention, is far too recent a development for dedicated neural infrastructure to have evolved in its service. Newly acquired cultural skills, such as reading, thus recycle evolutionarily older circuits that originally evolved for different, but similar, functions (e.g., visual object recognition). The destructive-competition hypothesis predicts that this neuronal recycling has detrimental behavioral effects on the cognitive functions for which a cortical network originally evolved. In a study with 97 literate, low-literate, and illiterate participants from the same socioeconomic background, we found that even after adjusting for cognitive ability and test-taking familiarity, learning to read was associated with an increase, rather than a decrease, in object-recognition abilities. These results are incompatible with the claim that neuronal recycling results in destructive competition and are consistent with the possibility that learning to read instead fine-tunes general object-recognition mechanisms, a hypothesis that needs further neuroscientific investigation.
Affiliation(s)
- Jeroen van Paridon
- Psychology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Markus Ostarek
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Mrudula Arunkumar
- Psychology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Falk Huettig
- Psychology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Centre for Language Studies, Radboud University
7
Montero-Melis G, Isaksson P, van Paridon J, Ostarek M. Does using a foreign language reduce mental imagery? Cognition 2020; 196:104134. DOI: 10.1016/j.cognition.2019.104134.
8
Ostarek M, Ishag A, Joosen D, Huettig F. Saccade trajectories reveal dynamic interactions of semantic and spatial information during the processing of implicitly spatial words. J Exp Psychol Learn Mem Cogn 2018; 44:1658-1670. DOI: 10.1037/xlm0000536.
9
Ostarek M, Joosen D, Ishag A, de Nijs M, Huettig F. Are visual processes causally involved in "perceptual simulation" effects in the sentence-picture verification task? Cognition 2018; 182:84-94. PMID: 30219635. DOI: 10.1016/j.cognition.2018.08.017.
Abstract
Many studies have shown that sentences implying an object to have a certain shape produce a robust reaction-time advantage for shape-matching pictures in the sentence-picture verification task. Typically, this finding has been interpreted as evidence for perceptual simulation, i.e., that access to implicit shape information involves the activation of modality-specific visual processes. It follows from this proposal that disrupting visual processing during sentence comprehension should interfere with perceptual simulation and obliterate the match effect. Here we directly test this hypothesis. Participants listened to sentences while seeing either visual noise that was previously shown to strongly interfere with basic visual processing or a blank screen. Experiments 1 and 2 replicated the match effect, but, crucially, visual noise did not modulate it. However, when an interference technique was used that targeted high-level semantic processing (Experiment 3), the match effect vanished. Visual noise specifically targeting high-level visual processes (Experiment 4) had only a minimal effect on the match effect. We conclude that the shape match effect in the sentence-picture verification paradigm is unlikely to rely on perceptual simulation.
Affiliation(s)
- Markus Ostarek
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; International Max Planck Research School for Language Sciences, The Netherlands
- Dennis Joosen
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Adil Ishag
- International University of Africa, Khartoum, Sudan
- Monique de Nijs
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Falk Huettig
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
10
Popov V, Ostarek M, Tenison C. Practices and pitfalls in inferring neural representations. Neuroimage 2018; 174:340-351. PMID: 29578030. DOI: 10.1016/j.neuroimage.2018.03.041.
Abstract
A key challenge for cognitive neuroscience is deciphering the representational schemes of the brain. Stimulus-feature-based encoding models are becoming increasingly popular for inferring the dimensions of neural representational spaces from stimulus-feature spaces. We argue that such inferences are not always valid because successful prediction can occur even if the two representational spaces use different, but correlated, representational schemes. We support this claim with three simulations in which we achieved high prediction accuracy despite systematic differences in the geometries and dimensions of the underlying representations. Detailed analysis of the encoding models' predictions showed systematic deviations from ground-truth, indicating that high prediction accuracy is insufficient for making representational inferences. This fallacy applies to the prediction of actual neural patterns from stimulus-feature spaces and we urge caution in inferring the nature of the neural code from such methods. We discuss ways to overcome these inferential limitations, including model comparison, absolute model performance, visualization techniques and attentional modulation.
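The fallacy the abstract describes is easy to reproduce in a toy simulation (mine, in the spirit of the authors' argument; the sizes, noise levels, and ridge penalty are arbitrary choices): voxel responses are generated from a feature space that is merely correlated with the model's stimulus features, yet an encoding model fit from the "wrong" features still predicts held-out responses well.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_feat, n_vox = 200, 5, 30

A = rng.normal(size=(n_stim, n_feat))   # the model's stimulus features
B = A + rng.normal(size=A.shape)        # the "brain's" features: correlated with A (r ~ .7), not identical
W = rng.normal(size=(n_feat, n_vox))
Y = B @ W + 0.1 * rng.normal(size=(n_stim, n_vox))  # simulated voxel responses

# Ridge-regression encoding model fit from the mismatched feature space A.
train, test = slice(0, 150), slice(150, 200)
lam = 1.0
coef = np.linalg.solve(A[train].T @ A[train] + lam * np.eye(n_feat),
                       A[train].T @ Y[train])
pred = A[test] @ coef

# Mean voxelwise prediction correlation stays high despite the mismatch.
r = np.mean([np.corrcoef(pred[:, v], Y[test][:, v])[0, 1]
             for v in range(n_vox)])
```

High `r` here does not license the inference that the brain "uses" feature space A, which is exactly the paper's point.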
Affiliation(s)
- Vencislav Popov
- Department of Psychology, Carnegie Mellon University, 5000 Forbes Avenue, Baker Hall, 15289, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, 4400 Fifth Ave, 15213, Pittsburgh, PA, USA
- Markus Ostarek
- Max Planck Institute for Psycholinguistics, PO Box 310, 6500 AH Nijmegen, The Netherlands
- Caitlin Tenison
- Department of Psychology, Carnegie Mellon University, 5000 Forbes Avenue, Baker Hall, 15289, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, 4400 Fifth Ave, 15213, Pittsburgh, PA, USA
11
Abstract
It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual representations contribute functionally to concrete word comprehension using an interference paradigm. We interfered with basic visual processing while participants performed a concreteness task (Experiment 1), a lexical-decision task (Experiment 2), and a word class judgment task (Experiment 3). We found that visual noise interfered more with concrete versus abstract word processing, but only when the task required visual information to be accessed. This suggests that basic visual processes can be causally involved in language comprehension, but that their recruitment is not automatic and rather depends on the type of information that is required in a given task situation.
12
Ostarek M, Huettig F. Spoken words can make the invisible visible: Testing the involvement of low-level visual representations in spoken word processing. J Exp Psychol Hum Percept Perform 2017; 43:499-508. PMID: 28080110. DOI: 10.1037/xhp0000313.
Abstract
The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation that contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. In the present study, we investigated whether participants can detect otherwise invisible pictures of objects when they are presented with the corresponding spoken word shortly before the picture appears. Our results showed facilitated detection for congruent ("bottle" → picture of a bottle) versus incongruent ("bottle" → picture of a banana) trials. A second experiment investigated the time-course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that it arises as soon as 200-400 ms after word onset and decays at 600 ms after word onset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, that is, what we see. More generally, our findings fit best with the notion that spoken words activate modality-specific visual representations that are low level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens but also for generalizing to novel exemplars one has never seen before.
13
Abstract
Previous research has shown that processing words with an up/down association (e.g., bird, foot) can influence the subsequent identification of visual targets in congruent location (at the top/bottom of the screen). However, as facilitation and interference were found under similar conditions, the nature of the underlying mechanisms remained unclear. We propose that word comprehension relies on the perceptual simulation of a prototypical event involving the entity denoted by a word in order to provide a general account of the different findings. In 3 experiments, participants had to discriminate between 2 target pictures appearing at the top or the bottom of the screen by pressing the left versus right button. Immediately before the targets appeared, they saw an up/down word belonging to the target's event, an up/down word unrelated to the target, or a spatially neutral control word. Prime words belonging to the target's event facilitated identification of targets at a stimulus onset asynchrony (SOA) of 250 ms (Experiment 1), but only when presented in the vertical location where they are typically seen, indicating that targets were integrated in the simulations activated by the prime words. Moreover, at the same SOA, there was a robust facilitation effect for targets appearing in their typical location regardless of the prime type. However, when words were presented for 100 ms (Experiment 2) or 800 ms (Experiment 3), only a location-nonspecific priming effect was found, suggesting that the visual system was not activated. Implications for theories of semantic processing are discussed.
Affiliation(s)
- Markus Ostarek
- Experimental Psychology Department, Division of Psychology and Language Sciences, University College London
- Gabriella Vigliocco
- Experimental Psychology Department, Division of Psychology and Language Sciences, University College London
|
14
|
Meekings S, Boebinger D, Evans S, Lima CF, Chen S, Ostarek M, Scott SK. Do We Know What We're Saying? The Roles of Attention and Sensory Information During Speech Production. Psychol Sci 2015; 26:1975-7. [PMID: 26464309 DOI: 10.1177/0956797614563766] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2014] [Accepted: 11/20/2014] [Indexed: 11/16/2022] Open
Affiliation(s)
- Sophie Meekings
- Institute of Cognitive Neuroscience, University College London
- Dana Boebinger
- Institute of Cognitive Neuroscience, University College London
- Samuel Evans
- Institute of Cognitive Neuroscience, University College London
- César F Lima
- Institute of Cognitive Neuroscience, University College London
- Sinead Chen
- Institute of Cognitive Neuroscience, University College London
- Markus Ostarek
- Institute of Cognitive Neuroscience, University College London
- Sophie K Scott
- Institute of Cognitive Neuroscience, University College London
15
Lima CF, Lavan N, Evans S, Agnew Z, Halpern AR, Shanmugalingam P, Meekings S, Boebinger D, Ostarek M, McGettigan C, Warren JE, Scott SK. Feel the Noise: Relating Individual Differences in Auditory Imagery to the Structure and Function of Sensorimotor Systems. Cereb Cortex 2015; 25:4638-50. PMID: 26092220. PMCID: PMC4816805. DOI: 10.1093/cercor/bhv134.
Abstract
Humans can generate mental auditory images of voices or songs, sometimes perceiving them almost as vividly as perceptual experiences. The functional networks supporting auditory imagery have been described, but less is known about the systems associated with interindividual differences in auditory imagery. Combining voxel-based morphometry and fMRI, we examined the structural basis of interindividual differences in how auditory images are subjectively perceived, and explored associations between auditory imagery, sensory-based processing, and visual imagery. Vividness of auditory imagery correlated with gray matter volume in the supplementary motor area (SMA), parietal cortex, medial superior frontal gyrus, and middle frontal gyrus. An analysis of functional responses to different types of human vocalizations revealed that the SMA and parietal sites that predict imagery are also modulated by sound type. Using representational similarity analysis, we found that higher representational specificity of heard sounds in SMA predicts vividness of imagery, indicating a mechanistic link between sensory- and imagery-based processing in sensorimotor cortex. Vividness of imagery in the visual domain also correlated with SMA structure, and with auditory imagery scores. Altogether, these findings provide evidence for a signature of imagery in brain structure, and highlight a common role of perceptual–motor interactions for processing heard and internally generated auditory information.
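The representational similarity analysis mentioned above compares second-order geometry (condition-by-condition dissimilarity structure) rather than raw patterns. A minimal hedged sketch with synthetic data (real analyses add cross-validation, noise ceilings, and permutation tests):

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns of every pair of conditions."""
    return 1.0 - np.corrcoef(patterns)

def rsa(patterns_a, patterns_b):
    """Spearman correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices(patterns_a.shape[0], k=1)
    a, b = rdm(patterns_a)[iu], rdm(patterns_b)[iu]
    rank = lambda x: np.argsort(np.argsort(x)).astype(float)
    return np.corrcoef(rank(a), rank(b))[0, 1]

rng = np.random.default_rng(1)
model = rng.normal(size=(8, 100))                    # 8 conditions x 100 units
brain = model + 0.5 * rng.normal(size=model.shape)   # noisy "neural" patterns
similarity = rsa(brain, model)
```

Higher representational specificity, in this framing, corresponds to cleaner condition-specific patterns and hence a higher RDM correlation with the reference model.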
Affiliation(s)
- César F Lima
- Institute of Cognitive Neuroscience, University College London; Center for Psychology, University of Porto, Porto, Portugal
- Nadine Lavan
- Institute of Cognitive Neuroscience, University College London; Department of Psychology, Royal Holloway, University of London, London, UK
- Zarinah Agnew
- Institute of Cognitive Neuroscience, University College London; Department of Otolaryngology, University of California, San Francisco, USA
- Carolyn McGettigan
- Institute of Cognitive Neuroscience, University College London; Department of Psychology, Royal Holloway, University of London, London, UK
- Jane E Warren
- Faculty of Brain Sciences, University College London, London, UK