51
Roux-Sibilon A, Trouilloud A, Kauffmann L, Guyader N, Mermillod M, Peyrin C. Influence of peripheral vision on object categorization in central vision. J Vis 2019; 19:7. [DOI: 10.1167/19.14.7]
Affiliation(s)
- Alexia Roux-Sibilon
- University Grenoble Alpes, University of Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Audrey Trouilloud
- University Grenoble Alpes, University of Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Louise Kauffmann
- University Grenoble Alpes, University of Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- University Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Nathalie Guyader
- University Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Martial Mermillod
- University Grenoble Alpes, University of Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
- Carole Peyrin
- University Grenoble Alpes, University of Savoie Mont Blanc, CNRS, LPNC, Grenoble, France
52
Cermeño-Aínsa S. The cognitive penetrability of perception: A blocked debate and a tentative solution. Conscious Cogn 2019; 77:102838. [PMID: 31678779] [DOI: 10.1016/j.concog.2019.102838]
Abstract
Despite the extensive body of psychological findings suggesting that cognition influences perception, the debate between defenders and detractors of the cognitive penetrability of perception persists. While detractors demand greater rigor in psychological experiments, proponents hold that empirical studies already show that cognitive penetration occurs. These considerations have led some theorists to propose that the debate has reached a dead end. The question of where perception ends and cognition begins is, I argue, one reason why the debate has stalled. Another is the inability of psychological studies to support uncontroversial interpretations of their results. Turning to other kinds of empirical evidence is therefore required to clarify the debate. In this paper, I explain where the debate is blocked and suggest that neuroscientific evidence, together with the predictive coding account, may tip the discussion in favor of the penetrability thesis.
Affiliation(s)
- Sergio Cermeño-Aínsa
- Departamento de Filosofía, Facultad de Filosofía y Letras, 08193 Cerdanyola del Vallés, Spain.
53
Scene Representations Conveyed by Cortical Feedback to Early Visual Cortex Can Be Described by Line Drawings. J Neurosci 2019; 39:9410-9423. [PMID: 31611306] [PMCID: PMC6867807] [DOI: 10.1523/jneurosci.0852-19.2019]
Abstract
Human behavior is dependent on the ability of neuronal circuits to predict the outside world. Neuronal circuits in early visual areas make these predictions based on internal models that are delivered via non-feedforward connections. Despite our extensive knowledge of the feedforward sensory features that drive cortical neurons, we have a limited grasp on the structure of the brain's internal models. Progress in neuroscience therefore depends on our ability to replicate the models that the brain creates internally. Here we record human fMRI data while presenting partially occluded visual scenes. Visual occlusion allows us to experimentally control sensory input to subregions of visual cortex while internal models continue to influence activity in these regions. Because the observed activity is dependent on internal models, but not on sensory input, we have the opportunity to map visual features conveyed by the brain's internal models. Our results show that activity related to internal models in early visual cortex is more closely related to scene-specific features than to categorical or depth features. We further demonstrate that behavioral line drawings provide a good description of internal model structure representing scene-specific features. These findings extend our understanding of internal models, showing that line drawings provide a window into our brains' internal models of vision. SIGNIFICANCE STATEMENT: We find that fMRI activity patterns corresponding to occluded visual information in early visual cortex fill in scene-specific features. Line drawings of the missing scene information correlate with our recorded activity patterns, and thus with internal models. Despite our extensive knowledge of the sensory features that drive cortical neurons, we have a limited grasp on the structure of our brains' internal models.
These results therefore constitute an advance to the field of neuroscience by extending our knowledge about the models that our brains construct to efficiently represent and predict the world. Moreover, they link a behavioral measure to these internal models, which play an active role in many components of human behavior, including visual predictions, action planning, and decision making.
54
Towards a Unified View on Pathways and Functions of Neural Recurrent Processing. Trends Neurosci 2019; 42:589-603. [PMID: 31399289] [DOI: 10.1016/j.tins.2019.07.005]
Abstract
There are three neural feedback pathways to the primary visual cortex (V1): corticocortical, pulvinocortical, and cholinergic. What are the respective functions of these three projections? Possible functions range from contextual modulation of stimulus processing and feedback of high-level information to predictive processing (PP). How are these functions subserved by different pathways and can they be integrated into an overarching theoretical framework? We propose that corticocortical and pulvinocortical connections are involved in all three functions, whereas the role of cholinergic projections is limited by their slow response to stimuli. PP provides a broad explanatory framework under which stimulus-context modulation and high-level processing are subsumed, involving multiple feedback pathways that provide mechanisms for inferring and interpreting what sensory inputs are about.
55
Demarchi G, Sanchez G, Weisz N. Automatic and feature-specific prediction-related neural activity in the human auditory system. Nat Commun 2019; 10:3440. [PMID: 31371713] [PMCID: PMC6672009] [DOI: 10.1038/s41467-019-11440-1]
Abstract
Prior experience enables the formation of expectations of upcoming sensory events. However, in the auditory modality, it is not known whether prediction-related neural signals carry feature-specific information. Here, using magnetoencephalography (MEG), we examined whether predictions of future auditory stimuli carry tonotopically specific information. Participants passively listened to sound sequences of four carrier frequencies (tones) with a fixed presentation rate, ensuring strong temporal expectations of when the next stimulus would occur. Expectation of which frequency would occur was parametrically modulated across the sequences, and sounds were occasionally omitted. We show that increasing the regularity of the sequence boosts carrier-frequency-specific neural activity patterns during both the anticipatory and omission periods, indicating that prediction-related neural activity is indeed feature-specific. Our results illustrate that even without bottom-up input, auditory predictions can activate tonotopically specific templates. After listening to a predictable sequence of sounds, we can anticipate and predict the next sound in the sequence. Here, the authors show that during expectation of a sound, the brain generates neural activity matching that which is produced by actually hearing the same sound.
Affiliation(s)
- Gianpaolo Demarchi
- Centre for Cognitive Neuroscience and Division of Physiological Psychology, University of Salzburg, Hellbrunnerstraße 34, 5020, Salzburg, Austria.
- Gaëtan Sanchez
- Centre for Cognitive Neuroscience and Division of Physiological Psychology, University of Salzburg, Hellbrunnerstraße 34, 5020, Salzburg, Austria; Lyon Neuroscience Research Center, Brain Dynamics and Cognition Team, INSERM UMRS 1028, CNRS UMR 5292, Université Claude Bernard Lyon 1, Université de Lyon, F-69000, Lyon, France
- Nathan Weisz
- Centre for Cognitive Neuroscience and Division of Physiological Psychology, University of Salzburg, Hellbrunnerstraße 34, 5020, Salzburg, Austria
56
57
Self MW, van Kerkoerle T, Goebel R, Roelfsema PR. Benchmarking laminar fMRI: Neuronal spiking and synaptic activity during top-down and bottom-up processing in the different layers of cortex. Neuroimage 2019. [DOI: 10.1016/j.neuroimage.2017.06.045]
58
Modelling face memory reveals task-generalizable representations. Nat Hum Behav 2019; 3:817-826. [PMID: 31209368] [DOI: 10.1038/s41562-019-0625-3]
Abstract
Current cognitive theories are cast in terms of information-processing mechanisms that use mental representations [1-4]. For example, people use their mental representations to identify familiar faces under various conditions of pose, illumination and ageing, or to draw resemblance between family members. Yet, the actual information contents of these representations are rarely characterized, which hinders knowledge of the mechanisms that use them. Here, we modelled the three-dimensional representational contents of 4 faces that were familiar to 14 participants as work colleagues. The representational contents were created by reverse-correlating identity information generated on each trial with judgements of the face's similarity to the individual participant's memory of this face. In a second study, testing new participants, we demonstrated the validity of the modelled contents using everyday face tasks that generalize identity judgements to new viewpoints, age and sex. Our work highlights that such models of mental representations are critical to understanding generalization behaviour and its underlying information-processing mechanisms.
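The reverse-correlation procedure described in this abstract has a simple core: treat each trial's similarity judgement as a weight and average the randomly sampled identity information across trials. Below is a minimal sketch of that logic; the three-dimensional feature space, the rating rule, and all numbers are invented for illustration, and the authors' actual pipeline operates on full 3D face models.

```python
import random

def reverse_correlate(trials, ratings):
    """Weighted average of per-trial stimulus features, using the
    observer's similarity ratings as weights. Each trial is a vector
    of randomly sampled identity-feature values."""
    assert len(trials) == len(ratings)
    total = sum(ratings)
    n_features = len(trials[0])
    # Accumulate rating-weighted feature values, feature by feature.
    model = [0.0] * n_features
    for features, r in zip(trials, ratings):
        for i, f in enumerate(features):
            model[i] += r * f
    return [m / total for m in model]

# Toy demo: a hidden "remembered face" drives the similarity ratings.
random.seed(0)
target = [0.8, -0.3, 0.5]  # hypothetical feature values of the memory
trials = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(5000)]
# Rating = how well a random face matches the memory (higher = closer).
ratings = [max(0.0, 1 - sum((f - t) ** 2 for f, t in zip(fs, target)) / 4)
           for fs in trials]
estimate = reverse_correlate(trials, ratings)
```

With enough trials, the rating-weighted average drifts toward the remembered target on every feature dimension, which is the sense in which reverse correlation recovers the contents of the mental representation.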
59
Stefanics G, Stephan KE, Heinzle J. Feature-specific prediction errors for visual mismatch. Neuroimage 2019; 196:142-151. [PMID: 30978499] [DOI: 10.1016/j.neuroimage.2019.04.020]
Abstract
Predictive coding (PC) theory posits that our brain employs a predictive model of the environment to infer the causes of its sensory inputs. A fundamental but untested prediction of this theory is that the same stimulus should elicit distinct precision-weighted prediction errors (pwPEs) when different (feature-specific) predictions are violated, even in the absence of attention. Here, we tested this hypothesis using functional magnetic resonance imaging (fMRI) and a multi-feature roving visual mismatch paradigm where rare changes in either color (red, green), or emotional expression (happy, fearful) of faces elicited pwPE responses in human participants. Using a computational model of learning and inference, we simulated pwPE and prediction trajectories of a Bayes-optimal observer and used these to analyze changes in blood oxygen level dependent (BOLD) responses to changes in color and emotional expression of faces while participants engaged in a distractor task. Controlling for visual attention by eye-tracking, we found pwPE responses to unexpected color changes in the fusiform gyrus. Conversely, unexpected changes of facial emotions elicited pwPE responses in cortico-thalamo-cerebellar structures associated with emotion and theory of mind processing. Predictions pertaining to emotions activated fusiform, occipital and temporal areas. Our results are consistent with a general role of PC across perception, from low-level to complex and socially relevant object features, and suggest that monitoring of the social environment occurs continuously and automatically, even in the absence of attention.
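For Gaussian beliefs, the precision-weighted prediction errors at the heart of this study take a compact form: the belief shifts by the prediction error scaled by the relative precision of the input. The sketch below is a simplified stand-in for the authors' Bayes-optimal observer model, with invented precisions and stimulus codes, not their actual model.

```python
def update_belief(mu, pi_prior, x, pi_obs):
    """One Bayesian update of a Gaussian belief about a stimulus feature.
    The belief mean shifts by a precision-weighted prediction error:
    the PE (x - mu) scaled by how reliable the input is relative to the
    combined (posterior) precision."""
    pe = x - mu                            # prediction error
    weight = pi_obs / (pi_prior + pi_obs)  # precision weighting
    mu_post = mu + weight * pe
    pi_post = pi_prior + pi_obs            # precisions add for Gaussians
    return mu_post, pi_post, weight * pe

# Repeated "green" inputs (coded 1.0) against an initial "red" expectation (0.0):
mu, pi = 0.0, 1.0
pwpes = []
for _ in range(5):
    mu, pi, pwpe = update_belief(mu, pi, 1.0, pi_obs=2.0)
    pwpes.append(pwpe)
```

As the repeated stimulus becomes expected, the pwPEs shrink trial by trial, which is the signature that model-based fMRI analyses of this kind regress against BOLD responses.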
Affiliation(s)
- Gabor Stefanics
- Translational Neuromodeling Unit (TNU), Institute for Biomedical Engineering, University of Zurich & ETH Zurich, Wilfriedstrasse 6, 8032, Zurich, Switzerland; Laboratory for Social and Neural Systems Research, Department of Economics, University of Zurich, Blümlisalpstrasse 10, 8006, Zurich, Switzerland.
- Klaas Enno Stephan
- Translational Neuromodeling Unit (TNU), Institute for Biomedical Engineering, University of Zurich & ETH Zurich, Wilfriedstrasse 6, 8032, Zurich, Switzerland; Laboratory for Social and Neural Systems Research, Department of Economics, University of Zurich, Blümlisalpstrasse 10, 8006, Zurich, Switzerland; Max Planck Institute for Metabolism Research, Cologne, Germany
- Jakob Heinzle
- Translational Neuromodeling Unit (TNU), Institute for Biomedical Engineering, University of Zurich & ETH Zurich, Wilfriedstrasse 6, 8032, Zurich, Switzerland
60
Smith FW, Smith ML. Decoding the dynamic representation of facial expressions of emotion in explicit and incidental tasks. Neuroimage 2019; 195:261-271. [PMID: 30940611] [DOI: 10.1016/j.neuroimage.2019.03.065]
Abstract
Faces transmit a wealth of important social signals. While previous studies have elucidated the network of cortical regions important for perception of facial expression, and the associated temporal components such as the P100, N170 and EPN, it is still unclear how task constraints may shape the representation of facial expression (or other face categories) in these networks. In the present experiment, we used Multivariate Pattern Analysis (MVPA) with EEG to investigate the neural information available across time about two important face categories (expression and identity) when those categories are either perceived under explicit (e.g. decoding facial expression category from the EEG when task is on expression) or incidental task contexts (e.g. decoding facial expression category from the EEG when task is on identity). Decoding of both face categories, across both task contexts, peaked in time-windows spanning 91-170 ms (across posterior electrodes). Peak decoding of expression, however, was not affected by task context whereas peak decoding of identity was significantly reduced under incidental processing conditions. In addition, errors in EEG decoding correlated with errors in behavioral categorization under explicit processing for both expression and identity, however under incidental conditions only errors in EEG decoding of expression correlated with behavior. Furthermore, decoding time-courses and the spatial pattern of informative electrodes showed consistently better decoding of identity under explicit conditions at later-time periods, with weak evidence for similar effects for decoding of expression at isolated time-windows. 
Taken together, these results reveal differences and commonalities in the processing of face categories under explicit vs incidental task contexts and suggest that facial expressions are processed to a richer degree under incidental processing conditions, consistent with prior work indicating the relative automaticity by which emotion is processed. Our work further demonstrates the utility of applying multivariate decoding analyses to EEG for revealing the dynamics of face perception.
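The time-resolved decoding approach used in studies like this one can be illustrated with a toy version: at each time point, train a classifier on all trials but one and test on the held-out trial. The sketch below uses a simple nearest-centroid classifier on simulated "EEG" data with a class signal confined to a mid-latency window; real analyses use stronger classifiers (e.g. SVMs) on genuine epochs, and every number here is invented.

```python
import random

def decode_timecourse(epochs, labels, n_times):
    """Leave-one-out nearest-centroid decoding at each time point.
    epochs[i][t] is a feature vector (channels) for trial i, time t.
    Returns decoding accuracy per time point."""
    accs = []
    for t in range(n_times):
        correct = 0
        for i in range(len(epochs)):
            # Class centroids from all trials except the held-out one.
            cents = {}
            for c in set(labels):
                rows = [epochs[j][t] for j in range(len(epochs))
                        if j != i and labels[j] == c]
                cents[c] = [sum(col) / len(rows) for col in zip(*rows)]
            # Assign the held-out trial to the nearest centroid.
            pred = min(cents, key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(epochs[i][t], cents[c])))
            correct += (pred == labels[i])
        accs.append(correct / len(epochs))
    return accs

# Toy data: class signal appears only in a mid-latency window (t = 2..3).
random.seed(1)
n_trials, n_times, n_chan = 40, 6, 4
labels = [i % 2 for i in range(n_trials)]
epochs = [[[random.gauss(labels[i] * (1.0 if t in (2, 3) else 0.0), 0.5)
            for _ in range(n_chan)] for t in range(n_times)]
          for i in range(n_trials)]
accs = decode_timecourse(epochs, labels, n_times)
```

Accuracy hovers near chance outside the signal window and rises sharply within it, mirroring the peak decoding windows reported in such experiments.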
Affiliation(s)
- Fraser W Smith
- School of Psychology, University of East Anglia, Norwich, UK.
- Marie L Smith
- School of Psychological Sciences, Birkbeck College, University of London, London, UK
61
Thielen J, Bosch SE, van Leeuwen TM, van Gerven MAJ, van Lier R. Neuroimaging Findings on Amodal Completion: A Review. Iperception 2019; 10:2041669519840047. [PMID: 31007887] [PMCID: PMC6457032] [DOI: 10.1177/2041669519840047]
Abstract
Amodal completion is the phenomenon of perceiving completed objects even though physically they are partially occluded. In this review, we provide an extensive overview of the results obtained from a variety of neuroimaging studies on the neural correlates of amodal completion. We discuss whether low-level and high-level cortical areas are implicated in amodal completion; provide an overview of how amodal completion unfolds over time while dissociating feedforward, recurrent, and feedback processes; and discuss how amodal completion is represented at the neuronal level. The involvement of low-level visual areas such as V1 and V2 is not yet clear, while several high-level structures such as the lateral occipital complex and fusiform face area seem invariant to occlusion of objects and faces, respectively, and several motor areas seem to code for object permanence. The variety of results on the timing of amodal completion hints at a mixture of feedforward, recurrent, and feedback processes. We discuss whether the invisible parts of the occluded object are represented as if they were visible, as opposed to a high-level representation. While plenty of questions on amodal completion remain, this review presents an overview of the neuroimaging findings reported to date, summarizes several insights from computational models, and connects to research on other perceptual completion processes such as modal completion. In all, it is suggested that amodal completion is the brain's solution for dealing with various types of incomplete retinal information; it depends strongly on stimulus complexity and saliency and therefore gives rise to a variety of observed neural patterns.
Affiliation(s)
- Jordy Thielen
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, the Netherlands
- Sander E. Bosch
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, the Netherlands
- Tessa M. van Leeuwen
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, the Netherlands
- Marcel A. J. van Gerven
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, the Netherlands
- Rob van Lier
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, the Netherlands
62
Cortical activation associated with motor preparation can be used to predict the freely chosen effector of an upcoming movement and reflects response time: An fMRI decoding study. Neuroimage 2018; 183:584-596. [DOI: 10.1016/j.neuroimage.2018.08.060]
63
Sleep Strengthens Predictive Sequence Coding. J Neurosci 2018; 38:8989-9000. [PMID: 30185464] [DOI: 10.1523/jneurosci.1352-18.2018]
Abstract
Predictive-coding theories assume that perception and action are based on internal models derived from previous experience. Such internal models require selection and consolidation to be stored over time. Sleep is known to support memory consolidation. We hypothesized that sleep supports both consolidation and abstraction of an internal task model that is subsequently used to predict upcoming stimuli. Human subjects (of either sex) were trained on deterministic visual sequences and tested with interleaved deviant stimuli after retention intervals of sleep or wakefulness. Adopting a predictive-coding approach, we found increased prediction strength after sleep, as expressed by increased error rates to deviant stimuli, but fewer errors for the immediately following standard stimuli. Sleep likewise enhanced the formation of an abstract sequence model, independent of the temporal context during training. Moreover, sleep increased confidence for sequence knowledge, reflecting enhanced metacognitive access to the model. Our results suggest that sleep supports the formation of internal models which can be used to predict upcoming events in different contexts. SIGNIFICANCE STATEMENT: To efficiently interact with the ever-changing world, we predict upcoming events based on similar previous experiences. Sleep is known to benefit memory consolidation. However, it is not clear whether sleep specifically supports the transformation of past experience into predictions of future events. Here, we find that, when human subjects sleep after learning a sequence of predictable visual events, they make better predictions about upcoming events compared with subjects who stayed awake for an equivalent period of time. In addition, sleep supports the transfer of such knowledge between different temporal contexts (i.e., when sequences unfold at different speeds). Thus, sleep supports perception and action by enhancing the predictive utility of previous experiences.
64
How Do Expectations Shape Perception? Trends Cogn Sci 2018; 22:764-779. [PMID: 30122170] [DOI: 10.1016/j.tics.2018.06.002]
Abstract
Perception and perceptual decision-making are strongly facilitated by prior knowledge about the probabilistic structure of the world. While the computational benefits of using prior expectation in perception are clear, there are myriad ways in which this computation can be realized. We review here recent advances in our understanding of the neural sources and targets of expectations in perception. Furthermore, we discuss Bayesian theories of perception that prescribe how an agent should integrate prior knowledge and sensory information, and investigate how current and future empirical data can inform and constrain computational frameworks that implement such probabilistic integration in perception.
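For Gaussian priors and likelihoods, the probabilistic integration these Bayesian theories prescribe has a closed form: the posterior mean is a precision-weighted average of the prior expectation and the sensory evidence. A minimal sketch with invented numbers:

```python
def integrate(prior_mu, prior_sd, like_mu, like_sd):
    """Optimal Gaussian fusion of a prior expectation and a sensory
    likelihood: the posterior mean is a precision-weighted average,
    and the posterior is never less precise than either source."""
    pi_p, pi_l = 1 / prior_sd ** 2, 1 / like_sd ** 2   # precisions
    post_mu = (pi_p * prior_mu + pi_l * like_mu) / (pi_p + pi_l)
    post_sd = (pi_p + pi_l) ** -0.5
    return post_mu, post_sd

# A strong prior pulls a noisy measurement toward the expected value:
mu, sd = integrate(prior_mu=0.0, prior_sd=1.0, like_mu=4.0, like_sd=2.0)
```

Because the prior here is four times more precise than the likelihood, the percept (posterior mean) lands much closer to the expectation than to the raw measurement, the basic mechanism by which expectations bias perception in these frameworks.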
65
Abstract
Early blindness causes fundamental alterations of neural function across more than 25% of cortex: changes that span the gamut from metabolism to behavior and collectively represent one of the most dramatic examples of plasticity in the human brain. The goal of this review is to describe how the remarkable behavioral and neuroanatomical compensations demonstrated by blind individuals provide insights into the extent, mechanisms, and limits of human brain plasticity.
Affiliation(s)
- Ione Fine
- Department of Psychology, University of Washington, Seattle, Washington 98195, USA
- Ji-Min Park
- Department of Psychology, University of Washington, Seattle, Washington 98195, USA
66
Nanay B. The Importance of Amodal Completion in Everyday Perception. Iperception 2018; 9:2041669518788887. [PMID: 30109014] [PMCID: PMC6083800] [DOI: 10.1177/2041669518788887]
Abstract
Amodal completion is the representation of those parts of the perceived object that we get no sensory stimulation from. In the case of vision, it is the representation of occluded parts of objects we see: When we see a cat behind a picket fence, our perceptual system represents those parts of the cat that are occluded by the picket fence. The aim of this piece is to argue that amodal completion plays a constitutive role in our everyday perception and trace the theoretical consequences of this claim.
Affiliation(s)
- Bence Nanay
- University of Antwerp, Belgium; Peterhouse, Cambridge University, UK
67
Smith FW, Rossit S. Identifying and detecting facial expressions of emotion in peripheral vision. PLoS One 2018; 13:e0197160. [PMID: 29847562] [PMCID: PMC5976168] [DOI: 10.1371/journal.pone.0197160]
Abstract
Facial expressions of emotion are signals of high biological value. Whilst recognition of facial expressions has been much studied in central vision, the ability to perceive these signals in peripheral vision has only seen limited research to date, despite the potential adaptive advantages of such perception. In the present experiment, we investigate facial expression recognition and detection performance for each of the basic emotions (plus neutral) at up to 30 degrees of eccentricity. We demonstrate, as expected, a decrease in recognition and detection performance with increasing eccentricity, with happiness and surprise being the best-recognized expressions in peripheral vision. In detection, however, while happiness and surprise are still well detected, fear is also a well-detected expression. We show that fear is better detected than recognized. Our results demonstrate that task constraints shape the perception of expression in peripheral vision and provide novel evidence that detection and recognition rely on partially separate underlying mechanisms, with the latter more dependent on the higher spatial frequency content of the face stimulus.
Affiliation(s)
- Fraser W. Smith
- School of Psychology, University of East Anglia, Norwich, United Kingdom
- Stephanie Rossit
- School of Psychology, University of East Anglia, Norwich, United Kingdom
68
Mapping Frequency-Specific Tone Predictions in the Human Auditory Cortex at High Spatial Resolution. J Neurosci 2018; 38:4934-4942. [PMID: 29712781] [DOI: 10.1523/jneurosci.2205-17.2018]
Abstract
Auditory inputs reaching our ears are often incomplete, but our brains nevertheless transform them into rich and complete perceptual phenomena such as meaningful conversations or pleasurable music. It has been hypothesized that our brains extract regularities in inputs, which enables us to predict the upcoming stimuli, leading to efficient sensory processing. However, it is unclear whether tone predictions are encoded with similar specificity as perceived signals. Here, we used high-field fMRI to investigate whether human auditory regions encode one of the most defining characteristics of auditory perception: the frequency of predicted tones. Two pairs of tone sequences were presented in ascending or descending directions, with the last tone omitted in half of the trials. Every pair of incomplete sequences contained identical sounds, but was associated with different expectations about the last tone (a high- or low-frequency target). This allowed us to disambiguate predictive signaling from sensory-driven processing. We recorded fMRI responses from eight female participants during passive listening to complete and incomplete sequences. Inspection of specificity and spatial patterns of responses revealed that target frequencies were encoded similarly during their presentations, as well as during omissions, suggesting frequency-specific encoding of predicted tones in the auditory cortex (AC). Importantly, frequency specificity of predictive signaling was observed already at the earliest levels of auditory cortical hierarchy: in the primary AC. Our findings provide evidence for content-specific predictive processing starting at the earliest cortical levels. SIGNIFICANCE STATEMENT: Given the abundance of sensory information around us in any given moment, it has been proposed that our brain uses contextual information to prioritize and form predictions about incoming signals.
However, there remains a surprising lack of understanding of the specificity and content of such prediction signaling; for example, whether a predicted tone is encoded with similar specificity as a perceived tone. Here, we show that early auditory regions encode the frequency of a tone that is predicted yet omitted. Our findings contribute to the understanding of how expectations shape sound processing in the human auditory cortex and provide further insights into how contextual information influences computations in neuronal circuits.
69
Greening SG, Mitchell DG, Smith FW. Spatially generalizable representations of facial expressions: Decoding across partial face samples. Cortex 2018; 101:31-43. [DOI: 10.1016/j.cortex.2017.11.016]
70
Pitzalis S, Strappini F, Bultrini A, Di Russo F. Detailed spatiotemporal brain mapping of chromatic vision combining high-resolution VEP with fMRI and retinotopy. Hum Brain Mapp 2018. [PMID: 29536594] [DOI: 10.1002/hbm.24046]
Abstract
Neuroimaging studies have so far identified several color-sensitive visual areas in the human brain, and the temporal dynamics of their activity have been separately investigated using visual-evoked potentials (VEPs). In the present study, we combined electrophysiological and neuroimaging methods to determine a detailed spatiotemporal profile of the chromatic VEP and to localize its neural generators. The accuracy of the present co-registration study was achieved by combining standard fMRI data with retinotopic and motion-mapping data at the individual level. We found a sequence of occipital activities more complex than that typically reported for chromatic VEPs, including feed-forward and reentrant feedback components. Results showed that human chromatic perception arises from the combined activity of at least five parieto-occipital areas, including V1, LOC, V8/VO, and the motion-sensitive dorsal region MT+. However, the contribution of V1 and V8/VO seems dominant, because re-entrant activity in these areas was present more than once (twice in V8/VO and three times in V1). This feedforward and feedback chromatic processing appears delayed compared with luminance processing. By associating VEPs with neuroimaging measures, we showed for the first time a complex spatiotemporal pattern of activity, confirming that chromatic stimuli produce intricate interactions among many different dorsal and ventral brain areas.
Collapse
Affiliation(s)
- Sabrina Pitzalis
- Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy; Santa Lucia Foundation, IRCCS, Rome, Italy
| | | | - Alessandro Bultrini
- Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy
| | - Francesco Di Russo
- Department of Movement, Human and Health Sciences, University of Rome "Foro Italico", Rome, Italy; Santa Lucia Foundation, IRCCS, Rome, Italy
| |
Collapse
|
71
|
Schindler A, Bartels A. Connectivity Reveals Sources of Predictive Coding Signals in Early Visual Cortex During Processing of Visual Optic Flow. Cereb Cortex 2018; 27:2885-2893. [PMID: 27222382 DOI: 10.1093/cercor/bhw136] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
Superimposed on the visual feedforward pathway, feedback connections convey higher-level information to cortical areas lower in the hierarchy. A prominent framework for these connections is the theory of predictive coding, in which high-level areas send stimulus interpretations to lower-level areas that compare them with sensory input. Along these lines, a growing body of neuroimaging studies shows that predictable stimuli lead to reduced blood oxygen level-dependent (BOLD) responses compared with matched non-predictable counterparts, especially in early visual cortex (EVC), including areas V1-V3. The sources of these modulatory feedback signals are largely unknown. Here, we re-examined the robust finding of relative BOLD suppression in EVC during processing of coherent compared with random motion. Using functional connectivity analysis, we show an optic flow-dependent increase of functional connectivity between BOLD-suppressed EVC and a network of visual motion areas including MST, V3A, V6, the cingulate sulcus visual area (CSv), and precuneus (Pc). Connectivity decreased between EVC and 2 areas known to encode heading direction: entorhinal cortex (EC) and retrosplenial cortex (RSC). Our results provide the first evidence that BOLD suppression in EVC for predictable stimuli is indeed mediated by specific high-level areas, in accord with the theory of predictive coding.
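The suppression effect at the core of this study can be illustrated with a toy prediction-error computation (a minimal NumPy sketch under loose assumptions; the single-layer setup, scaling, and variable names are illustrative and are not the authors' model):

```python
import numpy as np

def error_response(stimulus, prediction):
    """Prediction-error units: residual left after subtracting the top-down prediction."""
    return np.abs(stimulus - prediction)

rng = np.random.default_rng(0)
stimulus = rng.normal(size=100)        # sensory drive to early visual cortex

coherent_prediction = 0.9 * stimulus   # coherent motion: higher areas predict the input well
random_prediction = np.zeros(100)      # random motion: no usable prediction

coherent_err = error_response(stimulus, coherent_prediction).mean()
random_err = error_response(stimulus, random_prediction).mean()

# Predictable (coherent) input leaves a smaller residual, BOLD-like error response
assert coherent_err < random_err
```

In this caricature, the well-predicted input leaves a smaller residual error signal than the unpredictable one, mirroring the reduced BOLD response reported for predictable stimuli.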
Collapse
Affiliation(s)
- Andreas Schindler
- Vision and Cognition Lab, Centre for Integrative Neuroscience and Department of Psychology, University of Tübingen, Tübingen 72076, Germany; Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany
| | - Andreas Bartels
- Vision and Cognition Lab, Centre for Integrative Neuroscience and Department of Psychology, University of Tübingen, Tübingen 72076, Germany; Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany
| |
Collapse
|
72
|
Revina Y, Petro LS, Muckli L. Cortical feedback signals generalise across different spatial frequencies of feedforward inputs. Neuroimage 2017; 180:280-290. [PMID: 28951158 DOI: 10.1016/j.neuroimage.2017.09.047] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2017] [Revised: 09/01/2017] [Accepted: 09/21/2017] [Indexed: 11/26/2022] Open
Abstract
Visual processing in cortex relies on feedback projections contextualising feedforward information flow. Primary visual cortex (V1) has small receptive fields and processes feedforward information at a fine-grained spatial scale, whereas higher visual areas have larger, spatially invariant receptive fields. Feedback could therefore provide coarse information about the global scene structure or, alternatively, recover fine-grained structure by targeting the small receptive fields in V1. We tested whether feedback signals generalise across different spatial frequencies of feedforward inputs, or whether they are tuned to the spatial scale of the visual scene. Using a partial occlusion paradigm, functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA), we investigated whether feedback to V1 contains coarse or fine-grained information by manipulating the spatial frequency of the scene surround outside an occluded image portion. We show that feedback transmits both coarse and fine-grained information, as it carries information about both low (LSF) and high (HSF) spatial frequencies. Further, feedback signals containing LSF information are similar to feedback signals containing HSF information, even without a large overlap between the spatial frequency bands of the HSF and LSF scenes. Lastly, we found that feedback carries similar information about the spatial frequency band across different scenes. We conclude that cortical feedback signals contain information which generalises across different spatial frequencies of feedforward inputs.
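The cross-classification logic used here (train a decoder on patterns from one spatial-frequency condition, test it on the other) can be sketched with a nearest-centroid classifier on synthetic voxel patterns (plain NumPy; the voxel counts, noise levels, and data are invented for illustration and do not reproduce the study's analysis):

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 50
scene_templates = rng.normal(size=(2, n_voxels))   # two scenes sharing one feedback code

def patterns(freq_noise, n_trials=20):
    """Simulate trial patterns: scene template plus frequency-specific noise."""
    X, y = [], []
    for scene in (0, 1):
        trials = scene_templates[scene] + freq_noise * rng.normal(size=(n_trials, n_voxels))
        X.append(trials)
        y.extend([scene] * n_trials)
    return np.vstack(X), np.array(y)

X_lsf, y_lsf = patterns(0.5)   # low spatial frequency condition
X_hsf, y_hsf = patterns(0.5)   # high spatial frequency condition

# Train on LSF trials: one centroid per scene
centroids = np.stack([X_lsf[y_lsf == s].mean(axis=0) for s in (0, 1)])

# Test on HSF trials (cross-decoding): generalisation implies a frequency-invariant code
dists = ((X_hsf[:, None, :] - centroids[None]) ** 2).sum(axis=-1)
accuracy = (np.argmin(dists, axis=1) == y_hsf).mean()
assert accuracy > 0.9   # far above the 0.5 chance level in this toy setup
```

Because the simulated scene code is shared across frequency conditions, the decoder transfers; if each condition had its own unrelated template, cross-decoding would fall to chance.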
Collapse
Affiliation(s)
- Yulia Revina
- Centre for Cognitive Neuroimaging, Institute of Neuroscience & Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, G12 8QB, UK.
| | - Lucy S Petro
- Centre for Cognitive Neuroimaging, Institute of Neuroscience & Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, G12 8QB, UK.
| | - Lars Muckli
- Centre for Cognitive Neuroimaging, Institute of Neuroscience & Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, G12 8QB, UK.
| |
Collapse
|
73
|
Spoerer CJ, McClure P, Kriegeskorte N. Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition. Front Psychol 2017; 8:1551. [PMID: 28955272 PMCID: PMC5600938 DOI: 10.3389/fpsyg.2017.01551] [Citation(s) in RCA: 78] [Impact Index Per Article: 11.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2017] [Accepted: 08/25/2017] [Indexed: 11/13/2022] Open
Abstract
Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models: digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are thus better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they recognize objects more accurately, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.
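The architecture families named in the abstract (B, BL, BT, BLT) differ in which recurrent terms drive each layer. One recurrent update step, combining bottom-up, lateral, and top-down terms before a rectification, can be sketched as follows (plain NumPy with dense weights; the shapes, scales, and the stand-in higher layer are illustrative assumptions, not the paper's convolutional implementation):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def blt_step(x_in, h, h_above, W_b, W_l, W_t):
    """One BLT-style update: bottom-up input + lateral recurrence + top-down feedback."""
    return relu(W_b @ x_in + W_l @ h + W_t @ h_above)

rng = np.random.default_rng(2)
n = 8
W_b, W_l, W_t = (0.1 * rng.normal(size=(n, n)) for _ in range(3))

x_in = rng.normal(size=n)    # feedforward drive (e.g. an occluded input)
h = np.zeros(n)              # this layer's state
h_above = np.zeros(n)        # state of the layer above (top-down source)

for t in range(4):           # unroll a few time steps
    h = blt_step(x_in, h, h_above, W_b, W_l, W_t)
    h_above = relu(0.1 * h)  # crude stand-in for the higher layer's response

assert h.shape == (n,) and np.all(h >= 0)
```

Dropping the `W_l @ h` term yields a BT network, dropping `W_t @ h_above` yields BL, and keeping only `W_b @ x_in` recovers the purely feedforward B control.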
Collapse
Affiliation(s)
- Courtney J Spoerer
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
| | - Patrick McClure
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
| | - Nikolaus Kriegeskorte
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
| |
Collapse
|
74
|
Predictions Shape Confidence in Right Inferior Frontal Gyrus. J Neurosci 2017; 36:10323-10336. [PMID: 27707969 DOI: 10.1523/jneurosci.1092-16.2016] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2016] [Accepted: 08/22/2016] [Indexed: 12/21/2022] Open
Abstract
It is clear that prior expectations shape perceptual decision-making, yet their contribution to the construction of subjective decision confidence remains largely unexplored. We recorded fMRI data while participants made perceptual decisions and confidence judgments, manipulating perceptual prior expectations while controlling for potential confounds of attention. Results show that subjective confidence increases as expectations increasingly support the decision, and that this relationship is associated with BOLD activity in right inferior frontal gyrus (rIFG). Specifically, rIFG is sensitive to the discrepancy between expectation and decision (mismatch), and higher mismatch responses are associated with lower decision confidence. Connectivity analyses revealed expectancy information to be represented in bilateral orbitofrontal cortex and sensory signals to be represented in intracalcarine sulcus. Together, our results indicate that predictive information is integrated into subjective confidence in rIFG, and reveal an occipital-frontal network that constructs confidence from top-down and bottom-up signals. This interpretation was further supported by exploratory findings that the white matter density of right orbitofrontal cortex negatively predicted its respective contribution to the construction of confidence. Our findings advance our understanding of the neural basis of subjective perceptual processes by revealing an occipitofrontal functional network that integrates prior beliefs into the construction of confidence. SIGNIFICANCE STATEMENT Perceptual decision-making is typically conceived as an integration of bottom-up and top-down influences. However, perceptual decisions are accompanied by a sense of confidence. Confidence is an important facet of perceptual consciousness yet remains poorly understood. Here we implicate right inferior frontal gyrus in constructing confidence from the discrepancy between perceptual judgment and its prior probability. 
Furthermore, we place right inferior frontal gyrus within an occipitofrontal network, consisting of orbitofrontal cortex and intracalcarine sulcus, which represents and communicates relevant top-down and bottom-up signals. Together, our data reveal a role of frontal regions in the top-down processes enabling perceptual decisions to become available for conscious report.
Collapse
|
75
|
Bannert MM, Bartels A. Invariance of surface color representations across illuminant changes in the human cortex. Neuroimage 2017; 158:356-370. [PMID: 28673878 DOI: 10.1016/j.neuroimage.2017.06.079] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2016] [Revised: 06/16/2017] [Accepted: 06/29/2017] [Indexed: 11/24/2022] Open
Abstract
A central problem in color vision is that the light reaching the eye from a given surface can vary dramatically depending on the illumination. Despite this, our color percept, the brain's estimate of surface reflectance, remains remarkably stable. This phenomenon is called color constancy. Here we investigated which human brain regions represent surface color in a way that is invariant with respect to illuminant changes. We used physically realistic rendering methods to display natural yet abstract 3D scenes under three distinct illuminants. In different conditions, the scenes embedded surfaces that differed in their surface color (i.e., in their reflectance). We used multivariate fMRI pattern analysis to probe neural coding of surface reflectance and illuminant, respectively. While all visual regions encoded surface color when viewed under the same illuminant, only in V1 and V4α were surface color representations invariant to illumination changes. Along the visual hierarchy there was a gradient from V1 to V4α toward increasingly encoding surface color rather than illumination. Finally, effects of a stimulus manipulation on individual behavioral color constancy indices correlated with neural encoding of the illuminant in hV4. This provides neural evidence for the Equivalent Illuminant Model. Our results provide a principled characterization of color constancy mechanisms across the visual hierarchy, and demonstrate complementary contributions of early and late processing stages.
Collapse
Affiliation(s)
- Michael M Bannert
- Vision and Cognition Lab, Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, 72076 Tübingen, Germany; Bernstein Center for Computational Neuroscience, 72076 Tübingen, Germany; Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany; Department of Psychology, University of Tübingen, 72076 Tübingen, Germany; International Max Planck Research School for Cognitive and Systems Neuroscience, 72076 Tübingen, Germany.
| | - Andreas Bartels
- Vision and Cognition Lab, Werner Reichardt Centre for Integrative Neuroscience, University of Tübingen, 72076 Tübingen, Germany; Bernstein Center for Computational Neuroscience, 72076 Tübingen, Germany; Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany; Department of Psychology, University of Tübingen, 72076 Tübingen, Germany.
| |
Collapse
|
76
|
Herde L, Rossi V, Pourtois G, Rauss K. Early retinotopic responses to violations of emotion-location associations may depend on conscious awareness. Cogn Neurosci 2017; 9:38-55. [PMID: 28580835 DOI: 10.1080/17588928.2017.1338250] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
Reports of modulations of early visual processing suggest that retinotopic visual cortex may actively predict upcoming stimuli. We tested this idea by showing healthy human participants images of human faces at fixation, with different emotional expressions predicting stimuli in either the upper or the lower visual field. On infrequent test trials, emotional faces were followed by combined stimulation of the upper and lower visual fields, thus violating previously established associations. In the full sample, such violations had no effect on the retinotopic C1 component of the visual evoked potential. However, when separating participants who became aware of these associations from those who did not, we observed significant group differences during extrastriate processing of emotional faces, with inverse-solution results indicating stronger activity in unaware subjects throughout the ventral visual stream. Moreover, within-group comparisons showed that the same peripheral stimuli elicited differential activity patterns during the C1 interval, depending on which stimulus elements were predictable. This effect was selectively observed in manipulation-aware subjects. Our results provide preliminary evidence for the notion that early visual processing stages implement predictions of upcoming events. They also point to conscious awareness as a moderator of predictive coding.
Collapse
Affiliation(s)
- Laura Herde
- Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Tübingen, Germany
| | - Valentina Rossi
- Cognitive & Affective Psychophysiology Laboratory, Department of Experimental Clinical and Health Psychology, Ghent University, Ghent, Belgium
| | - Gilles Pourtois
- Cognitive & Affective Psychophysiology Laboratory, Department of Experimental Clinical and Health Psychology, Ghent University, Ghent, Belgium
| | - Karsten Rauss
- Institute of Medical Psychology and Behavioral Neurobiology, University of Tübingen, Tübingen, Germany
| |
Collapse
|
77
|
Fazekas P, Nanay B. Pre-Cueing Effects: Attention or Mental Imagery? Front Psychol 2017; 8:222. [PMID: 28321195 PMCID: PMC5337501 DOI: 10.3389/fpsyg.2017.00222] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2016] [Accepted: 02/06/2017] [Indexed: 11/13/2022] Open
Affiliation(s)
- Peter Fazekas
- Philosophy and Cognitive Neuroscience Research Unit, Aarhus University, Aarhus, Denmark; Centre for Philosophical Psychology, University of Antwerp, Antwerp, Belgium
| | - Bence Nanay
- Department of Philosophy, University of Antwerp, Antwerp, Belgium; Peterhouse, University of Cambridge, Cambridge, England
| |
Collapse
|
78
|
Abstract
What is the degree to which knowledge influences visual perceptual processes? This question, which is central to the seeing-versus-thinking debate in cognitive science, is often discussed using examples claimed to be proof of one stance or the other. It has, however, also been muddled by the use of differing and unclear definitions of perception. Here, for the well-defined process of perceptual organization, I argue that factoring speed (or efficiency) into the equation opens a new perspective on the limits of top-down influences of thinking on seeing. While the input to the perceptual organization process may be modifiable and its output enrichable, the process itself seems so fast (or efficient) that thinking hardly has time to intrude and is effective mostly after the fact.
Collapse
|
79
|
Petro LS, Paton AT, Muckli L. Contextual modulation of primary visual cortex by auditory signals. Philos Trans R Soc Lond B Biol Sci 2017; 372:rstb.2016.0104. [PMID: 28044015 PMCID: PMC5206272 DOI: 10.1098/rstb.2016.0104] [Citation(s) in RCA: 45] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/22/2016] [Indexed: 12/04/2022] Open
Abstract
Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195–201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256–1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue ‘Auditory and visual scene analysis’.
Collapse
Affiliation(s)
- L S Petro
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK
| | - A T Paton
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK
| | - L Muckli
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK
| |
Collapse
|
80
|
Kok P, van Lieshout LL, de Lange FP. Local expectation violations result in global activity gain in primary visual cortex. Sci Rep 2016; 6:37706. [PMID: 27874098 PMCID: PMC5118700 DOI: 10.1038/srep37706] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2016] [Accepted: 10/31/2016] [Indexed: 11/23/2022] Open
Abstract
During natural perception, we often form expectations about upcoming input. These expectations are usually multifaceted - we expect a particular object at a particular location. However, expectations about spatial location and stimulus features have mostly been studied in isolation, and it is unclear whether feature-based expectation can be spatially specific. Interestingly, feature-based attention automatically spreads to unattended locations. It is still an open question whether the neural mechanisms underlying feature-based expectation differ from those underlying feature-based attention. Therefore, establishing whether the effects of feature-based expectation are spatially specific may inform this debate. Here, we investigated this by inducing expectations of a specific stimulus feature at a specific location, and probing the effects on sensory processing across the visual field using fMRI. We found an enhanced sensory response for unexpected stimuli, which was elicited only when there was a violation of expectation at the specific location where participants formed a stimulus expectation. The neural consequences of this expectation violation, however, spread to cortical locations processing the stimulus in the opposite hemifield. This suggests that an expectation violation at one location in the visual world can lead to a spatially non-specific gain increase across the visual field.
Collapse
Affiliation(s)
- Peter Kok
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Kapittelweg 29, 6525 EN Nijmegen, The Netherlands
- Princeton University, Princeton Neuroscience Institute, 301 Peretsman-Scully Hall, Princeton, NJ 08544, US
| | - Lieke L.F. van Lieshout
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Kapittelweg 29, 6525 EN Nijmegen, The Netherlands
| | - Floris P. de Lange
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Kapittelweg 29, 6525 EN Nijmegen, The Netherlands
| |
Collapse
|
81
|
Denison RN, Sheynin J, Silver MA. Perceptual suppression of predicted natural images. J Vis 2016; 16:6. [PMID: 27802512 PMCID: PMC5098454 DOI: 10.1167/16.13.6] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2016] [Accepted: 08/19/2016] [Indexed: 01/12/2023] Open
Abstract
Perception is shaped not only by current sensory inputs but also by expectations generated from past sensory experience. Humans viewing ambiguous stimuli in a stable visual environment are generally more likely to see the perceptual interpretation that matches their expectations, but it is less clear how expectations affect perception when the environment is changing predictably. We used statistical learning to teach observers arbitrary sequences of natural images and employed binocular rivalry to measure perceptual selection as a function of predictive context. In contrast to previous demonstrations of preferential selection of predicted images for conscious awareness, we found that recently acquired sequence predictions biased perceptual selection toward unexpected natural images and image categories. These perceptual biases were not associated with explicit recall of the learned image sequences. Our results show that exposure to arbitrary sequential structure in the environment impacts subsequent visual perceptual selection and awareness. Specifically, for natural image sequences, the visual system prioritizes what is surprising, or statistically informative, over what is expected, or statistically likely.
Collapse
Affiliation(s)
- Rachel N Denison
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, US,
| | - Jacob Sheynin
- College of Letters and Science, University of California, Berkeley, Berkeley, CA,
| | - Michael A Silver
- Helen Wills Neuroscience Institute, Vision Science Graduate Group, and School of Optometry, University of California, Berkeley, Berkeley, CA, ://argentum.ucbso.berkeley.edu
| |
Collapse
|
82
|
Single-trial prediction of reaction time variability from MEG brain activity. Sci Rep 2016; 6:27416. [PMID: 27250872 PMCID: PMC4889999 DOI: 10.1038/srep27416] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2015] [Accepted: 05/18/2016] [Indexed: 11/08/2022] Open
Abstract
Neural activity prior to movement onset contains essential information for predictive assistance to humans using brain-machine interfaces (BMIs). Even though previous studies successfully predicted different goals of upcoming movements, it is unclear whether non-invasively recorded signals contain the information needed to predict trial-by-trial behavioral variability for the same movement. In this paper, we examined the predictability of subsequent short or long reaction times (RTs) from magnetoencephalography (MEG) signals in a delayed-reach task. The difference in RTs was classified significantly above chance from 550 ms before go-signal onset using the cortical currents in the premotor cortex. Significantly above-chance classification was also achieved in the lateral prefrontal and right inferior parietal cortices at the late stage of the delay period. Thus, inter-trial variability in RTs is predictable from pre-movement activity. Our study provides a proof of concept for the future development of non-invasive BMIs that prevent delayed movements.
Collapse
|
83
|
Hindy NC, Ng FY, Turk-Browne NB. Linking pattern completion in the hippocampus to predictive coding in visual cortex. Nat Neurosci 2016; 19:665-667. [PMID: 27065363 PMCID: PMC4948994 DOI: 10.1038/nn.4284] [Citation(s) in RCA: 123] [Impact Index Per Article: 15.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2016] [Accepted: 03/10/2016] [Indexed: 12/31/2022]
Abstract
Models of predictive coding frame perception as a generative process in which expectations constrain sensory representations. These models account for expectations about how a stimulus will move or change from moment to moment, but do not address expectations about what other, distinct stimuli are likely to appear based on prior experience. We show that such memory-based expectations in human visual cortex are related to the hippocampal mechanism of pattern completion.
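The hippocampal mechanism invoked here, pattern completion, is classically illustrated with a Hopfield-style attractor network that recalls a full stored pattern from a partial cue (a toy NumPy sketch; the pattern sizes, Hebbian storage rule, and update scheme are generic textbook choices, not the study's model):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
patterns = rng.choice([-1, 1], size=(3, n))   # three stored +/-1 memory patterns

# Hebbian storage: sum of outer products, no self-connections
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)

# Partial cue: first half of pattern 0 intact, second half missing (zeroed)
state = patterns[0].astype(float)
state[n // 2 :] = 0

for _ in range(10):                           # iterate to the attractor
    state = np.sign(W @ state)
    state[state == 0] = 1                     # break ties consistently

# The network recovers (nearly) the full stored pattern from the half cue
match = (state == patterns[0]).mean()
assert match > 0.9
```

The analogy to the study is loose: a partial visual cue drives hippocampal completion of the full learned association, which can then feed predictions back to visual cortex.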
Collapse
Affiliation(s)
| | | | - Nicholas B. Turk-Browne
- Princeton Neuroscience Institute, Princeton University
- Department of Psychology, Princeton University
| |
Collapse
|
84
|
Predictive coding as a model of cognition. Cogn Process 2016; 17:279-305. [DOI: 10.1007/s10339-016-0765-6] [Citation(s) in RCA: 44] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2015] [Accepted: 04/06/2016] [Indexed: 10/21/2022]
|
85
|
The brain's predictive prowess revealed in primary visual cortex. Proc Natl Acad Sci U S A 2016; 113:1124-5. [PMID: 26772315 DOI: 10.1073/pnas.1523834113] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
|
86
|
Heinzle J, Koopmans PJ, den Ouden HE, Raman S, Stephan KE. A hemodynamic model for layered BOLD signals. Neuroimage 2016; 125:556-570. [DOI: 10.1016/j.neuroimage.2015.10.025] [Citation(s) in RCA: 110] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2015] [Revised: 10/09/2015] [Accepted: 10/10/2015] [Indexed: 01/16/2023] Open
|
87
|
Reconstructing representations of dynamic visual objects in early visual cortex. Proc Natl Acad Sci U S A 2015; 113:1453-8. [PMID: 26712004 DOI: 10.1073/pnas.1512144113] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
As raw sensory data are partial, our visual system extensively fills in missing details, creating enriched percepts based on incomplete bottom-up information. Despite evidence for internally generated representations at early stages of cortical processing, it is not known whether these representations include missing information of dynamically transforming objects. Long-range apparent motion (AM) provides a unique test case because objects in AM can undergo changes both in position and in features. Using fMRI and encoding methods, we found that the "intermediate" orientation of an apparently rotating grating, never presented in the retinal input but interpolated during AM, is reconstructed in population-level, feature-selective tuning responses in the region of early visual cortex (V1) that corresponds to the retinotopic location of the AM path. This neural representation is absent when AM inducers are presented simultaneously and when AM is visually imagined. Our results demonstrate dynamic filling-in in V1 for object features that are interpolated during kinetic transformations.
Collapse
|
88
|
Kim K, Kim T, Yoon T, Lee C. Covariation between Spike and LFP Modulations Revealed with Focal and Asynchronous Stimulation of Receptive Field Surround in Monkey Primary Visual Cortex. PLoS One 2015; 10:e0144929. [PMID: 26670337 PMCID: PMC4682868 DOI: 10.1371/journal.pone.0144929] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2015] [Accepted: 11/26/2015] [Indexed: 11/18/2022] Open
Abstract
A focal visual stimulus outside the classical receptive field (RF) of a V1 neuron does not evoke a spike response by itself, and yet evokes robust changes in the local field potential (LFP). This subthreshold LFP provides a unique opportunity to investigate how changes induced by surround stimulation lead to modulation of spike activity. In the current study, two identical Gabor stimuli were sequentially presented with a variable stimulus onset asynchrony (SOA) ranging from 0 to 100 ms: the first (S1) outside the RF and the second (S2) over the RF of primary visual cortex neurons, while trained monkeys performed a fixation task. This focal and asynchronous stimulation of the RF surround enabled us to analyze the modulation of the S2-evoked spike response and the covariation between spike and LFP modulations across SOAs. In this condition, the modulation of the S2-evoked spike response was dominantly facilitative and was correlated with the change in LFP amplitude, an effect that was pronounced for cells recorded in the upper cortical layers. The time course of covariation between the SOA-dependent spike modulation and LFP amplitude suggested that the subthreshold LFP evoked by S1 can predict the magnitude of upcoming spike modulation.
Collapse
Affiliation(s)
- Kayeon Kim
- Department of Psychology, Seoul National University, Gwanak, Seoul, Korea
| | - Taekjun Kim
- Department of Psychology, Seoul National University, Gwanak, Seoul, Korea
| | - Taehwan Yoon
- Program of Cognitive Science, Seoul National University, Gwanak, Seoul, Korea
| | - Choongkil Lee
- Department of Psychology, Seoul National University, Gwanak, Seoul, Korea
- Program of Cognitive Science, Seoul National University, Gwanak, Seoul, Korea
- * E-mail:
| |
Collapse
|
89
|
van den Hurk J, Pegado F, Martens F, Op de Beeck HP. The Search for the Face of the Visual Homunculus. Trends Cogn Sci 2015; 19:638-641. [DOI: 10.1016/j.tics.2015.09.007] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2015] [Revised: 09/08/2015] [Accepted: 09/10/2015] [Indexed: 11/15/2022]
|
90
|
Muckli L, De Martino F, Vizioli L, Petro LS, Smith FW, Ugurbil K, Goebel R, Yacoub E. Contextual Feedback to Superficial Layers of V1. Curr Biol 2015; 25:2690-5. [PMID: 26441356 PMCID: PMC4612466 DOI: 10.1016/j.cub.2015.08.057] [Citation(s) in RCA: 219] [Impact Index Per Article: 24.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2015] [Revised: 07/27/2015] [Accepted: 08/25/2015] [Indexed: 11/17/2022]
Abstract
Neuronal cortical circuitry comprises feedforward, lateral, and feedback projections, each of which terminates in distinct cortical layers [1–3]. In sensory systems, feedforward processing transmits signals from the external world into the cortex, whereas feedback pathways signal the brain's inference of the world [4–11]. However, the integration of feedforward, lateral, and feedback inputs within each cortical area impedes the investigation of feedback, and to date, no technique has isolated the feedback of visual scene information in distinct layers of healthy human cortex. We masked feedforward input to a region of V1 cortex and studied the remaining internal processing. Using high-resolution functional brain imaging (0.8 mm³) and multivoxel pattern information techniques, we demonstrate that during normal visual stimulation scene information peaks in mid-layers. Conversely, we found that contextual feedback information peaks in outer, superficial layers. Further, we found that shifting the position of the visual scene surrounding the mask parametrically modulates feedback in superficial layers of V1. Our results reveal the layered cortical organization of external versus internal visual processing streams during perception in healthy human subjects. We provide empirical support for theoretical feedback models such as predictive coding [10, 12] and coherent infomax [13] and reveal the potential of high-resolution fMRI to access internal processing in sub-millimeter human cortex.
- High-resolution MRI shows functional information patterns in non-stimulated V1
- Non-stimulated V1 receives cortical feedback information to superficial layers
- Feedback to non-stimulated V1 superficial layers is predictive of visual context
Collapse
Affiliation(s)
- Lars Muckli
- Centre for Cognitive Neuroimaging (CCNi), Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, 58 Hillhead Street, G12 8QB Scotland, UK.
| | - Federico De Martino
- Faculty of Psychology and Neuroscience, Department of Cognitive Neurosciences, Maastricht University, Oxfordlaan 55, 6229 EV Maastricht, the Netherlands; Center for Magnetic Resonance Research (CMRR), Department of Radiology, University of Minnesota, 2021 Sixth Street SE, Minneapolis, MN 55455, USA
| | - Luca Vizioli
- Centre for Cognitive Neuroimaging (CCNi), Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, 58 Hillhead Street, G12 8QB Scotland, UK
| | - Lucy S Petro
- Centre for Cognitive Neuroimaging (CCNi), Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, 58 Hillhead Street, G12 8QB Scotland, UK
| | - Fraser W Smith
- School of Psychology, University of East Anglia, Norwich Research Park, Norwich NR4 7TJ, UK
| | - Kamil Ugurbil
- Center for Magnetic Resonance Research (CMRR), Department of Radiology, University of Minnesota, 2021 Sixth Street SE, Minneapolis, MN 55455, USA
| | - Rainer Goebel
- Faculty of Psychology and Neuroscience, Department of Cognitive Neurosciences, Maastricht University, Oxfordlaan 55, 6229 EV Maastricht, the Netherlands; Department of Neuroimaging and Neuromodeling, Netherlands Institute for Neuroscience, Meibergdreef 47, 1105 BA Amsterdam, the Netherlands
| | - Essa Yacoub
- Center for Magnetic Resonance Research (CMRR), Department of Radiology, University of Minnesota, 2021 Sixth Street SE, Minneapolis, MN 55455, USA
| |
Collapse
|
91
|
Abstract
Despite considerable progress in the identification of the molecular targets of general anesthetics, it remains unclear how these drugs affect the brain at the systems level to suppress consciousness. According to recent proposals, anesthetics may achieve this feat by interfering with corticocortical top-down processes, that is, by interrupting information flow from association to early sensory cortices. Such a view entails two immediate questions. First, at which anatomical site, and by virtue of which physiological mechanism, do anesthetics interfere with top-down signals? Second, why does a breakdown of top-down signaling cause unconsciousness? While an answer to the first question can be gleaned from emerging neurophysiological evidence on dendritic signaling in cortical pyramidal neurons, a response to the second is offered by increasingly popular theoretical frameworks that place the element of prediction at the heart of conscious perception.
Collapse
|
92
|
Chan JL, Kucyi A, DeSouza JFX. Stable Task Representations under Attentional Load Revealed with Multivariate Pattern Analysis of Human Brain Activity. J Cogn Neurosci 2015; 27:1789-800. [PMID: 25941872 DOI: 10.1162/jocn_a_00819] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Performing multiple tasks concurrently places a load on limited attentional resources and results in disrupted task performance. Although human neuroimaging studies have investigated the neural correlates of attentional load, how attentional load affects task processing is poorly understood. Here, task-related neural activity was investigated using fMRI with conventional univariate analysis and multivariate pattern analysis (MVPA) while participants performed blocks of prosaccades and antisaccades, either with or without a rapid serial visual presentation (RSVP) task. Performing prosaccades and antisaccades with RSVP increased error rates and RTs, decreased mean activation in frontoparietal brain areas associated with oculomotor control, and eliminated differences in activation between prosaccades and antisaccades. However, task identity could be decoded from spatial patterns of activation both in the absence and presence of an attentional load. Furthermore, in the FEFs and intraparietal sulcus, these spatial representations were found to be similar using cross-trial-type MVPA, which suggests stability under attentional load. These results demonstrate that attentional load may disrupt the strength of task-related neural activity, rather than the identity of task representations.
Collapse
Affiliation(s)
| | - Aaron Kucyi
- University of Toronto
- Harvard Medical School
- Massachusetts General Hospital
| | | |
Collapse
|
93
|
Luft CDB, Meeson A, Welchman AE, Kourtzi Z. Decoding the future from past experience: learning shapes predictions in early visual cortex. J Neurophysiol 2015; 113:3159-71. [PMID: 25744884 PMCID: PMC4432681 DOI: 10.1152/jn.00753.2014] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2014] [Accepted: 02/25/2015] [Indexed: 11/22/2022] Open
Abstract
Learning the structure of the environment is critical for interpreting the current scene and predicting upcoming events. However, the brain mechanisms that support our ability to translate knowledge about scene statistics to sensory predictions remain largely unknown. Here we provide evidence that learning of temporal regularities shapes representations in early visual cortex that relate to our ability to predict sensory events. We tested the participants' ability to predict the orientation of a test stimulus after exposure to sequences of leftward- or rightward-oriented gratings. Using fMRI decoding, we identified brain patterns related to the observers' visual predictions rather than stimulus-driven activity. Decoding of predicted orientations following structured sequences was enhanced after training, while decoding of cued orientations following exposure to random sequences did not change. These predictive representations appear to be driven by the same large-scale neural populations that encode actual stimulus orientation and to be specific to the learned sequence structure. Thus our findings provide evidence that learning temporal structures supports our ability to predict future events by reactivating selective sensory representations as early as in primary visual cortex.
Collapse
Affiliation(s)
- Caroline D B Luft
- Department of Psychology, Goldsmiths, University of London, London, United Kingdom
| | - Alan Meeson
- School of Psychology, University of Birmingham, Birmingham, United Kingdom
| | - Andrew E Welchman
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
| | - Zoe Kourtzi
- Department of Psychology, University of Cambridge, Cambridge, United Kingdom
| |
Collapse
|
94
|
Petro LS, Vizioli L, Muckli L. Contributions of cortical feedback to sensory processing in primary visual cortex. Front Psychol 2014; 5:1223. [PMID: 25414677 PMCID: PMC4222340 DOI: 10.3389/fpsyg.2014.01223] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2014] [Accepted: 10/09/2014] [Indexed: 11/13/2022] Open
Abstract
Closing the structure-function divide is more challenging in the brain than in any other organ (Lichtman and Denk, 2011). For example, in early visual cortex, feedback projections to V1 can be quantified (e.g., Budd, 1998) but the understanding of feedback function is comparatively rudimentary (Muckli and Petro, 2013). Focusing on the function of feedback, we discuss how textbook descriptions mask the complexity of V1 responses, and how feedback and local activity reflects not only sensory processing but internal brain states.
Collapse
Affiliation(s)
- Lucy S Petro
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
| | - Luca Vizioli
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
| | - Lars Muckli
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
| |
Collapse
|
95
|
Expectation in perceptual decision making: neural and computational mechanisms. Nat Rev Neurosci 2014; 15:745-56. [DOI: 10.1038/nrn3838] [Citation(s) in RCA: 461] [Impact Index Per Article: 46.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
|
96
|
Reinl M, Bartels A. Face processing regions are sensitive to distinct aspects of temporal sequence in facial dynamics. Neuroimage 2014; 102 Pt 2:407-15. [PMID: 25132020 DOI: 10.1016/j.neuroimage.2014.08.011] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2014] [Revised: 07/25/2014] [Accepted: 08/04/2014] [Indexed: 12/16/2022] Open
Abstract
Facial movement conveys important information for social interactions, yet its neural processing is poorly understood. Computational models propose that shape-sensitive and temporal-sequence-sensitive mechanisms interact in processing dynamic faces. While face processing regions are known to respond to facial movement, their sensitivity to particular temporal sequences has barely been studied. Here we used fMRI to examine the sensitivity of human face-processing regions to two aspects of directionality in facial movement trajectories. We presented genuine movie recordings of increasing and decreasing fear expressions, each of which were played in natural or reversed frame order. This two-by-two factorial design matched low-level visual properties, static content, and motion energy within each factor: emotion-direction (increasing or decreasing emotion) and timeline (natural versus artificial). The results showed sensitivity for emotion-direction in FFA, which was timeline-dependent as it only occurred within the natural frame order, and sensitivity to timeline in the STS, which was emotion-direction-dependent as it only occurred for decreased fear. The occipital face area (OFA) was sensitive to the factor timeline. These findings reveal interacting temporal-sequence-sensitive mechanisms that are responsive to both ecological meaning and to prototypical unfolding of facial dynamics. These mechanisms are temporally directional, provide socially relevant information regarding emotional state or naturalness of behavior, and agree with predictions from modeling and predictive coding theory.
Collapse
Affiliation(s)
- Maren Reinl
- Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, and Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany
| | - Andreas Bartels
- Vision and Cognition Lab, Centre for Integrative Neuroscience, University of Tübingen, and Max Planck Institute for Biological Cybernetics, Tübingen 72076, Germany.
| |
Collapse
|
97
|
Vetter P, Newen A. Varieties of cognitive penetration in visual perception. Conscious Cogn 2014; 27:62-75. [DOI: 10.1016/j.concog.2014.04.007] [Citation(s) in RCA: 131] [Impact Index Per Article: 13.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2013] [Revised: 04/09/2014] [Accepted: 04/12/2014] [Indexed: 11/28/2022]
|
98
|
Vetter P, Smith FW, Muckli L. Decoding sound and imagery content in early visual cortex. Curr Biol 2014; 24:1256-62. [PMID: 24856208 PMCID: PMC4046224 DOI: 10.1016/j.cub.2014.04.020] [Citation(s) in RCA: 146] [Impact Index Per Article: 14.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2013] [Revised: 02/28/2014] [Accepted: 04/08/2014] [Indexed: 11/17/2022]
Abstract
Human early visual cortex was traditionally thought to process simple visual features such as orientation, contrast, and spatial frequency via feedforward input from the lateral geniculate nucleus (e.g., [1]). However, the role of nonretinal influence on early visual cortex is so far insufficiently investigated despite much evidence that feedback connections greatly outnumber feedforward connections [2–5]. Here, we explored in five fMRI experiments how information originating from audition and imagery affects the brain activity patterns in early visual cortex in the absence of any feedforward visual stimulation. We show that category-specific information from both complex natural sounds and imagery can be read out from early visual cortex activity in blindfolded participants. The coding of nonretinal information in the activity patterns of early visual cortex is common across actual auditory perception and imagery and may be mediated by higher-level multisensory areas. Furthermore, this coding is robust to mild manipulations of attention and working memory but affected by orthogonal, cognitively demanding visuospatial processing. Crucially, the information fed down to early visual cortex is category specific and generalizes to sound exemplars of the same category, providing evidence for abstract information feedback rather than precise pictorial feedback. Our results suggest that early visual cortex receives nonretinal input from other brain areas when it is generated by auditory perception and/or imagery, and this input carries common abstract information. Our findings are compatible with feedback of predictive information to the earliest visual input level (e.g., [6]), in line with predictive coding models [7–10].
- Early visual cortex receives nonretinal input carrying abstract information
- Both auditory perception and imagery generate consistent top-down input
- Information feedback may be mediated by multisensory areas
- Feedback is robust to attentional, but not visuospatial, manipulation
Collapse
Affiliation(s)
- Petra Vetter
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK; Laboratory for Behavioral Neurology and Imaging of Cognition, Department of Neuroscience, Medical School and Swiss Center for Affective Sciences, University of Geneva, Campus Biotech, Case Postale 60, 1211 Geneva, Switzerland.
| | - Fraser W Smith
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK
| | - Lars Muckli
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, College of Medical, Veterinary and Life Sciences, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK.
| |
Collapse
|
99
|
Goesaert E, Van Baelen M, Spileers W, Wagemans J, Op de Beeck HP. Visual space and object space in the cerebral cortex of retinal disease patients. PLoS One 2014; 9:e88248. [PMID: 24505449 PMCID: PMC3914958 DOI: 10.1371/journal.pone.0088248] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2013] [Accepted: 01/06/2014] [Indexed: 11/29/2022] Open
Abstract
The lower areas of the hierarchically organized visual cortex are strongly retinotopically organized, with strong responses to specific retinotopic stimuli, and no response to other stimuli outside these preferred regions. Higher areas in the ventral occipitotemporal cortex show a weak eccentricity bias, and are mainly sensitive for object category (e.g., faces versus buildings). This study investigated how the mapping of eccentricity and category sensitivity using functional magnetic resonance imaging is affected by a retinal lesion in two very different low vision patients: a patient with a large central scotoma, affecting central input to the retina (juvenile macular degeneration), and a patient where input to the peripheral retina is lost (retinitis pigmentosa). From the retinal degeneration, we can predict specific losses of retinotopic activation. These predictions were confirmed when comparing stimulus activations with a no-stimulus fixation baseline. At the same time, however, seemingly contradictory patterns of activation, unexpected given the retinal degeneration, were observed when different stimulus conditions were directly compared. These unexpected activations were due to position-specific deactivations, indicating the importance of investigating absolute activation (relative to a no-stimulus baseline) rather than relative activation (comparing different stimulus conditions). Data from two controls, with simulated scotomas that matched the lesions in the two patients also showed that retinotopic mapping results could be explained by a combination of activations at the stimulated locations and deactivations at unstimulated locations. Category sensitivity was preserved in the two patients. In sum, when we take into account the full pattern of activations and deactivations elicited in retinotopic cortex and throughout the ventral object vision pathway in low vision patients, the pattern of (de)activation is consistent with the retinal loss.
Collapse
Affiliation(s)
- Elfi Goesaert
- Laboratory of Biological Psychology, University of Leuven (KU Leuven), Leuven, Belgium
| | - Marc Van Baelen
- Laboratory of Experimental Psychology, University of Leuven (KU Leuven), Leuven, Belgium
| | - Werner Spileers
- Division of Ophthalmology, University of Leuven (KU Leuven), Leuven, Belgium
| | - Johan Wagemans
- Laboratory of Experimental Psychology, University of Leuven (KU Leuven), Leuven, Belgium
| | - Hans P. Op de Beeck
- Laboratory of Biological Psychology, University of Leuven (KU Leuven), Leuven, Belgium
- * E-mail:
| |
Collapse
|
100
|
Teixeira M, Pires G, Raimundo M, Nascimento S, Almeida V, Castelo-Branco M. Robust single trial identification of conscious percepts triggered by sensory events of variable saliency. PLoS One 2014; 9:e86201. [PMID: 24465957 PMCID: PMC3900484 DOI: 10.1371/journal.pone.0086201] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2013] [Accepted: 12/06/2013] [Indexed: 12/03/2022] Open
Abstract
The neural correlates of visual awareness are elusive because of its fleeting nature. Here we have addressed this issue by using single trial statistical "brain reading" of neurophysiological event-related potential (ERP) signatures of conscious perception of visual attributes with different levels of saliency. Behavioral reports were taken at every trial in 4 experiments addressing conscious access to color, luminance, and local phase offset cues. We found that single trial neurophysiological signatures of target presence can be observed around 300 ms at central parietal sites. Such signatures are significantly related with conscious perception, and their probability is related to sensory saliency levels. These findings identify a general neural correlate of conscious perception at the single trial level, since conscious perception can be decoded as such independently of stimulus salience and fluctuations of threshold levels. This approach can be generalized to successfully detect target presence in other individuals.
Collapse
Affiliation(s)
- Marta Teixeira
- Visual Neuroscience Laboratory, IBILI - Institute for Biomedical Imaging and Life Sciences, University of Coimbra, Coimbra, Portugal
- Institute for Nuclear Sciences Applied to Health (ICNAS), Brain Imaging Network of Portugal, University of Coimbra, Coimbra, Portugal
| | - Gabriel Pires
- Institute for Systems and Robotics (ISR), University of Coimbra, Coimbra, Portugal
| | - Miguel Raimundo
- Visual Neuroscience Laboratory, IBILI - Institute for Biomedical Imaging and Life Sciences, University of Coimbra, Coimbra, Portugal
- Institute for Nuclear Sciences Applied to Health (ICNAS), Brain Imaging Network of Portugal, University of Coimbra, Coimbra, Portugal
| | | | - Vasco Almeida
- Department of Physics, University of Beira Interior, Covilhã, Portugal
| | - Miguel Castelo-Branco
- Visual Neuroscience Laboratory, IBILI - Institute for Biomedical Imaging and Life Sciences, University of Coimbra, Coimbra, Portugal
- Institute for Nuclear Sciences Applied to Health (ICNAS), Brain Imaging Network of Portugal, University of Coimbra, Coimbra, Portugal
| |
Collapse
|