1
Cohen MA, Sung S, Alaoui Z. Familiarity Alters the Bandwidth of Perceptual Awareness. J Cogn Neurosci 2024;36:1546-1556. PMID: 38527082. DOI: 10.1162/jocn_a_02140.
Abstract
Results from paradigms like change blindness and inattentional blindness indicate that observers are unaware of numerous aspects of the visual world. However, intuition suggests that perceptual experience is richer than these results indicate. Why does it feel like we see so much when the data suggests we see so little? One possibility stems from the fact that experimental studies always present observers with stimuli that they have never seen before. Meanwhile, when forming intuitions about perceptual experience, observers reflect on their experiences with scenes with which they are highly familiar (e.g., their office). Does prior experience with a scene change the bandwidth of perceptual awareness? Here, we asked if observers were better at noticing alterations to the periphery in familiar scenes compared with unfamiliar scenes. We found that observers noticed changes to the periphery more frequently with familiar stimuli. Signal detection theoretic analyses revealed that when observers are unfamiliar with a stimulus, they are less sensitive at noticing (d') and are more conservative in their response criterion (c). Taken together, these results suggest that prior knowledge expands the bandwidth of perceptual awareness. It should be stressed that these results challenge the widely held idea that prior knowledge fills in perception. Overall, these findings highlight how prior knowledge plays an important role in determining the limits of perceptual experience and is an important factor to consider when attempting to reconcile the tension between empirical observation and personal introspection.
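The d' and c measures reported in this abstract follow the standard signal detection theory formulas (d' = z(hit rate) - z(false alarm rate); c = -[z(hit rate) + z(false alarm rate)]/2). A minimal sketch of that computation, with illustrative counts that are not the study's data:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Standard signal detection theory measures.

    A log-linear correction (add 0.5 to each cell) avoids infinite
    z-scores when an observed rate is exactly 0 or 1.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = z(hit_rate) - z(fa_rate)             # sensitivity d'
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response criterion c
    return d_prime, criterion

# Hypothetical counts: a lower hit rate with the same false alarm rate
# yields both lower sensitivity (d') and a more conservative criterion (c),
# the pattern the abstract reports for unfamiliar scenes.
d_familiar, c_familiar = sdt_measures(40, 10, 5, 45)
d_unfamiliar, c_unfamiliar = sdt_measures(25, 25, 5, 45)
```
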
2
Kóbor A, Janacsek K, Hermann P, Zavecz Z, Varga V, Csépe V, Vidnyánszky Z, Kovács G, Nemeth D. Finding Pattern in the Noise: Persistent Implicit Statistical Knowledge Impacts the Processing of Unpredictable Stimuli. J Cogn Neurosci 2024;36:1239-1264. PMID: 38683699. DOI: 10.1162/jocn_a_02173.
Abstract
Humans can extract statistical regularities of the environment to predict upcoming events. Previous research recognized that implicitly acquired statistical knowledge remained persistent and continued to influence behavior even when the regularities were no longer present in the environment. Here, in an fMRI experiment, we investigated how the persistence of statistical knowledge is represented in the brain. Participants (n = 32) completed a visual, four-choice, RT task consisting of statistical regularities. Two types of blocks constantly alternated with one another throughout the task: predictable statistical regularities in one block type and unpredictable ones in the other. Participants were unaware of the statistical regularities and their changing distribution across the blocks. Yet, they acquired the statistical regularities and showed significant statistical knowledge at the behavioral level not only in the predictable blocks but also in the unpredictable ones, albeit to a smaller extent. Brain activity in a range of cortical and subcortical areas, including early visual cortex, the insula, the right inferior frontal gyrus, and the right globus pallidus/putamen contributed to the acquisition of statistical regularities. The right insula, inferior frontal gyrus, and hippocampus as well as the bilateral angular gyrus seemed to play a role in maintaining this statistical knowledge. The results altogether suggest that statistical knowledge could be exploited in a relevant, predictable context as well as transmitted to and retrieved in an irrelevant context without a predictable structure.
Affiliation(s)
- Andrea Kóbor: Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Hungary
- Karolina Janacsek: Centre of Thinking and Learning, Institute for Lifecourse Development, School of Human Sciences, University of Greenwich, United Kingdom; ELTE Eötvös Loránd University, Hungary
- Petra Hermann: Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Hungary
- Vera Varga: Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Hungary; University of Pannonia, Hungary
- Valéria Csépe: Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Hungary; University of Pannonia, Hungary
- Zoltán Vidnyánszky: Brain Imaging Centre, HUN-REN Research Centre for Natural Sciences, Hungary
- Dezso Nemeth: INSERM, CRNL U1028 UMR5292, France; ELTE Eötvös Loránd University & HUN-REN Research Centre for Natural Sciences, Hungary; University of Atlántico Medio, Spain
3
Fu J, Hsiao CA. Decoding intelligence via symmetry and asymmetry. Sci Rep 2024;14:12525. PMID: 38822016. PMCID: PMC11143306. DOI: 10.1038/s41598-024-62906-2.
Abstract
Humans use pictures to model the world. The structure of a picture maps onto mental space to form a concept. When an internal structure matches the corresponding external structure, an observation is functional. Whether effective or not, the observation is self-consistent. In epistemology, people often disagree about whether a concept is probabilistic or certain. Based on the effect of the presented IG and the pull anti algorithm, we attempt to provide a comprehensive answer to this problem. Using the characteristics of hidden structures, we explain the difference between the macro and micro levels, and the corresponding difference between semantics and probability. In addition, the importance of attention is highlighted through the combination of symmetry and asymmetry in the presented model, and through the mechanism of chaos and collapse the model reveals. Because the subject is involved in the expression of the object, representationalism is incomplete. However, people undoubtedly reach a consensus based on the objectivity of the representation. Finally, we suggest that emotions could be used to regulate cognition.
Affiliation(s)
- Jianjing Fu: College of Media Engineering, Communication University of Zhejiang, Hangzhou, China
- Ching-An Hsiao: Fintech Engineering Technology Research Center, Guangdong University of Finance, Guangzhou, China
4
Tipado Z, Kuypers KPC, Sorger B, Ramaekers JG. Visual hallucinations originating in the retinofugal pathway under clinical and psychedelic conditions. Eur Neuropsychopharmacol 2024;85:10-20. PMID: 38648694. DOI: 10.1016/j.euroneuro.2024.04.011.
Abstract
Psychedelics like LSD (lysergic acid diethylamide) and psilocybin modulate perceptual modalities by activating mostly serotonin receptors in specific cortical (e.g., visual cortex) and subcortical (e.g., thalamus) regions of the brain. In the visual domain, these psychedelic modulations often produce peculiar distortions of viewed objects and light, and sometimes even hallucinations of non-existent environments, objects, and creatures. Although the underlying processes are poorly understood, research conducted over the past twenty years on the subjective experience of psychedelics has produced theories that attribute these perceptual alterations to a disruption of communication between cortical and subcortical regions. However, rare medical conditions of the visual system that cause perceptual distortions, such as Charles Bonnet syndrome, may shed new light on the additional importance of the retinofugal pathway in psychedelic subjective experiences. Interneurons in the retina called amacrine cells could be the first site of visual psychedelic modulation and could help disrupt the hierarchical structure by which humans perceive visual information. This paper presents an account of how the retinofugal pathway communicates and modulates visual information under psychedelic and clinical conditions, and on that basis proposes a new theory of psychedelic modulation in the retinofugal pathway.
Affiliation(s)
- Zeus Tipado: Department of Neuropsychology and Psychopharmacology, and Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, the Netherlands
- Kim P C Kuypers: Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, the Netherlands
- Bettina Sorger: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, the Netherlands
- Johannes G Ramaekers: Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, the Netherlands
5
Ju U, Wallraven C. Decoding the dynamic perception of risk and speed using naturalistic stimuli: A multivariate, whole-brain analysis. Hum Brain Mapp 2024;45:e26652. PMID: 38488473. PMCID: PMC10941534. DOI: 10.1002/hbm.26652.
Abstract
Time-resolved decoding of speed and risk perception in car driving is important for understanding the perceptual processes related to driving safety. In this study, we used an fMRI-compatible trackball with naturalistic stimuli to record dynamic ratings of perceived risk and speed and investigated the degree to which different brain regions were able to decode these. We presented participants with first-person perspective videos of cars racing on the same course. These videos varied in terms of subjectively perceived speed and risk profiles, as determined during a behavioral pilot. During the fMRI experiment, participants used the trackball to dynamically rate subjective risk in a first and speed in a second session and assessed overall risk and speed after watching each video. A standard multivariate correlation analysis based on these ratings revealed sparse decodability in visual areas only for the risk ratings. In contrast, the dynamic rating-based correlation analysis uncovered frontal, visual, and temporal region activation for subjective risk and dorsal visual stream and temporal region activation for subjectively perceived speed. Interestingly, further analyses showed that the brain regions for decoding risk changed over time, whereas those for decoding speed remained constant. Overall, our results demonstrate the advantages of time-resolved decoding to help our understanding of the dynamic networks associated with decoding risk and speed perception in realistic driving scenarios.
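The core of a time-resolved, rating-based correlation analysis like the one this abstract describes can be sketched as a sliding-window correlation between a brain signal and a continuous rating trace. This is a minimal illustration on synthetic data (numpy assumed; the window length, step size, and noise level are invented for illustration, not taken from the study):

```python
import numpy as np

def sliding_correlation(signal, rating, win=20, step=5):
    """Correlate a brain signal with a continuous rating trace in
    sliding windows, yielding a time-resolved decodability profile."""
    out = []
    for start in range(0, len(signal) - win + 1, step):
        s = signal[start:start + win]
        r = rating[start:start + win]
        out.append(np.corrcoef(s, r)[0, 1])
    return np.array(out)

# Synthetic example: a signal that tracks the rating plus noise should
# show consistently positive windowed correlations.
rng = np.random.default_rng(0)
rating = rng.standard_normal(200)
signal = rating + 0.5 * rng.standard_normal(200)
profile = sliding_correlation(signal, rating)
```

In a whole-brain analysis this profile would be computed per region (or searchlight), so that regions whose decodability changes over time, as reported here for risk, can be distinguished from regions where it stays constant, as reported for speed.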
Affiliation(s)
- Uijong Ju: Department of Information Display, Kyung Hee University, Seoul, South Korea
- Christian Wallraven: Department of Brain and Cognitive Engineering, Korea University, South Korea; Department of Artificial Intelligence, Korea University, South Korea
6
Monaco S, Menghi N, Crawford JD. Action-specific feature processing in the human cortex: An fMRI study. Neuropsychologia 2024;194:108773. PMID: 38142960. DOI: 10.1016/j.neuropsychologia.2023.108773.
Abstract
Sensorimotor integration involves feedforward and reentrant processing of sensory input. Grasp-related motor activity precedes and is thought to influence visual object processing. Yet, while the importance of reentrant feedback is well established in perception, the top-down modulations for action and the neural circuits involved in this process have received less attention. Do action-specific intentions influence the processing of visual information in the human cortex? Using a cue-separation fMRI paradigm, we found that action-specific instruction processing (manual alignment vs. grasp) became apparent only after the visual presentation of oriented stimuli, and occurred as early as in the primary visual cortex and extended to the dorsal visual stream, motor and premotor areas. Further, dorsal stream area aIPS, known to be involved in object manipulation, and the primary visual cortex showed task-related functional connectivity with frontal, parietal and temporal areas, consistent with the idea that reentrant feedback from dorsal and ventral visual stream areas modifies visual inputs to prepare for action. Importantly, both the task-dependent modulations and connections were linked specifically to the object presentation phase of the task, suggesting a role in processing the action goal. Our results show that intended manual actions have an early, pervasive, and differential influence on the cortical processing of vision.
Affiliation(s)
- Simona Monaco: CIMeC - Center for Mind/Brain Sciences, University of Trento, Rovereto (TN), Italy
- Nicholas Menghi: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- J Douglas Crawford: Center for Vision Research, York University, Toronto, Ontario M3J 1P3, Canada; Vision: Science to Applications (VISTA) Program, Neuroscience Graduate Diploma Program and Departments of Psychology, Biology, and Kinesiology and Health Science, York University, Toronto, Ontario M3J 1P3, Canada
7
Hodson R, Mehta M, Smith R. The empirical status of predictive coding and active inference. Neurosci Biobehav Rev 2024;157:105473. PMID: 38030100. DOI: 10.1016/j.neubiorev.2023.105473.
Abstract
Research on predictive processing models has focused largely on two specific algorithmic theories: Predictive Coding for perception and Active Inference for decision-making. While these interconnected theories possess broad explanatory potential, they have only recently begun to receive direct empirical evaluation. Here, we review recent studies of Predictive Coding and Active Inference with a focus on evaluating the degree to which they are empirically supported. For Predictive Coding, we find that existing empirical evidence offers modest support. However, some positive results can also be explained by alternative feedforward (e.g., feature detection-based) models. For Active Inference, most empirical studies have focused on fitting these models to behavior as a means of identifying and explaining individual or group differences. While Active Inference models tend to explain behavioral data reasonably well, there has not been a focus on testing empirical validity of active inference theory per se, which would require formal comparison to other models (e.g., non-Bayesian or model-free reinforcement learning models). This review suggests that, while promising, a number of specific research directions are still necessary to evaluate the empirical adequacy and explanatory power of these algorithms.
Affiliation(s)
- Ryan Smith: Laureate Institute for Brain Research, USA
8
Wu H, Zuo Z, Yuan Z, Zhou T, Zhuo Y, Zheng N, Chen B. Neural representation of gestalt grouping and attention effect in human visual cortex. J Neurosci Methods 2023;399:109980. PMID: 37783351. DOI: 10.1016/j.jneumeth.2023.109980.
Abstract
BACKGROUND: The brain aggregates meaningless local sensory elements to form meaningful global patterns in a process called perceptual grouping. Brain imaging studies have found that neural activities in V1 are modulated during visual grouping. However, how grouping is represented in each of the early visual areas, and how attention alters these representations, is still unknown.
NEW METHOD: We adopted MVPA to decode the specific content of perceptual grouping by comparing neural activity patterns between gratings and dot-lattice stimuli that can be grouped by the proximity law. Furthermore, we quantified the grouping effect by defining a strength of grouping, and assessed the effect of attention on grouping.
RESULTS: We found that activity patterns evoked by proximity-grouped stimuli in early visual areas resemble those evoked by gratings of the same orientation. This similarity exists even when no attention is focused on the stimuli. The results also showed a progressive increase in the representational strength of grouping from V1 to V3; attention modulation of grouping was significant only in V3 among all the visual areas.
COMPARISON WITH EXISTING METHODS: Most previous work on perceptual grouping has focused on how activity amplitudes are modulated by grouping. Using MVPA, the present work successfully decoded the contents of neural activity patterns corresponding to proximity-grouping stimuli, demonstrating the utility of a content-decoding approach in research on perceptual grouping.
CONCLUSIONS: The content of neural activity patterns during perceptual grouping can be decoded in the early visual areas under both attended and unattended tasks, providing novel evidence for cascade processing of proximity grouping from V1 to V3. The strength of grouping was larger in V3 than in any other visual area, and attention modulation of the strength of grouping was significant only in V3, implying that V3 plays an important role in proximity grouping.
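The kind of MVPA decoding this abstract describes can be illustrated with a minimal correlation-based pattern classifier on synthetic voxel data (numpy only; the condition names, voxel count, trial counts, and noise level below are invented for illustration and are not the study's pipeline):

```python
import numpy as np

def correlation_mvpa(train_patterns, test_patterns):
    """Correlation-based MVPA: assign each test pattern the class whose
    mean training pattern it correlates with most strongly.

    train_patterns / test_patterns: dict class -> (trials, voxels) array.
    Returns decoding accuracy over all test trials.
    """
    means = {c: p.mean(axis=0) for c, p in train_patterns.items()}
    correct = total = 0
    for true_class, trials in test_patterns.items():
        for trial in trials:
            pred = max(means, key=lambda c: np.corrcoef(trial, means[c])[0, 1])
            correct += pred == true_class
            total += 1
    return correct / total

# Synthetic "voxel" patterns: two orientation conditions with distinct
# underlying templates plus trial-by-trial noise.
rng = np.random.default_rng(1)
templates = {c: rng.standard_normal(50) for c in ("grating_0", "grating_90")}
make = lambda c, n: templates[c] + 0.5 * rng.standard_normal((n, 50))
train = {c: make(c, 20) for c in templates}
test = {c: make(c, 20) for c in templates}
acc = correlation_mvpa(train, test)  # well above the 0.5 chance level
```

Decoding whether grouped dot lattices evoke grating-like patterns amounts to training on one stimulus class and testing on the other, with above-chance cross-decoding taken as evidence of a shared orientation representation.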
Affiliation(s)
- Hao Wu: School of Electrical Engineering, Xi'an University of Technology, Xi'an, Shaanxi 710048, China
- Zhentao Zuo: State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China; University of the Chinese Academy of Sciences, Chinese Academy of Sciences, Beijing, China
- Zejian Yuan: National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, Xi'an, Shaanxi 710049, China; Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
- Tiangang Zhou: State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China; University of the Chinese Academy of Sciences, Chinese Academy of Sciences, Beijing, China
- Yan Zhuo: State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China; University of the Chinese Academy of Sciences, Chinese Academy of Sciences, Beijing, China
- Nanning Zheng: National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, Xi'an, Shaanxi 710049, China; Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
- Badong Chen: National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, Xi'an, Shaanxi 710049, China; Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
9
Menceloglu M, Nakayama K, Song JH. Radial bias alters high-level motion perception. Vision Res 2023;209:108246. PMID: 37149959. DOI: 10.1016/j.visres.2023.108246.
Abstract
The visual system involves various orientation and visual field anisotropies, one of which is a preference for radial orientations and motion directions. By radial, we mean those directions coursing symmetrically outward from the fovea into the periphery. This bias stems from anatomical and physiological substrates in the early visual system. We recently reported that this low-level visual anisotropy can alter perceived object orientation. Here, we report that radial bias can also alter another higher-level system, the perceived direction of apparent motion. We presented a bistable apparent motion quartet in the center of the screen while participants fixated on various locations around the quartet. Participants (N = 22) were strongly biased to see the motion direction that was radial with respect to their fixation, controlling for any biases with center fixation. This was observed using a vertical-horizontal quartet as well as an oblique quartet (45° rotated quartet). The latter allowed us to rule out the contribution of the hemisphere effect where motion across the midline is perceived less often. These results extend our earlier findings on perceived object orientation, showing that low-level structural aspects of the visual system alter yet another higher-level visual process, that of apparent motion perception.
Affiliation(s)
- Melisa Menceloglu: Department of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, RI, United States
- Ken Nakayama: Department of Psychology, University of California, Berkeley, CA, United States
- Joo-Hyun Song: Department of Cognitive, Linguistic & Psychological Sciences, Brown University, Providence, RI, United States; Carney Institute for Brain Science, Brown University, Providence, RI, United States
10
Brenner E, van Straaten CAG, de Vries AJ, Baas TRD, Bröring KM, Smeets JBJ. How the timing of visual feedback influences goal-directed arm movements: delays and presentation rates. Exp Brain Res 2023;241:1447-1457. PMID: 37067561. PMCID: PMC10129945. DOI: 10.1007/s00221-023-06617-6.
Abstract
Visual feedback normally helps guide movements to their goal. When moving one's hand, such guidance has to deal with a sensorimotor delay of about 100 ms. When moving a cursor, it also has to deal with a delay of tens of milliseconds that arises between the hand moving the mouse and the cursor moving on the screen. Moreover, the cursor is presented at a certain rate, so only positions corresponding with the position of the mouse at certain moments are presented. How does the additional delay and the rate at which cursor positions are updated influence how well the cursor can be guided to the goal? We asked participants to move a cursor to consecutive targets as quickly as they could. They did so for various additional delays and presentation rates. It took longer for the mouse to reach the target when the additional delay was longer. It also took longer when a lower presentation rate was achieved by not presenting the cursor all the time. The fraction of the time during which the cursor was present was more important than the rate at which the cursor's position was updated. We conclude that the way human arm movements are guided benefits from continuous access to recent visual feedback.
Affiliation(s)
- Eli Brenner: Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081BT, Amsterdam, The Netherlands
- Chris A G van Straaten: Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081BT, Amsterdam, The Netherlands
- A Julia de Vries: Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081BT, Amsterdam, The Netherlands
- Tobias R D Baas: Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081BT, Amsterdam, The Netherlands
- Kirsten M Bröring: Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081BT, Amsterdam, The Netherlands
- Jeroen B J Smeets: Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081BT, Amsterdam, The Netherlands
11
From representations in predictive processing to degrees of representational features. Minds Mach (Dordr) 2022. DOI: 10.1007/s11023-022-09599-6.
Abstract
Whilst the topic of representations is one of the key topics in philosophy of mind, it has only occasionally been noted that representations and representational features may be gradual. Apart from vague allusions, little has been said on what representational gradation amounts to and why it could be explanatorily useful. The aim of this paper is to provide a novel take on gradation of representational features within the neuroscientific framework of predictive processing. More specifically, we provide a gradual account of two features of structural representations: structural similarity and decoupling. We argue that structural similarity can be analysed in terms of two dimensions: number of preserved relations and state space granularity. Both dimensions can take on different values and hence render structural similarity gradual. We further argue that decoupling is gradual in two ways. First, we show that different brain areas are involved in decoupled cognitive processes to a greater or lesser degree depending on the cause (internal or external) of their activity. Second, and more importantly, we show that the degree of decoupling can be further regulated in some brain areas through precision weighting of prediction error. We lastly argue that gradation of decoupling (via precision weighting) and gradation of structural similarity (via state space granularity) are conducive to behavioural success.
12
Xu Q, Shen J, Ran X, Tang H, Pan G, Liu JK. Robust Transcoding Sensory Information With Neural Spikes. IEEE Trans Neural Netw Learn Syst 2022;33:1935-1946. PMID: 34665741. DOI: 10.1109/tnnls.2021.3107449.
Abstract
Neural coding, including encoding and decoding, is one of the key problems in neuroscience for understanding how the brain uses neural signals to relate sensory perception and motor behavior to neural systems. However, most existing studies deal only with the continuous signals of neural systems, neglecting a unique feature of biological neurons, the spike, which is the fundamental unit of neural computation as well as a building block of brain-machine interfaces. To address these limitations, we propose a transcoding framework that encodes multimodal sensory information into neural spikes and then reconstructs the stimuli from those spikes. Sensory information can be compressed to 10% of its original size in terms of neural spikes, yet 100% of the information can be re-extracted by reconstruction. Our framework can not only feasibly and accurately reconstruct dynamic visual and auditory scenes, but also rebuild stimulus patterns from functional magnetic resonance imaging (fMRI) brain activity. More importantly, it is highly robust to various types of artificial noise and background signals. The proposed framework provides efficient ways to perform multimodal feature representation and reconstruction in a high-throughput fashion, with potential applications in efficient neuromorphic computing in noisy environments.
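The general idea of transcoding a continuous signal into sparse spikes and reconstructing it can be illustrated with a toy delta-modulation coder. This is not the authors' framework, just a minimal sketch of the signal-to-spikes-to-signal round trip; the threshold and test signal are arbitrary choices:

```python
import numpy as np

def delta_encode(signal, threshold=0.1):
    """Encode an analog signal as sparse +1/-1 spike events: emit a
    spike whenever the signal drifts one threshold step away from the
    decoder's running estimate (delta modulation)."""
    spikes = np.zeros(len(signal), dtype=int)
    estimate = 0.0
    for i, x in enumerate(signal):
        if x - estimate >= threshold:
            spikes[i] = 1
            estimate += threshold
        elif estimate - x >= threshold:
            spikes[i] = -1
            estimate -= threshold
    return spikes

def delta_decode(spikes, threshold=0.1):
    """Reconstruct the signal by accumulating the spike steps."""
    return np.cumsum(spikes) * threshold

t = np.linspace(0, 2 * np.pi, 500)
signal = np.sin(t)
spikes = delta_encode(signal)   # most entries are 0: a sparse spike train
recon = delta_decode(spikes)    # tracks the original within ~one threshold
```

Even this toy coder leaves most time bins empty while keeping the reconstruction error bounded by the threshold, which conveys the flavor of compressing a signal into spikes without losing the recoverable content.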
13
van Kemenade BM, Wilbertz G, Müller A, Sterzer P. Non-stimulated regions in early visual cortex encode the contents of conscious visual perception. Hum Brain Mapp 2021;43:1394-1402. PMID: 34862702. PMCID: PMC8837582. DOI: 10.1002/hbm.25731.
Abstract
Predictions shape our perception. The theory of predictive processing poses that our brains make sense of incoming sensory input by generating predictions, which are sent back from higher to lower levels of the processing hierarchy. These predictions are based on our internal model of the world and enable inferences about the hidden causes of the sensory input data. It has been proposed that conscious perception corresponds to the currently most probable internal model of the world. Accordingly, predictions influencing conscious perception should be fed back from higher to lower levels of the processing hierarchy. Here, we used functional magnetic resonance imaging and multivoxel pattern analysis to show that non‐stimulated regions of early visual areas contain information about the conscious perception of an ambiguous visual stimulus. These results indicate that early sensory cortices in the human brain receive predictive feedback signals that reflect the current contents of conscious perception.
Affiliation(s)
- Bianca M van Kemenade: Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK; Department of Psychiatry and Psychotherapy, Philipps-University Marburg, Marburg, Germany; Department of Psychiatry and Psychotherapy, Charité Campus Mitte, Berlin, Germany
- Gregor Wilbertz: Department of Psychology, Freie Universität Berlin, Berlin, Germany; Department of Psychiatry and Psychotherapy, Charité Campus Mitte, Berlin, Germany
- Annalena Müller: Department of Experimental and Biological Psychology, University of Potsdam, Potsdam, Germany; Department of Psychiatry and Psychotherapy, Charité Campus Mitte, Berlin, Germany
- Philipp Sterzer: Department of Psychiatry and Psychotherapy, Charité Campus Mitte, Berlin, Germany
14
Resolving visual motion through perceptual gaps. Trends Cogn Sci 2021;25:978-991. PMID: 34489180. DOI: 10.1016/j.tics.2021.07.017.
Abstract
Perceptual gaps can be caused by objects in the foreground temporarily occluding objects in the background or by eyeblinks, which briefly but frequently interrupt visual information. Resolving visual motion across perceptual gaps is particularly challenging, as object position changes during the gap. We examine how visual motion is maintained and updated through externally driven (occlusion) and internally driven (eyeblinks) perceptual gaps. Focusing on both phenomenology and potential mechanisms such as suppression, extrapolation, and integration, we present a framework for how perceptual gaps are resolved over space and time. We finish by highlighting critical questions and directions for future work.
15
Abstract
Selectivity for many basic properties of visual stimuli, such as orientation, is thought to be organized at the scale of cortical columns, making it difficult or impossible to measure directly with noninvasive human neuroscience measurement. However, computational analyses of neuroimaging data have shown that selectivity for orientation can be recovered by considering the pattern of response across a region of cortex. This suggests that computational analyses can reveal representation encoded at a finer spatial scale than is implied by the spatial resolution limits of measurement techniques. This potentially opens up the possibility to study a much wider range of neural phenomena that are otherwise inaccessible through noninvasive measurement. However, as we review in this article, a large body of evidence suggests an alternative hypothesis to this superresolution account: that orientation information is available at the spatial scale of cortical maps and thus easily measurable at the spatial resolution of standard techniques. In fact, a population model shows that this orientation information need not even come from single-unit selectivity for orientation tuning, but instead can result from population selectivity for spatial frequency. Thus, a categorical error of interpretation can result whereby orientation selectivity can be confused with spatial frequency selectivity. This is similarly problematic for the interpretation of results from numerous studies of more complex representations and cognitive functions that have built upon the computational techniques used to reveal stimulus orientation. We suggest in this review that these interpretational ambiguities can be avoided by treating computational analyses as models of the neural processes that give rise to measurement. Building on the modeling tradition in vision science and asking whether population models meet a set of core criteria is important for creating the foundation for a cumulative and replicable approach to making valid inferences from human neuroscience measurements. Expected final online publication date for the Annual Review of Vision Science, Volume 7 is September 2021.
Affiliation(s)
- Justin L Gardner
- Department of Psychology, Stanford University, Stanford, California 94305, USA
- Elisha P Merriam
- Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland 20892, USA
16
Svanera M, Morgan AT, Petro LS, Muckli L. A self-supervised deep neural network for image completion resembles early visual cortex fMRI activity patterns for occluded scenes. J Vis 2021; 21:5. [PMID: 34259828 PMCID: PMC8288063 DOI: 10.1167/jov.21.7.5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2020] [Accepted: 05/14/2021] [Indexed: 11/24/2022] Open
Abstract
The promise of artificial intelligence in understanding biological vision relies on the comparison of computational models with brain data with the goal of capturing functional principles of visual information processing. Convolutional neural networks (CNN) have successfully matched the transformations in hierarchical processing occurring along the brain's feedforward visual pathway, extending into ventral temporal cortex. However, we are still to learn if CNNs can successfully describe feedback processes in early visual cortex. Here, we investigated similarities between human early visual cortex and a CNN with encoder/decoder architecture, trained with self-supervised learning to fill occlusions and reconstruct an unseen image. Using representational similarity analysis (RSA), we compared 3T functional magnetic resonance imaging (fMRI) data from a nonstimulated patch of early visual cortex in human participants viewing partially occluded images, with the different CNN layer activations from the same images. Results show that our self-supervised image-completion network outperforms a classical object-recognition supervised network (VGG16) in terms of similarity to fMRI data. This work provides additional evidence that optimal models of the visual system might come from less feedforward architectures trained with less supervision. We also find that CNN decoder pathway activations are more similar to brain processing compared to encoder activations, suggesting an integration of mid- and low/middle-level features in early visual cortex. Challenging an artificial intelligence model to learn natural image representations via self-supervised learning and comparing them with brain data can help us to constrain our understanding of information processing, such as neuronal predictive coding.
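The representational similarity analysis (RSA) used here reduces to comparing dissimilarity matrices. A minimal sketch with invented toy data standing in for fMRI patterns and CNN layer activations (not the authors' code or stimuli):

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.standard_normal((6, 3))                 # 6 images, 3 latent features
brain = latent @ rng.standard_normal((3, 50))        # 50 'voxels' reading out the code
model_same = latent @ rng.standard_normal((3, 30))   # model layer sharing the geometry
model_diff = rng.standard_normal((6, 30))            # unrelated representation

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between response patterns for every pair of conditions."""
    return 1.0 - np.corrcoef(patterns)

def rsa_similarity(a, b):
    """Second-order similarity: correlate the upper triangles of two RDMs."""
    iu = np.triu_indices(a.shape[0], k=1)
    return np.corrcoef(rdm(a)[iu], rdm(b)[iu])[0, 1]

# The representation whose geometry matches the 'brain' wins the comparison.
print(rsa_similarity(brain, model_same) > rsa_similarity(brain, model_diff))
```

Because RSA compares geometries rather than raw activations, it lets fMRI voxels and CNN layer units be compared despite living in entirely different spaces.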
Affiliation(s)
- Michele Svanera
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, UK
- Andrew T Morgan
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, UK
- Lucy S Petro
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, UK
- Lars Muckli
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, UK
17
Abstract
Active inference is a first principle account of how autonomous agents operate in dynamic, nonstationary environments. This problem is also considered in reinforcement learning, but limited work exists on comparing the two approaches on the same discrete-state environments. In this letter, we provide (1) an accessible overview of the discrete-state formulation of active inference, highlighting natural behaviors in active inference that are generally engineered in reinforcement learning, and (2) an explicit discrete-state comparison between active inference and reinforcement learning on an OpenAI gym baseline. We begin by providing a condensed overview of the active inference literature, in particular viewing the various natural behaviors of active inference agents through the lens of reinforcement learning. We show that by operating in a pure belief-based setting, active inference agents can carry out epistemic exploration, and account for uncertainty about their environment, in a Bayes-optimal fashion. Furthermore, we show that the reliance on an explicit reward signal in reinforcement learning is removed in active inference, where reward can simply be treated as another observation we have a preference over; even in the total absence of rewards, agent behaviors are learned through preference learning. We make these properties explicit by showing two scenarios in which active inference agents can infer behaviors in reward-free environments compared to both Q-learning and Bayesian model-based reinforcement learning agents and by placing zero prior preferences over rewards and learning the prior preferences over the observations corresponding to reward. We conclude by noting that this formalism can be applied to more complex settings (e.g., robotic arm movement, Atari games) if appropriate generative models can be formulated. In short, we aim to demystify the behavior of active inference agents by presenting an accessible discrete state-space and time formulation and demonstrate these behaviors in an OpenAI gym environment, alongside reinforcement learning agents.
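The "reward as preferred observation" idea can be sketched in a few lines. The toy two-action example below is illustrative only: the likelihoods and preferences are invented, and it implements just the risk term of expected free energy, not the full active-inference scheme:

```python
import numpy as np

# Two actions lead (noisily) to one of two observations; the 'reward'
# is encoded purely as a prior preference over observations.
likelihood = np.array([[0.9, 0.1],    # P(o | action 0)
                       [0.2, 0.8]])   # P(o | action 1)
log_pref = np.log([0.05, 0.95])       # the agent prefers observation 1

def risk(q_o):
    """Risk term of expected free energy: KL divergence from the
    predicted observation distribution to the preferred one."""
    return float(np.sum(q_o * (np.log(q_o) - log_pref)))

G = np.array([risk(likelihood[a]) for a in range(2)])
policy = np.exp(-G) / np.exp(-G).sum()    # softmax over negative EFE
print(int(policy.argmax()))               # → 1: preference acts like reward
```

No scalar reward signal appears anywhere; the agent favors action 1 simply because its predicted observations diverge least from the prior preference.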
Affiliation(s)
- Noor Sajid
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, WC1N 3AR, U.K.
- Philip J Ball
- Machine Learning Research Group, Department of Engineering Science, University of Oxford, Oxford OX1 3PJ, U.K.
- Thomas Parr
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, WC1N 3AR, U.K.
- Karl J Friston
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, WC1N 3AR, U.K.
18
Neural responses to apparent motion can be predicted by responses to non-moving stimuli. Neuroimage 2020; 218:116973. [PMID: 32464291 PMCID: PMC7422841 DOI: 10.1016/j.neuroimage.2020.116973] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2020] [Revised: 04/28/2020] [Accepted: 05/17/2020] [Indexed: 12/04/2022] Open
Abstract
When two objects are presented in alternation at two locations, they are seen as a single object moving from one location to the other. This apparent motion (AM) percept is experienced for objects located at short and also at long distances. However, current models cannot explain how the brain integrates information over large distances to create such long-range AM. This study investigates the neural markers of AM by parcelling out the contribution of spatial and temporal interactions not specific to motion. In two experiments, participants’ EEG was recorded while they viewed two stimuli inducing AM. Different combinations of these stimuli were also shown in a static context to predict an AM neural response where no motion is perceived. We compared the goodness of fit between these different predictions and found consistent results in both experiments. At short-range, the addition of the inhibitory spatial and temporal interactions not specific to motion improved the AM prediction. However, there was no indication that spatial or temporal non-linear interactions were present at long-range. This suggests that short- and long-range AM rely on different neural mechanisms. Importantly, our results also show that at both short- and long-range, responses generated by a moving stimulus could be well predicted from conditions in which no motion is perceived. That is, the EEG response to a moving stimulus is simply a combination of individual responses to non-moving stimuli. This demonstrates a dissociation between the brain response and the subjective percept of motion.
Highlights:
- EEG responses are inhibited by spatial and temporal stimulus interactions.
- These interactions are important for motion at short but not at long distances.
- We find no trace of a specific neural signature of motion perception.
- Neural responses to motion are well predicted by responses to non-moving stimuli.
19
Walsh KS, McGovern DP, Clark A, O'Connell RG. Evaluating the neurophysiological evidence for predictive processing as a model of perception. Ann N Y Acad Sci 2020; 1464:242-268. [PMID: 32147856 PMCID: PMC7187369 DOI: 10.1111/nyas.14321] [Citation(s) in RCA: 105] [Impact Index Per Article: 26.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2019] [Revised: 01/21/2020] [Accepted: 02/03/2020] [Indexed: 12/12/2022]
Abstract
For many years, the dominant theoretical framework guiding research into the neural origins of perceptual experience has been provided by hierarchical feedforward models, in which sensory inputs are passed through a series of increasingly complex feature detectors. However, the long‐standing orthodoxy of these accounts has recently been challenged by a radically different set of theories that contend that perception arises from a purely inferential process supported by two distinct classes of neurons: those that transmit predictions about sensory states and those that signal sensory information that deviates from those predictions. Although these predictive processing (PP) models have become increasingly influential in cognitive neuroscience, they are also criticized for lacking the empirical support to justify their status. This limited evidence base partly reflects the considerable methodological challenges that are presented when trying to test the unique predictions of these models. However, a confluence of technological and theoretical advances has prompted a recent surge in human and nonhuman neurophysiological research seeking to fill this empirical gap. Here, we will review this new research and evaluate the degree to which its findings support the key claims of PP.
Affiliation(s)
- Kevin S Walsh
- Trinity College Institute of Neuroscience and School of Psychology, Trinity College Dublin, Dublin, Ireland
- David P McGovern
- Trinity College Institute of Neuroscience and School of Psychology, Trinity College Dublin, Dublin, Ireland; School of Psychology, Dublin City University, Dublin, Ireland
- Andy Clark
- Department of Philosophy, University of Sussex, Brighton, UK; Department of Informatics, University of Sussex, Brighton, UK
- Redmond G O'Connell
- Trinity College Institute of Neuroscience and School of Psychology, Trinity College Dublin, Dublin, Ireland
20
Ward B, Thornton A, Lay B, Chen N, Rosenberg M. Can Proficiency Criteria Be Accurately Identified During Real-Time Fundamental Movement Skill Assessment? RESEARCH QUARTERLY FOR EXERCISE AND SPORT 2020; 91:64-72. [PMID: 31479409 DOI: 10.1080/02701367.2019.1646852] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/13/2018] [Accepted: 07/18/2019] [Indexed: 06/10/2023]
Abstract
Purpose: Fundamental movement skill (FMS) assessors in education environments rely upon real-time FMS assessment; however, the recognition of individual proficiency criteria during real-time process-oriented FMS assessment may be problematic. Few studies consider the accuracy of identifying individual proficiency criteria in process-oriented FMS assessment, even though criteria are relied upon for intervention planning. This study aimed to further understand assessors' ability to recognize proficiency criteria during real-time FMS assessment and the impact of assessor experience on assessment accuracy. Methods: Ten primary teachers and seven pediatric professionals assessed 10 performances of four FMSs (Jump, Hop, Kick, Throw) presented in videos and point-light displays using the Test of Gross Motor Development-2. Results: Accuracy in identifying proficiency criteria was moderate for both pediatric professionals (74.73%) and primary teachers (69.58%), with no differences between groups. In contrast, reliability of overall proficiency scores was good to excellent (ICC>0.8) in both groups. Some individual criteria may be more difficult to assess, evidenced by large average accuracy ranges within skills (e.g., 46% difference between Throw criteria 1 (34%) and 2 (80%)). Conclusions: The study reinforces the difficulty of observing proficiency criteria during real-time FMS assessment regardless of assessor experience. Results suggest that assessors can accurately score overall FMS proficiency, whilst accurate identification of proficiency criteria is problematic. Accurate criterion identification is crucial to understand skill deficiencies and inform subsequent intervention. Attentional demands during real-time assessment may be too great to allow accurate criterion identification, even by experienced assessors, which presents an important consideration for test administrators and developers.
21
Jeong W, Kim S, Kim YJ, Lee J. Motion direction representation in multivariate electroencephalography activity for smooth pursuit eye movements. Neuroimage 2019; 202:116160. [PMID: 31491522 DOI: 10.1016/j.neuroimage.2019.116160] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2019] [Revised: 08/31/2019] [Accepted: 09/02/2019] [Indexed: 11/25/2022] Open
Abstract
Visually-guided smooth pursuit eye movements are composed of initial open-loop and later steady-state periods. Feedforward sensory information dominates the motor behavior during the open-loop pursuit, and a more complex feedback loop regulates the steady-state pursuit. To understand the neural representations of motion direction during open-loop and steady-state smooth pursuits, we recorded electroencephalography (EEG) responses from human observers while they tracked random-dot kinematograms as pursuit targets. We estimated population direction tuning curves from multivariate EEG activity using an inverted encoding model. We found significant direction tuning curves as early as about 60 ms from stimulus onset. Direction tuning responses were generalized to later times during the open-loop smooth pursuit, but they became more dynamic during the later steady-state pursuit. The encoding quality of retinal motion direction information estimated from the early direction tuning curves was predictive of trial-by-trial variation in initial pursuit directions. These results suggest that the movement directions of open-loop smooth pursuit are guided by the representation of the retinal motion present in the multivariate EEG activity.
Affiliation(s)
- Woojae Jeong
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, 16419, Republic of Korea; Department of Biomedical Engineering, Sungkyunkwan University, Suwon, 16419, Republic of Korea
- Seolmin Kim
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, 16419, Republic of Korea; Department of Biomedical Engineering, Sungkyunkwan University, Suwon, 16419, Republic of Korea
- Yee-Joon Kim
- Center for Cognition and Sociality, Institute for Basic Science (IBS), Daejeon, 34126, Republic of Korea
- Joonyeol Lee
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, 16419, Republic of Korea; Department of Biomedical Engineering, Sungkyunkwan University, Suwon, 16419, Republic of Korea
22
Towards a Unified View on Pathways and Functions of Neural Recurrent Processing. Trends Neurosci 2019; 42:589-603. [PMID: 31399289 DOI: 10.1016/j.tins.2019.07.005] [Citation(s) in RCA: 39] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2019] [Revised: 06/21/2019] [Accepted: 07/11/2019] [Indexed: 11/20/2022]
Abstract
There are three neural feedback pathways to the primary visual cortex (V1): corticocortical, pulvinocortical, and cholinergic. What are the respective functions of these three projections? Possible functions range from contextual modulation of stimulus processing and feedback of high-level information to predictive processing (PP). How are these functions subserved by different pathways and can they be integrated into an overarching theoretical framework? We propose that corticocortical and pulvinocortical connections are involved in all three functions, whereas the role of cholinergic projections is limited by their slow response to stimuli. PP provides a broad explanatory framework under which stimulus-context modulation and high-level processing are subsumed, involving multiple feedback pathways that provide mechanisms for inferring and interpreting what sensory inputs are about.
23
Du C, Du C, Huang L, He H. Reconstructing Perceived Images From Human Brain Activities With Bayesian Deep Multiview Learning. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2019; 30:2310-2323. [PMID: 30561354 DOI: 10.1109/tnnls.2018.2882456] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Neural decoding, which aims to predict external visual stimuli information from evoked brain activities, plays an important role in understanding the human visual system. Many existing methods are based on linear models, and most of them only focus on either the brain activity pattern classification or visual stimuli identification. Accurate reconstruction of the perceived images from the measured human brain activities still remains challenging. In this paper, we propose a novel deep generative multiview model for accurate visual image reconstruction from human brain activities measured by functional magnetic resonance imaging (fMRI). Specifically, we model the statistical relationships between the two views (i.e., the visual stimuli and the evoked fMRI) by using two view-specific generators with a shared latent space. On the one hand, we adopt a deep neural network architecture for visual image generation, which mimics the stages of human visual processing. On the other hand, we design a sparse Bayesian linear model for fMRI activity generation, which can effectively capture voxel correlations, suppress data noise, and avoid overfitting. Furthermore, we devise an efficient mean-field variational inference method to train the proposed model. The proposed method can accurately reconstruct visual images via Bayesian inference. In particular, we exploit a posterior regularization technique in the Bayesian inference to regularize the model posterior. The quantitative and qualitative evaluations conducted on multiple fMRI data sets demonstrate that the proposed method can reconstruct visual images more accurately than the state of the art.
24
Gardner JL, Liu T. Inverted Encoding Models Reconstruct an Arbitrary Model Response, Not the Stimulus. eNeuro 2019; 6:ENEURO.0363-18.2019. [PMID: 30923743 PMCID: PMC6437661 DOI: 10.1523/eneuro.0363-18.2019] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2018] [Revised: 02/25/2019] [Accepted: 03/03/2019] [Indexed: 01/24/2023] Open
Abstract
Probing how large populations of neurons represent stimuli is key to understanding sensory representations as many stimulus characteristics can only be discerned from population activity and not from individual single-units. Recently, inverted encoding models have been used to produce channel response functions from large spatial-scale measurements of human brain activity that are reminiscent of single-unit tuning functions and have been proposed to assay "population-level stimulus representations" (Sprague et al., 2018a). However, these channel response functions do not assay population tuning. We show by derivation that the channel response function is only determined up to an invertible linear transform. Thus, these channel response functions are arbitrary, one of an infinite family and therefore not a unique description of population representation. Indeed, simulations demonstrate that bimodal, even random, channel basis functions can account perfectly well for population responses without any underlying neural response units that are so tuned. However, the approach can be salvaged by extending it to reconstruct the stimulus, not the assumed model. We show that when this is done, even using bimodal and random channel basis functions, a unimodal function peaking at the appropriate value of the stimulus is recovered which can be interpreted as a measure of population selectivity. More precisely, the recovered function signifies how likely any value of the stimulus is, given the observed population response. Whether an analysis is recovering the hypothetical responses of an arbitrary model rather than assessing the selectivity of population representations is not an issue unique to the inverted encoding model and human neuroscience, but a general problem that must be confronted as more complex analyses intervene between measurement of population activity and presentation of data.
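The indeterminacy argument is easy to verify numerically: inverting the encoding model with a linearly transformed channel basis fits the data exactly as well, while the recovered "channel responses" change by the inverse transform. A self-contained sketch with toy data (not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_chan, n_trials = 40, 6, 100
W = rng.standard_normal((n_vox, n_chan))      # assumed channel basis (weights)
C = rng.standard_normal((n_chan, n_trials))   # underlying channel responses
B = W @ C                                      # measured population responses

def invert(weights, data):
    """Inverted encoding: least-squares channel responses given a basis."""
    return np.linalg.pinv(weights) @ data

A = rng.standard_normal((n_chan, n_chan))      # any invertible linear transform
C1 = invert(W, B)                              # original basis
C2 = invert(W @ A, B)                          # transformed basis, same data

print(np.allclose(W @ C1, B), np.allclose(W @ A @ C2, B))   # both fit perfectly
print(np.allclose(C2, np.linalg.inv(A) @ C1))               # but differ by A^-1
```

Since C1 and C2 account for the measurements equally well yet differ by an arbitrary invertible transform, neither is a unique description of the population representation; this is the paper's point in code form.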
Affiliation(s)
- Taosheng Liu
- Department of Psychology, Michigan State University, East Lansing, MI 48824
25
Park BY, Tark KJ, Shim WM, Park H. Functional connectivity based parcellation of early visual cortices. Hum Brain Mapp 2017; 39:1380-1390. [PMID: 29250855 DOI: 10.1002/hbm.23926] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2017] [Revised: 11/18/2017] [Accepted: 12/10/2017] [Indexed: 11/10/2022] Open
Abstract
The human brain can be divided into multiple brain regions based on anatomical and functional properties. Recent studies have shown that resting-state connectivity can be utilized for parcellating brain regions and identifying their distinctive roles. In this study, we aimed to parcellate the primary and secondary visual cortices (V1 and V2) into several subregions based on functional connectivity and to examine the functional characteristics of each subregion. We used resting-state data from a research database and also acquired resting-state data with retinotopy results from a local site. The long-range connectivity profile and three different algorithms (i.e., K-means, Gaussian mixture model distribution, and Ward's clustering algorithms) were adopted for the parcellation. We compared the parcellation results within V1 and V2 with the eccentric map in retinotopy. We found that the boundaries between subregions within V1 and V2 were located in the parafovea, indicating that the anterior and posterior subregions within V1 and V2 corresponded to peripheral and central visual field representations, respectively. Next, we computed correlations between each subregion within V1 and V2 and intermediate and high-order regions in ventral and dorsal visual pathways. We found that the anterior subregions of V1 and V2 were strongly associated with regions in the dorsal stream (V3A and inferior parietal gyrus), whereas the posterior subregions of V1 and V2 were highly related to regions in the ventral stream (V4v and inferior temporal gyrus). Our findings suggest that the anterior and posterior subregions of V1 and V2, parcellated based on functional connectivity, may have distinct functional properties.
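Connectivity-based parcellation of this sort clusters voxels by their long-range connectivity profiles. A minimal sketch with a hand-rolled k-means and synthetic "dorsal-leaning" vs. "ventral-leaning" profiles (all data and ROI counts invented, not the authors' pipeline):

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal k-means over the rows of X (deterministic spread-out init)."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

# Toy 'voxels': two groups with distinct connectivity profiles, i.e.
# correlations with four downstream ROIs (all numbers invented).
rng = np.random.default_rng(0)
anterior = rng.normal([1, 0, 1, 0], 0.1, size=(20, 4))    # dorsal-leaning profiles
posterior = rng.normal([0, 1, 0, 1], 0.1, size=(20, 4))   # ventral-leaning profiles
profiles = np.vstack([anterior, posterior])

labels = kmeans(profiles, k=2)
# Voxels with matching connectivity profiles land in the same parcel:
print(all(labels[:20] == labels[0]) and all(labels[20:] == labels[20]))
```

Replacing the rows of `profiles` with each voxel's correlations to a set of target regions is the essence of the parcellation step; the Gaussian mixture and Ward's variants differ only in the clustering rule.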
Affiliation(s)
- Bo-Yong Park
- Department of Electronic, Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Korea; Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, Korea
- Kyeong-Jin Tark
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, Korea
- Won Mok Shim
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, Korea; Department of Biomedical Engineering, Sungkyunkwan University, Suwon, Korea
- Hyunjin Park
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), Suwon, Korea; School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon, Korea
26
Inverted Encoding Models of Human Population Response Conflate Noise and Neural Tuning Width. J Neurosci 2017; 38:398-408. [PMID: 29167406 DOI: 10.1523/jneurosci.2453-17.2017] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2017] [Revised: 11/08/2017] [Accepted: 11/10/2017] [Indexed: 01/02/2023] Open
Abstract
Channel-encoding models offer the ability to bridge different scales of neuronal measurement by interpreting population responses, typically measured with BOLD imaging in humans, as linear sums of groups of neurons (channels) tuned for visual stimulus properties. Inverting these models to form predicted channel responses from population measurements in humans seemingly offers the potential to infer neuronal tuning properties. Here, we test the ability to make inferences about neural tuning width from inverted encoding models. We examined contrast invariance of orientation selectivity in human V1 (both sexes) and found that inverting the encoding model resulted in channel response functions that became broader with lower contrast, thus apparently violating contrast invariance. Simulations showed that this broadening could be explained by contrast-invariant single-unit tuning with the measured decrease in response amplitude at lower contrast. The decrease in response lowers the signal-to-noise ratio of population responses that results in poorer population representation of orientation. Simulations further showed that increasing signal to noise makes channel response functions less sensitive to underlying neural tuning width, and in the limit of zero noise will reconstruct the channel function assumed by the model regardless of the bandwidth of single units. We conclude that our data are consistent with contrast-invariant orientation tuning in human V1. More generally, our results demonstrate that population selectivity measures obtained by encoding models can deviate substantially from the behavior of single units because they conflate neural tuning width and noise and are therefore better used to estimate the uncertainty of decoded stimulus properties.
Significance Statement: It is widely recognized that perceptual experience arises from large populations of neurons, rather than a few single units. Yet, much theory and experiment have examined links between single units and perception. Encoding models offer a way to bridge this gap by explicitly interpreting population activity as the aggregate response of many single neurons with known tuning properties. Here we use this approach to examine contrast-invariant orientation tuning of human V1. We show with experiment and modeling that due to lower signal to noise, contrast-invariant orientation tuning of single units manifests in population response functions that broaden at lower contrast, rather than remain contrast-invariant. These results highlight the need for explicit quantitative modeling when making a reverse inference from population response profiles to single-unit responses.
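The conflation of noise and tuning width can be reproduced in simulation: hold single-unit tuning width fixed and merely lower response amplitude (as at low contrast), and the normalized channel response function broadens. A toy sketch (von Mises channels, Gaussian noise; all parameters invented, not the authors' simulation):

```python
import numpy as np

rng = np.random.default_rng(0)
n_chan, n_vox, n_trials = 8, 60, 2000
angles = 2 * np.pi * np.arange(n_chan) / n_chan

def tuning(stim):
    """Contrast-invariant channel tuning: von Mises shape, fixed width."""
    return np.exp(4 * (np.cos(angles - stim) - 1))

W = rng.standard_normal((n_vox, n_chan))   # fixed voxel weights on the channels
W_pinv = np.linalg.pinv(W)
c_true = tuning(0.0)                        # stimulus centered on the first channel

def mean_profile(amplitude, noise_sd=1.0):
    """Average normalized channel response function over noisy trials.
    'Contrast' only scales amplitude; the tuning shape never changes."""
    prof = np.zeros(n_chan)
    for _ in range(n_trials):
        b = amplitude * (W @ c_true) + noise_sd * rng.standard_normal(n_vox)
        c_hat = W_pinv @ b
        c_hat -= c_hat.min()
        prof += c_hat / c_hat.max()
    return prof / n_trials

width_high = mean_profile(amplitude=5.0).sum()   # high contrast: narrow profile
width_low = mean_profile(amplitude=0.3).sum()    # low contrast: broader profile
print(width_low > width_high)                    # broadening without any tuning change
```

Here the profile "width" is simply the area under the normalized channel response function; the low-amplitude condition yields a flatter, broader profile even though the underlying tuning never changed, mirroring the paper's conclusion.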
27
Abstract
Pain perception temporarily exaggerates abrupt thermal stimulus changes revealing a mechanism for nociceptive temporal contrast enhancement (TCE). Although the mechanism is unknown, a non-linear model with perceptual feedback accurately simulates the phenomenon. Here we test if a mechanism in the central nervous system underlies thermal TCE. Our model successfully predicted an optimal stimulus, incorporating a transient temperature offset (step-up/step-down), with maximal TCE, resulting in psychophysically verified large decrements in pain response ("offset-analgesia"; mean analgesia: 85%, n = 20 subjects). Next, this stimulus was delivered using two thermodes, one delivering the longer duration baseline temperature pulse and the other superimposing a short higher temperature pulse. The two stimuli were applied simultaneously either near or far on the same arm, or on opposite arms. Spatial separation across multiple peripheral receptive fields ensures the composite stimulus timecourse is first reconstituted in the central nervous system. Following ipsilateral stimulus cessation on the high temperature thermode, but before cessation of the low temperature stimulus, properties of TCE were observed both for individual subjects and in group-mean responses. This demonstrates that a central integration mechanism is sufficient to evoke painful thermal TCE, an essential step in transforming transient afferent nociceptive signals into a stable pain perception.
|
28
|
Petro LS, Paton AT, Muckli L. Contextual modulation of primary visual cortex by auditory signals. Philos Trans R Soc Lond B Biol Sci 2017; 372:rstb.2016.0104. [PMID: 28044015 PMCID: PMC5206272 DOI: 10.1098/rstb.2016.0104] [Citation(s) in RCA: 45] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/22/2016] [Indexed: 12/04/2022] Open
Abstract
Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195–201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256–1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue ‘Auditory and visual scene analysis’.
Affiliation(s)
- L S Petro
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK
- A T Paton
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK
- L Muckli
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK
|
29
|
Abstract
Vision in the fovea, the center of the visual field, is much more accurate and detailed than vision in the periphery. This is not in line with the rich phenomenology of peripheral vision. Here, we investigated a visual illusion that shows that detailed peripheral visual experience is partially based on a reconstruction of reality. Participants fixated on the center of a visual display in which central stimuli differed from peripheral stimuli. Over time, participants perceived that the peripheral stimuli changed to match the central stimuli, so that the display seemed uniform. We showed that a wide range of visual features, including shape, orientation, motion, luminance, pattern, and identity, are susceptible to this uniformity illusion. We argue that the uniformity illusion is the result of a reconstruction of sparse visual information (from the periphery) based on more readily available detailed visual information (from the fovea), which gives rise to a rich, but illusory, experience of peripheral vision.
|
30
|
Petro LS, Muckli L. The laminar integration of sensory inputs with feedback signals in human cortex. Brain Cogn 2016; 112:54-57. [PMID: 27814926 PMCID: PMC5312781 DOI: 10.1016/j.bandc.2016.06.007] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2015] [Revised: 06/23/2016] [Accepted: 06/24/2016] [Indexed: 11/25/2022]
Abstract
Understanding how the cortex integrates feedback and feedforward signals is central to understanding brain function. The data-driven framework of apical amplification, which is hypothesized to have a central role in cognition, is highlighted. Human neuroimaging data provide evidence for layer-specific cortical feedback relevant to theories of predictive feedback.
The cortex constitutes the largest area of the human brain. Yet we have only a basic understanding of how the cortex performs one vital function: the integration of sensory signals (carried by feedforward pathways) with internal representations (carried by feedback pathways). A multi-scale, multi-species approach is essential for understanding the site of integration, computational mechanism and functional role of this processing. To improve our knowledge, we must rely on brain imaging with improved spatial and temporal resolution, on paradigms which can measure internal processes in the human brain, and on the bridging of disciplines in order to characterize this processing at cellular and circuit levels. We highlight apical amplification as one potential mechanism for integrating feedforward and feedback inputs within pyramidal neurons in the rodent brain. We reflect on the challenges and progress in applying this model neuronal process to the study of human cognition. We conclude that cortical-layer-specific measures in humans will be an essential contribution to better understanding the landscape of information in cortical feedback, helping to bridge the explanatory gap.
Affiliation(s)
- Lucy S Petro
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, Scotland, United Kingdom.
- Lars Muckli
- Centre for Cognitive Neuroimaging, Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, Scotland, United Kingdom.
|
31
|
Erlikhman G, Caplovitz GP. Decoding information about dynamically occluded objects in visual cortex. Neuroimage 2016; 146:778-788. [PMID: 27663987 DOI: 10.1016/j.neuroimage.2016.09.024] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2016] [Revised: 08/19/2016] [Accepted: 09/11/2016] [Indexed: 11/28/2022] Open
Abstract
During dynamic occlusion, an object passes behind an occluding surface and then later reappears. Even when completely occluded from view, such objects are experienced as continuing to exist or persist behind the occluder even though they are no longer visible. The contents and neural basis of this persistent representation remain poorly understood. Questions remain as to whether information is maintained about the object itself (i.e., its shape or identity) or only non-object-specific information such as its position or velocity as it is tracked behind an occluder, as well as which areas of visual cortex represent such information. Recent studies have found that early visual cortex is activated by "invisible" objects during visual imagery and by unstimulated regions along the path of apparent motion, suggesting that some properties of dynamically occluded objects may also be neurally represented in early visual cortex. We applied functional magnetic resonance imaging in human subjects to examine representations within visual cortex during dynamic occlusion. For gradually occluded, but not for instantly disappearing, objects, there was an increase in activity in early visual cortex (V1, V2, and V3). This activity was spatially specific, corresponding to the occluded location in the visual field. However, the activity did not encode enough information about object identity to discriminate between different kinds of occluded objects (circles vs. stars) using multivoxel pattern analysis (MVPA). In contrast, object identity could be decoded in spatially specific subregions of higher-order, topographically organized areas such as ventral, lateral, and temporal occipital areas (VO, LO, and TO), as well as the functionally defined LOC and hMT+. These results suggest that early visual cortex may represent only the dynamically occluded object's position or motion path, while later visual areas represent object-specific information.
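The category-decoding step reported above (circles vs. stars) follows the general logic of MVPA: train a classifier on voxel activation patterns and test whether it can tell the categories apart on held-out trials. A minimal sketch with simulated voxel patterns and a simple correlation-based classifier (all sizes, noise levels, and splits are illustrative, not the study's actual analysis):

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 40  # hypothetical ROI size and trial count

# Simulated patterns: a fixed prototype pattern per category
# plus independent trial-by-trial noise.
circle_proto = rng.normal(size=n_voxels)
star_proto = rng.normal(size=n_voxels)
X = np.vstack([
    circle_proto + 0.8 * rng.normal(size=(n_trials, n_voxels)),
    star_proto + 0.8 * rng.normal(size=(n_trials, n_voxels)),
])
y = np.array([0] * n_trials + [1] * n_trials)  # 0 = circle, 1 = star

def correlation_classifier(X_train, y_train, X_test):
    """Correlate each test pattern with each class's mean training
    pattern and assign the class with the higher correlation."""
    protos = [X_train[y_train == c].mean(axis=0) for c in (0, 1)]
    preds = [int(np.argmax([np.corrcoef(x, p)[0, 1] for p in protos]))
             for x in X_test]
    return np.array(preds)

# Split halves: even-indexed trials train, odd-indexed trials test.
train, test = np.arange(0, 2 * n_trials, 2), np.arange(1, 2 * n_trials, 2)
acc = (correlation_classifier(X[train], y[train], X[test]) == y[test]).mean()
```

If `acc` sits reliably above the 50% chance level across cross-validation folds, the region's patterns carry category information; at-chance decoding in early visual cortex, as in the study, is the corresponding null result.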
|
32
|
Behrendt F, de Lussanet MHE, Zentgraf K, Zschorlich VR. Motor-Evoked Potentials in the Lower Back Are Modulated by Visual Perception of Lifted Weight. PLoS One 2016; 11:e0157811. [PMID: 27336751 PMCID: PMC4919087 DOI: 10.1371/journal.pone.0157811] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2016] [Accepted: 06/06/2016] [Indexed: 12/04/2022] Open
Abstract
Facilitation of the primary motor cortex (M1) during the mere observation of an action is highly congruent with the observed action itself. This congruency comprises several features of the executed action, such as somatotopy and temporal coding. Studies using reach-grasp-lift paradigms showed that the muscle-specific facilitation of the observer’s motor system reflects the degree of grip force exerted in an observed hand action. Judging the weight of a lifted object during action observation is an easy task, both for hand actions and for lifting boxes from the ground. Here we investigated whether the cortical representation in M1 of the lumbar back muscles is modulated by the observation of a whole-body lifting movement, as has been shown for hand actions. We used transcranial magnetic stimulation (TMS) to measure the corticospinal excitability of the m. erector spinae (ES) while subjects observed recorded sequences of a person lifting boxes of different weights from the floor. Consistent with the results for hand actions, the present study reveals a differential modulation of corticospinal excitability, despite the relatively small M1 representation of the back, also for lifting actions that mainly involve the lower back musculature.
Affiliation(s)
- Frank Behrendt
- University Children’s Hospital Basle, Basle, Switzerland
- Research Department, Reha Rheinfelden, Rheinfelden, Switzerland
- Karen Zentgraf
- Institute of Sport and Exercise Sciences, University of Münster, Münster, Germany
- Volker R. Zschorlich
- Institute of Sport Science, Department of Kinesiology, University of Rostock, Rostock, Germany
|
33
|
Shen L, Zhang M, Chen Q. The Poggendorff illusion driven by real and illusory contour: Behavioral and neural mechanisms. Neuropsychologia 2016; 85:24-34. [PMID: 26956926 DOI: 10.1016/j.neuropsychologia.2016.03.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2015] [Revised: 02/06/2016] [Accepted: 03/04/2016] [Indexed: 10/22/2022]
Abstract
The Poggendorff illusion refers to the phenomenon that the human brain misperceives a diagonal line as apparently misaligned once the line is interrupted by two parallel edges, and the size of the illusion is negatively correlated with the angle of interception of the oblique, i.e., the sharper the oblique angle, the larger the illusion. This optical illusion can be produced by both real and illusory contours. In this fMRI study, by parametrically varying the oblique angle, we investigated the shared and specific neural mechanisms underlying the Poggendorff illusion induced by real and illusory contours. At the behavioral level, not only the real but also the illusory contours were capable of inducing a significant Poggendorff illusion. The size of the illusion induced by the real contour, however, was larger than that induced by the illusory contour. At the neural level, real and illusory contours commonly activated more dorsal visual areas, and the real contours specifically activated more ventral visual areas. More importantly, examinations of the parametric modulation effects of the size of the illusion revealed the specific neural mechanisms underlying the Poggendorff illusion induced by the real and the illusory contours, respectively. The left precentral gyrus and right middle occipital cortex were specifically involved in the Poggendorff illusion induced by the real contour. On the other hand, the bilateral intraparietal sulcus (IPS) and right lateral occipital complex (LOC) were specifically involved in the Poggendorff illusion induced by the illusory contour. Functional implications of the above findings are discussed.
Affiliation(s)
- Lu Shen
- Center for Studies of Psychological Application and School of Psychology, South China Normal University, Guangzhou 510631, China
- Ming Zhang
- School of Education, Soochow University, Suzhou 215123, China
- Qi Chen
- Center for Studies of Psychological Application and School of Psychology, South China Normal University, Guangzhou 510631, China; Epilepsy Center, Guangdong 999 Brain Hospital, Guangzhou 510631, China; Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou 510631, China.
|
34
|
The brain's predictive prowess revealed in primary visual cortex. Proc Natl Acad Sci U S A 2016; 113:1124-5. [PMID: 26772315 DOI: 10.1073/pnas.1523834113] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
|