1
Nikbakht N. More Than the Sum of Its Parts: Visual-Tactile Integration in the Behaving Rat. Adv Exp Med Biol 2024; 1437:37-58. [PMID: 38270852] [DOI: 10.1007/978-981-99-7611-9_3]
Abstract
We experience the world by constantly integrating cues from multiple modalities to form unified sensory percepts. Once familiar with the multimodal properties of an object, we can recognize it regardless of the modality involved. In this chapter we examine the case of a visual-tactile orientation categorization experiment in rats and explore the involvement of the cerebral cortex in recognizing objects through multiple sensory modalities. In the orientation categorization task, rats learned to examine and judge the orientation of a raised, black-and-white grating using touch, vision, or both. Their multisensory performance was better than the predictions of linear models for cue combination, indicating synergy between the two sensory channels. Neural recordings made from a candidate associative cortical area, the posterior parietal cortex (PPC), reflected the principal neuronal correlates of the behavioral results: PPC neurons encoded both graded information about the object and categorical information about the animal's decision. Intriguingly, single neurons showed identical responses under each of the three modality conditions, providing a substrate for a cortical circuit involved in modality-invariant processing of objects.
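The linear cue-combination benchmark mentioned in this abstract can be illustrated with a short sketch. This is the standard equal-variance ideal-observer model with made-up accuracies, not the authors' analysis code; the `dprime` helper and the example numbers are assumptions:

```python
import math
from statistics import NormalDist

def dprime(p_correct):
    # Convert proportion correct in a two-alternative task to d'
    # (equal-variance Gaussian model).
    return 2 * NormalDist().inv_cdf(p_correct)

def predicted_multisensory_dprime(d_visual, d_tactile):
    # Optimal linear (Bayesian) combination of two independent cues:
    # sensitivities add in quadrature.
    return math.sqrt(d_visual ** 2 + d_tactile ** 2)

# Hypothetical single-modality accuracies
d_v = dprime(0.75)   # vision alone
d_t = dprime(0.80)   # touch alone
d_pred = predicted_multisensory_dprime(d_v, d_t)
```

Measured bimodal sensitivity above `d_pred` would be the signature of the supra-linear synergy reported above.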
Affiliation(s)
- Nader Nikbakht
- Massachusetts Institute of Technology, Cambridge, MA, USA.
2
Schnell AE, Leemans M, Vinken K, Op de Beeck H. A computationally informed comparison between the strategies of rodents and humans in visual object recognition. eLife 2023; 12:RP87719. [PMID: 38079481] [PMCID: PMC10712954] [DOI: 10.7554/elife.87719]
Abstract
Many species are able to recognize objects, but it has proven difficult to pinpoint and compare how different species solve this task. Recent research suggested combining computational and animal modelling in order to obtain a more systematic understanding of task complexity and compare strategies between species. In this study, we created a large multidimensional stimulus set and designed a visual discrimination task partially based upon modelling with a convolutional deep neural network (CNN). Experiments included rats (N = 11; 1115 daily sessions in total across all rats) and humans (N = 45). Each species was able to master the task and generalize to a variety of new images. Nevertheless, rats and humans showed very little convergence in terms of which object pairs were associated with high and low performance, suggesting the use of different strategies. There was an interaction between species and whether stimulus pairs favoured early or late processing in a CNN. A direct comparison with CNN representations and visual feature analyses revealed that rat performance was best captured by late convolutional layers and partially by visual features such as brightness and pixel-level similarity, while human performance related more to the later, fully connected layers. These findings highlight the additional value of using a computational approach for the design of object recognition tasks. Overall, this computationally informed investigation of object recognition behaviour reveals a strong discrepancy in strategies between rodent and human vision.
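The kind of layer-by-layer comparison described here can be sketched as follows. The data are purely synthetic, and the names (`layer_distances`, `behavioral_acc`) are illustrative stand-ins for the paper's actual CNN features and per-pair accuracies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-pair behavioral accuracy and, for each CNN layer,
# a feature-space distance between the two stimuli of each pair.
n_pairs, n_layers = 50, 8
behavioral_acc = rng.uniform(0.5, 1.0, n_pairs)
layer_distances = rng.normal(size=(n_layers, n_pairs))

def layer_behavior_correlation(distances, accuracy):
    # Pearson correlation between each layer's pairwise discriminability
    # and behavioral accuracy; the best-correlating layer indicates which
    # processing stage best captures the species' strategy.
    return np.array([np.corrcoef(d, accuracy)[0, 1] for d in distances])

corrs = layer_behavior_correlation(layer_distances, behavioral_acc)
best_layer = int(np.argmax(corrs))
```

Applied separately to rat and human accuracies, the index of `best_layer` (early convolutional vs. fully connected) is the kind of evidence behind the species comparison above.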
Affiliation(s)
- Maarten Leemans
- Department of Brain and Cognition & Leuven Brain Institute, Leuven, Belgium
- Kasper Vinken
- Department of Neurobiology, Harvard Medical School, Boston, United States
- Hans Op de Beeck
- Department of Brain and Cognition & Leuven Brain Institute, Leuven, Belgium
3
Brucklacher M, Bohté SM, Mejias JF, Pennartz CMA. Local minimization of prediction errors drives learning of invariant object representations in a generative network model of visual perception. Front Comput Neurosci 2023; 17:1207361. [PMID: 37818157] [PMCID: PMC10561268] [DOI: 10.3389/fncom.2023.1207361]
Abstract
The ventral visual processing hierarchy of the cortex needs to fulfill at least two key functions: perceived objects must be mapped to high-level representations invariantly of the precise viewing conditions, and a generative model must be learned that allows, for instance, filling in occluded information guided by visual experience. Here, we show how a multilayered predictive coding network can learn to recognize objects from the bottom up and to generate specific representations via a top-down pathway through a single learning rule: the local minimization of prediction errors. Trained on sequences of continuously transformed objects, neurons in the highest network area become tuned to object identity invariant of precise position, comparable to inferotemporal neurons in macaques. Drawing on this, the dynamic properties of invariant object representations reproduce experimentally observed hierarchies of timescales from low to high levels of the ventral processing stream. The predicted faster decorrelation of error-neuron activity compared to representation neurons is of relevance for the experimental search for neural correlates of prediction errors. Lastly, the generative capacity of the network is confirmed by reconstructing specific object images, robust to partial occlusion of the inputs. By learning invariance from temporal continuity within a generative model, the approach generalizes the predictive coding framework to dynamic inputs in a more biologically plausible way than self-supervised networks with non-local error backpropagation. This was achieved simply by shifting the training paradigm to dynamic inputs, with little change in architecture and learning rule from static input-reconstructing Hebbian predictive coding networks.
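The core idea, inference and learning driven only by a local prediction error, can be sketched with a toy two-area network. The sizes and learning rates are assumptions, and this is far simpler than the paper's multilayered model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: a representation vector r predicts the input x through
# generative weights W; inference and learning both use only the local
# prediction error e = x - W @ r.
n_in, n_rep = 20, 5
W = rng.normal(scale=0.1, size=(n_in, n_rep))
x = rng.normal(size=n_in)
r = np.zeros(n_rep)

lr_r, lr_w = 0.1, 0.01          # assumed learning rates
for _ in range(200):
    e = x - W @ r               # error neurons: local prediction error
    r += lr_r * (W.T @ e)       # inference: adjust representation
    W += lr_w * np.outer(e, r)  # learning: local, Hebbian-like update

initial_error = np.linalg.norm(x)  # error before inference (r = 0)
final_error = np.linalg.norm(x - W @ r)
```

Both updates are local in the sense emphasized by the abstract: each uses only quantities available at the connection itself, with no backpropagated global error.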
Affiliation(s)
- Matthias Brucklacher
- Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, Netherlands
- Sander M. Bohté
- Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, Netherlands
- Machine Learning Group, Centrum Wiskunde & Informatica, Amsterdam, Netherlands
- Jorge F. Mejias
- Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, Netherlands
- Cyriel M. A. Pennartz
- Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, Netherlands
4
Diamond ME, Toso A. Tactile cognition in rodents. Neurosci Biobehav Rev 2023; 149:105161. [PMID: 37028580] [DOI: 10.1016/j.neubiorev.2023.105161]
Abstract
Since the discovery 50 years ago of the precisely ordered representation of the whiskers in somatosensory cortex, the rodent tactile sensory system has been a fertile ground for the study of sensory processing. With the growing sophistication of touch-based behavioral paradigms, together with advances in neurophysiological methodology, a new approach is emerging. By posing increasingly complex perceptual and memory problems, in many cases analogous to human psychophysical tasks, investigators now explore the operations underlying rodent problem solving. We define the neural basis of tactile cognition as the transformation from a stage in which neuronal activity encodes elemental features, local in space and in time, to a stage in which neuronal activity is an explicit representation of the behavioral operations underlying the current task. Selecting a set of whisker-based behavioral tasks, we show that rodents achieve high-level performance through the workings of neuronal circuits that are accessible, decodable, and manipulatable. As a means toward exploring tactile cognition, this review presents leading psychophysical paradigms and, where known, their neural correlates.
Affiliation(s)
- Mathew E Diamond
- Cognitive Neuroscience, International School for Advanced Studies, Via Bonomea 265, 34136 Trieste, Italy.
- Alessandro Toso
- Cognitive Neuroscience, International School for Advanced Studies, Via Bonomea 265, 34136 Trieste, Italy
5
Cohen SJ, Cinalli DA, Ásgeirsdóttir HN, Hindman B, Barenholtz E, Stackman RW. Mice recognize 3D objects from recalled 2D pictures, support for picture-object equivalence. Sci Rep 2022; 12:4184. [PMID: 35264621] [PMCID: PMC8907285] [DOI: 10.1038/s41598-022-07782-4]
Abstract
Picture-object equivalence, or recognizing a three-dimensional (3D) object after viewing a two-dimensional (2D) photograph of that object, is a higher-order form of visual cognition that may be beyond the perceptual ability of rodents. The behavioral and neurobiological mechanisms supporting picture-object equivalence are not well understood. We used a modified visual recognition memory task, reminiscent of those used for primates, to test whether picture-object equivalence extends to mice. Mice explored photographs of an object during a sample session, and 24 h later were presented with the actual 3D object from the photograph and a novel 3D object, or the stimuli were once again presented in 2D form. Mice preferentially explored the novel stimulus, indicating recognition of the “familiar” stimulus, regardless of whether the sample photographs depicted radially symmetric or asymmetric, similar, rotated, or abstract objects. Discrimination did not appear to be guided by individual object features or low-level visual stimuli. Inhibition of CA1 neuronal activity in dorsal hippocampus impaired discrimination, reflecting impaired memory of the 2D sample object. Collectively, results from a series of experiments provide strong evidence that picture-object equivalence extends to mice and is hippocampus-dependent, offering important support for the appropriateness of mice for investigating mechanisms of human cognition.
Affiliation(s)
- Sarah J Cohen
- Center for Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, FL, 33431, USA
- Jupiter Life Science Initiative, Florida Atlantic University, John D. MacArthur Campus, Jupiter, FL, 33458, USA
- David A Cinalli
- Department of Psychology, Charles E. Schmidt College of Science, Florida Atlantic University, Boca Raton, FL, 33431, USA
- Herborg N Ásgeirsdóttir
- Jupiter Life Science Initiative, Florida Atlantic University, John D. MacArthur Campus, Jupiter, FL, 33458, USA
- FAU and Max Planck Florida Institute Joint Integrative Biology - Neuroscience Graduate Program, Florida Atlantic University, Jupiter, FL, 33458, USA
- Brandon Hindman
- Department of Psychology, Charles E. Schmidt College of Science, Florida Atlantic University, Boca Raton, FL, 33431, USA
- Elan Barenholtz
- Center for Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, FL, 33431, USA
- Department of Psychology, Charles E. Schmidt College of Science, Florida Atlantic University, Boca Raton, FL, 33431, USA
- Robert W Stackman
- Center for Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, FL, 33431, USA
- Jupiter Life Science Initiative, Florida Atlantic University, John D. MacArthur Campus, Jupiter, FL, 33458, USA
- Department of Psychology, Charles E. Schmidt College of Science, Florida Atlantic University, Boca Raton, FL, 33431, USA
- FAU and Max Planck Florida Institute Joint Integrative Biology - Neuroscience Graduate Program, Florida Atlantic University, Jupiter, FL, 33458, USA
6
Matteucci G, Zattera B, Bellacosa Marotti R, Zoccolan D. Rats spontaneously perceive global motion direction of drifting plaids. PLoS Comput Biol 2021; 17:e1009415. [PMID: 34520476] [PMCID: PMC8462730] [DOI: 10.1371/journal.pcbi.1009415]
Abstract
Computing global motion direction of extended visual objects is a hallmark of primate high-level vision. Although neurons selective for global motion have also been found in mouse visual cortex, it remains unknown whether rodents can combine multiple motion signals into global, integrated percepts. To address this question, we trained two groups of rats to discriminate either gratings (G group) or plaids (i.e., superpositions of gratings with different orientations; P group) drifting horizontally along opposite directions. After the animals learned the task, we applied a visual priming paradigm, where presentation of the target stimulus was preceded by the brief presentation of either a grating or a plaid. The extent to which rat responses to the targets were biased by such prime stimuli provided a measure of the spontaneous, perceived similarity between primes and targets. We found that gratings and plaids, when used as primes, were equally effective at biasing the perception of plaid direction for the rats of the P group. Conversely, for the G group, only the gratings acted as effective prime stimuli, while the plaids failed to alter the perception of grating direction. To interpret these observations, we simulated a decision neuron reading out the representations of gratings and plaids, as conveyed by populations of either component or pattern cells (i.e., local or global motion detectors). We concluded that the findings for the P group are highly consistent with the existence of a population of pattern cells, playing a functional role similar to that demonstrated in primates. We also explored different scenarios that could explain the failure of the plaid stimuli to elicit a sizable priming magnitude for the G group. These simulations yielded testable predictions about the properties of motion representations in rodent visual cortex at the single-cell and circuitry level, thus paving the way to future neurophysiology experiments.
Affiliation(s)
- Giulio Matteucci
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy
- Benedetta Zattera
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy
- Davide Zoccolan
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy
7
Vinken K, Op de Beeck H. Using deep neural networks to evaluate object vision tasks in rats. PLoS Comput Biol 2021; 17:e1008714. [PMID: 33651793] [PMCID: PMC7954349] [DOI: 10.1371/journal.pcbi.1008714]
Abstract
Over the last two decades, rodents have risen to prominence as a model for visual neuroscience. This is particularly true for earlier levels of information processing, but a number of studies have suggested that higher levels of processing, such as invariant object recognition, also occur in rodents. Here we provide a quantitative and comprehensive assessment of this claim by comparing a wide range of rodent behavioral and neural data with convolutional deep neural networks. These networks have been shown to capture hallmark properties of information processing in primates through a succession of convolutional and fully connected layers. We find that performance on rodent object vision tasks can be captured using low- to mid-level convolutional layers only, without any convincing evidence for the need of higher layers known to simulate complex object recognition in primates. Our approach also reveals surprising insights into earlier assumptions, for example, that the best-performing animals would be the ones using the most abstract representations, which we show is likely incorrect. Our findings suggest a road ahead for further studies aiming at quantifying and establishing the richness of representations underlying information processing in animal models at large.
Affiliation(s)
- Kasper Vinken
- Department of Ophthalmology, Children’s Hospital, Harvard Medical School, Boston, Massachusetts, United States of America
- Laboratory for Neuro- and Psychophysiology, KU Leuven, Leuven, Belgium
- Hans Op de Beeck
- Department of Brain and Cognition & Leuven Brain Institute, KU Leuven, Leuven, Belgium
8
Broschard MB, Kim J, Love BC, Freeman JH. Category learning in rodents using touchscreen-based tasks. Genes Brain Behav 2020; 20:e12665. [DOI: 10.1111/gbb.12665]
Affiliation(s)
- Matthew B. Broschard
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, Iowa, USA
- Jangjin Kim
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, Iowa, USA
- Bradley C. Love
- Department of Experimental Psychology and The Alan Turing Institute, University College London, London, UK
- John H. Freeman
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, Iowa, USA
9
Schnell AE, Van den Bergh G, Vermaercke B, Gijbels K, Bossens C, de Beeck HO. Face categorization and behavioral templates in rats. J Vis 2019; 19:9. [PMID: 31826254] [DOI: 10.1167/19.14.9]
Abstract
Rodents have become a popular model in vision science. It is still unclear how vision in rodents relates to primate vision when it comes to complex visual tasks. Here we report on the results of training rats in a face-categorization and generalization task. Additionally, the Bubbles paradigm is used to determine the behavioral templates of the animals. We found that rats are capable of face categorization and can generalize to previously unseen exemplars. Performance is affected by stimulus modifications such as upside-down and contrast-inverted stimuli, but remains above chance. The behavioral templates of the rats overlap with a pixel-based template, with a bias toward the upper left parts of the stimuli. Together, these findings significantly expand the evidence about the extent to which rats learn complex visual-categorization tasks.
Affiliation(s)
- Anna Elisabeth Schnell
- Laboratory of Biological Psychology, University of Leuven (KU Leuven), Leuven, Belgium
- Leuven Brain Institute, University of Leuven (KU Leuven), Leuven, Belgium
- Gert Van den Bergh
- Laboratory of Biological Psychology, University of Leuven (KU Leuven), Leuven, Belgium
- Leuven Brain Institute, University of Leuven (KU Leuven), Leuven, Belgium
- Ben Vermaercke
- Laboratory of Biological Psychology, University of Leuven (KU Leuven), Leuven, Belgium
- Leuven Brain Institute, University of Leuven (KU Leuven), Leuven, Belgium
- Kim Gijbels
- Laboratory of Biological Psychology, University of Leuven (KU Leuven), Leuven, Belgium
- Leuven Brain Institute, University of Leuven (KU Leuven), Leuven, Belgium
- Christophe Bossens
- Laboratory of Biological Psychology, University of Leuven (KU Leuven), Leuven, Belgium
- Leuven Brain Institute, University of Leuven (KU Leuven), Leuven, Belgium
- Hans Op de Beeck
- Laboratory of Biological Psychology, University of Leuven (KU Leuven), Leuven, Belgium
- Leuven Brain Institute, University of Leuven (KU Leuven), Leuven, Belgium
10
Vanzella W, Grion N, Bertolini D, Perissinotto A, Gigante M, Zoccolan D. A passive, camera-based head-tracking system for real-time, three-dimensional estimation of head position and orientation in rodents. J Neurophysiol 2019; 122:2220-2242. [PMID: 31553687] [DOI: 10.1152/jn.00301.2019]
Abstract
Tracking head position and orientation in small mammals is crucial for many applications in behavioral neurophysiology, from the study of spatial navigation to the investigation of active sensing and perceptual representations. Many approaches to head tracking exist, but most of them only estimate the 2D coordinates of the head over the plane where the animal navigates. Full reconstruction of the pose of the head in 3D is much more challenging and has been achieved only in a handful of studies, which employed headsets made of multiple LEDs or inertial units. However, these assemblies are rather bulky and need to be powered to operate, which prevents their application in wireless experiments and in the small enclosures often used in perceptual studies. Here we propose an alternative approach, based on passively imaging a lightweight, compact, 3D structure, painted with a pattern of black dots over a white background. By applying a cascade of feature extraction algorithms that progressively refine the detection of the dots and reconstruct their geometry, we developed a tracking method that is highly precise and accurate, as assessed through a battery of validation measurements. We show that this method can be used to study how a rat samples sensory stimuli during a perceptual discrimination task and how a hippocampal place cell represents head position over extremely small spatial scales. Given its minimal encumbrance and wireless nature, our method could be ideal for high-throughput applications, where tens of animals need to be simultaneously and continuously tracked.
NEW & NOTEWORTHY: Head tracking is crucial in many behavioral neurophysiology studies. Yet reconstruction of the head's pose in 3D is challenging and typically requires implanting bulky, electrically powered headsets that prevent wireless experiments and are hard to employ in operant boxes. Here we propose an alternative approach, based on passively imaging a compact, 3D dot pattern that, once implanted over the head of a rodent, allows estimating the pose of its head with high precision and accuracy.
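As a rough illustration of the geometry-reconstruction step: once the 3D coordinates of the detected dots are available, the head pose can be recovered by rigidly aligning the known dot layout to the observations. This Kabsch/Procrustes sketch is an assumption about one sub-step only, not the authors' full image-processing cascade:

```python
import numpy as np

def estimate_pose(model_pts, observed_pts):
    # Least-squares rigid pose (rotation R, translation t) aligning the
    # known dot geometry to its observed 3D coordinates, via the
    # Kabsch/Procrustes algorithm.
    mc, oc = model_pts.mean(axis=0), observed_pts.mean(axis=0)
    H = (model_pts - mc).T @ (observed_pts - oc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = oc - R @ mc
    return R, t

# Synthetic check: rotate and translate a dot pattern, then recover the pose.
rng = np.random.default_rng(0)
model = rng.normal(size=(6, 3))                  # hypothetical dot positions
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
observed = model @ R_true.T + t_true
R_est, t_est = estimate_pose(model, observed)
```

The reflection guard matters in practice: without it, noisy or degenerate dot configurations can yield an improper rotation (a mirror image) instead of a valid head pose.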
Affiliation(s)
- Walter Vanzella
- Visual Neuroscience Laboratory, International School for Advanced Studies (SISSA), Trieste, Italy
- Glance Vision Technologies, Trieste, Italy
- Natalia Grion
- Visual Neuroscience Laboratory, International School for Advanced Studies (SISSA), Trieste, Italy
- Daniele Bertolini
- Visual Neuroscience Laboratory, International School for Advanced Studies (SISSA), Trieste, Italy
- Andrea Perissinotto
- Visual Neuroscience Laboratory, International School for Advanced Studies (SISSA), Trieste, Italy
- Glance Vision Technologies, Trieste, Italy
- Marco Gigante
- Mechatronics Lab, International School for Advanced Studies (SISSA), Trieste, Italy
- Davide Zoccolan
- Visual Neuroscience Laboratory, International School for Advanced Studies (SISSA), Trieste, Italy
11
Dell KL, Arabzadeh E, Price NSC. Differences in perceptual masking between humans and rats. Brain Behav 2019; 9:e01368. [PMID: 31444998] [PMCID: PMC6749492] [DOI: 10.1002/brb3.1368]
Abstract
INTRODUCTION: The perception of a target stimulus can be impaired by a subsequent mask stimulus, even if they do not overlap temporally or spatially. This "backward masking" is commonly used to modulate a subject's awareness of a target and to characterize the temporal dynamics of vision. Masking is most apparent with brief, low-contrast targets, making detection difficult even in the absence of a mask. Although necessary to investigate the underlying neural mechanisms, evaluating masking phenomena in animal models is particularly challenging, as the task structure and critical stimulus features to be attended must be learned incrementally through rewards and feedback. Despite the increasing popularity of rodents in vision research, it is unclear if they are susceptible to masking illusions.
METHODS: We characterized how spatially surrounding masks affected the detection of sine-wave grating targets.
RESULTS: In humans (n = 5) and rats (n = 7), target detection improved with contrast and was reduced by the presence of a mask. After controlling for biases to respond induced by the presence of the mask, a clear reduction in detectability was caused by masks. This reduction was evident when data were averaged across all animals, but was only individually significant in three animals.
CONCLUSIONS: While perceptual masking occurs in rats, it may be difficult to observe consistently in individual animals because the complexity of the requisite task pushes the limits of their behavioral capabilities. We suggest methods to ensure that masking, and similarly subtle effects, can be reliably characterized in future experiments.
Affiliation(s)
- Katrina L Dell
- Neuroscience Program, Biomedicine Discovery Institute, Monash University, Clayton, Vic., Australia
- Department of Physiology, Monash University, Clayton, Vic., Australia
- Australian Research Council Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, Vic., Australia
- Department of Medicine, St. Vincent's Hospital, The University of Melbourne, Fitzroy, Vic., Australia
- Ehsan Arabzadeh
- John Curtin School of Medical Research, Eccles Institute of Neuroscience, The Australian National University, Canberra, ACT, Australia
- Australian Research Council Centre of Excellence for Integrative Brain Function, The Australian National University Node, Canberra, ACT, Australia
- Nicholas S C Price
- Neuroscience Program, Biomedicine Discovery Institute, Monash University, Clayton, Vic., Australia
- Department of Physiology, Monash University, Clayton, Vic., Australia
- Australian Research Council Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, Vic., Australia
12
Abstract
Observers perceive objects in the world as stable over space and time, even though the visual experience of those objects is often discontinuous and distorted due to masking, occlusion, camouflage, or noise. How are we able to easily and quickly achieve stable perception in spite of this constantly changing visual input? It was previously shown that observers experience serial dependence in the perception of features and objects, an effect that extends up to 15 seconds back in time. Here, we asked whether the visual system utilizes an object's prior physical location to inform future position assignments in order to maximize location stability of an object over time. To test this, we presented subjects with small targets at random angular locations relative to central fixation in the peripheral visual field. Subjects reported the perceived location of the target on each trial by adjusting a cursor's position to match its location. Subjects made consistent errors when reporting the perceived position of the target on the current trial, mislocalizing it toward the position of the target in the preceding two trials (Experiment 1). This pull in position perception occurred even when a response was not required on the previous trial (Experiment 2). In addition, we show that serial dependence in perceived position occurs immediately after stimulus presentation, and it is a fast stabilization mechanism that does not require a delay (Experiment 3). This indicates that serial dependence occurs for position representations and facilitates the stable perception of objects in space. Taken together with previous work, our results show that serial dependence occurs at many stages of visual processing, from initial position assignment to object categorization.
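The trial-history analysis behind such results can be sketched as follows. The simulated `pull` parameter and noise level are hypothetical; the regression simply recovers the attraction of the current report toward the preceding target:

```python
import numpy as np

rng = np.random.default_rng(2)

def wrap(deg):
    # Wrap angular differences into (-180, 180]
    return (deg + 180.0) % 360.0 - 180.0

# Simulate reports attracted toward the previous trial's target, then
# recover the attraction by regressing the response error on the wrapped
# previous-minus-current target difference.
n = 2000
targets = rng.uniform(0.0, 360.0, n)
pull = 0.1                                # hypothetical attraction strength
delta_prev = wrap(targets[:-1] - targets[1:])
errors = pull * delta_prev + rng.normal(scale=2.0, size=n - 1)
reports = targets[1:] + errors

# Positive slope = errors pulled toward the preceding target
slope = np.polyfit(delta_prev, wrap(reports - targets[1:]), 1)[0]
```

A positive slope is the signature of serial dependence; in real data the pull is typically strongest for small target differences, so a tuned (e.g. derivative-of-Gaussian) fit is often used instead of a straight line.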
13
Nonlinear Processing of Shape Information in Rat Lateral Extrastriate Cortex. J Neurosci 2019; 39:1649-1670. [PMID: 30617210] [DOI: 10.1523/jneurosci.1938-18.2018]
Abstract
In rodents, the progression of extrastriate areas located laterally to primary visual cortex (V1) has been assigned to a putative object-processing pathway (homologous to the primate ventral stream), based on anatomical considerations. Recently, we found functional support for such attribution (Tafazoli et al., 2017), by showing that this cortical progression is specialized for coding object identity despite view changes, the hallmark property of a ventral-like pathway. Here, we sought to clarify what computations are at the base of such specialization. To this aim, we performed multielectrode recordings from V1 and laterolateral area LL (at the apex of the putative ventral-like hierarchy) of male adult rats, during the presentation of drifting gratings and noise movies. We found that the extent to which neuronal responses were entrained to the phase of the gratings sharply dropped from V1 to LL, along with the quality of the receptive fields inferred through reverse correlation. Concomitantly, the tendency of neurons to respond to different oriented gratings increased, whereas the sharpness of orientation tuning declined. Critically, these trends are consistent with the nonlinear summation of visual inputs that is expected to take place along the ventral stream, according to the predictions of hierarchical models of ventral computations and a meta-analysis of the monkey literature. This suggests an intriguing homology between the mechanisms responsible for building up shape selectivity and transformation tolerance in the visual cortex of primates and rodents, reasserting the potential of the latter as models to investigate ventral stream functions at the circuitry level.SIGNIFICANCE STATEMENT Despite the growing popularity of rodents as models of visual functions, it remains unclear whether their visual cortex contains specialized modules for processing shape information. 
To address this question, we compared how neuronal tuning evolves from rat primary visual cortex (V1) to a downstream visual cortical region (area LL) that previous work has implicated in shape processing. In our experiments, LL neurons displayed a stronger tendency to respond to drifting gratings with different orientations while maintaining a sustained response across the whole duration of the drift cycle. These trends match the increased complexity of pattern selectivity and the augmented tolerance to stimulus translation found in monkey visual temporal cortex, thus revealing a homology between shape processing in rodents and primates.
14
Dell KL, Arabzadeh E, Price NSC. Human-like perceptual masking is difficult to observe in rats performing an orientation discrimination task. PLoS One 2018; 13:e0207179. [PMID: 30462681 PMCID: PMC6248968 DOI: 10.1371/journal.pone.0207179] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2017] [Accepted: 10/26/2018] [Indexed: 11/19/2022] Open
Abstract
Visual masking occurs when the perception of a brief target stimulus is affected by a preceding or succeeding mask. The uncoupling of the target and its perception allows an opportunity to investigate the neuronal mechanisms involved in sensory representation and visual perception. To determine whether rats are a suitable model for subsequent studies of the neuronal basis of visual masking, we first demonstrated that decoding of neuronal responses recorded in the primary visual cortex (V1) of anaesthetized rats predicted that orientation discrimination performance should decline when masking stimuli are presented immediately before or after oriented target stimuli. We then trained Long-Evans rats (n = 7) to discriminate between horizontal and vertical target Gabors or gratings. In some trials, a plaid mask was presented at varying stimulus onset asynchronies (SOAs) relative to the target. Spatially, the masks were presented either overlapping or surrounding the target location. In the absence of a mask, all animals could reliably discriminate orientation when stimulus durations were 16 ms or longer. In the presence of a mask, discrimination performance was impaired, but did not systematically vary with SOA as is typical of visual masking. In humans performing a similar task, we found visual masking impaired perception of the target at short SOAs regardless of the spatial or temporal configuration of stimuli. Our findings indicate that visual masking may be difficult to observe in rats as the stimulus parameters necessary to quantify masking will make the task so difficult that it prevents robust measurement of psychophysical performance. Thus, our results suggest that rats may not be an ideal model to investigate the effects of visual masking on perception.
Affiliation(s)
- Katrina Louise Dell
- Neuroscience Program, Biomedicine Discovery Institute, Monash University, Clayton, VIC, Australia
- Department of Physiology, Monash University, Clayton, VIC, Australia
- Australian Research Council Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, VIC, Australia
- Department of Medicine, The University of Melbourne, St. Vincent’s Hospital, Fitzroy, VIC, Australia
| | - Ehsan Arabzadeh
- Eccles Institute of Neuroscience, John Curtin School of Medical Research, The Australian National University, Canberra, ACT, Australia
- Australian Research Council Centre of Excellence for Integrative Brain Function, The Australian National University Node, Canberra, ACT, Australia
| | - Nicholas Seow Chiang Price
- Neuroscience Program, Biomedicine Discovery Institute, Monash University, Clayton, VIC, Australia
- Department of Physiology, Monash University, Clayton, VIC, Australia
- Australian Research Council Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, VIC, Australia
15
Newport C, Wallis G, Siebeck UE. Object recognition in fish: accurate discrimination across novel views of an unfamiliar object category (human faces). Anim Behav 2018. [DOI: 10.1016/j.anbehav.2018.09.002] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
16
Abstract
The visual cortex of mice is a useful model for investigating the mammalian visual system. In primates, higher visual areas are classified into two parts, the dorsal stream (“where” pathway) and ventral stream (“what” pathway). The ventral stream is known to include a part of the temporal cortex. In mice, however, some cortical areas adjacent to the primary visual area (V1) in the occipital cortex are thought to be comparable to the ventral stream in primates, although the whole picture of the mouse ventral stream has never been elucidated. We performed wide-field Ca2+ imaging in awake mice to investigate visual responses in the mouse temporal cortex, and found that the postrhinal cortex (POR), posterior to the auditory cortex (AC), and the ectorhinal and temporal association cortices (ECT), ventral to the AC, showed clear visual responses to moving visual objects. The retinotopic maps in the POR and ECT were not clearly observed, and the amplitudes of the visual responses in the POR and ECT were less sensitive to the size of the objects, compared to visual responses in the V1. In the ECT, objects of different sizes activated different subareas. These findings strongly suggest that the mouse ventral stream extends to the ECT ventral to the AC, and that it has characteristic response properties that are markedly different from the response properties in the V1.
17
Kaliukhovich DA, Op de Beeck H. Hierarchical stimulus processing in rodent primary and lateral visual cortex as assessed through neuronal selectivity and repetition suppression. J Neurophysiol 2018; 120:926-941. [PMID: 29742022 DOI: 10.1152/jn.00673.2017] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Similar to primates, visual cortex in rodents appears to be organized in two distinct hierarchical streams. However, little is known about how visual information is processed along those streams in rodents. In this study, we examined how repetition suppression and position and clutter tolerance of the neuronal representations evolve along the putative ventral visual stream in rats. To address this question, we recorded multiunit spiking activity in primary visual cortex (V1) and the more downstream visual laterointermediate (LI) area of head-restrained Long-Evans rats. We employed a paradigm reminiscent of the continuous carry-over design used in human neuroimaging. In both areas, stimulus repetition attenuated the early phase of the neuronal response to the repeated stimulus, with this response suppression being greater in area LI. Furthermore, stimulus preferences were more similar across positions (position tolerance) in area LI than in V1, even though the absolute responses in both areas were very sensitive to changes in position. In contrast, the neuronal representations in both areas were equally good at tolerating the presence of limited visual clutter, as modeled by the presentation of a single flank stimulus. When probing tolerance of the neuronal representations with stimulus-specific adaptation, we detected no position tolerance in either examined brain area, whereas we did reveal clutter tolerance in both areas. Overall, our data demonstrate similarities and discrepancies in processing of visual information along the ventral visual stream of rodents and primates. Moreover, our results stress caution in using neuronal adaptation to probe tolerance of the neuronal representations. NEW & NOTEWORTHY Rodents are emerging as a popular animal model that complements primates for studying higher level visual functions. 
Similar to findings in primates, we demonstrate a greater repetition suppression and position tolerance of the neuronal representations in the downstream laterointermediate area of Long-Evans rats compared with primary visual cortex. However, we report no difference in the degree of clutter tolerance between the areas. These findings provide additional evidence for hierarchical processing of visual stimuli in rodents.
Affiliation(s)
- Dzmitry A Kaliukhovich
- Laboratory of Biological Psychology, University of Leuven (KU Leuven) , Leuven , Belgium
| | - Hans Op de Beeck
- Laboratory of Biological Psychology, University of Leuven (KU Leuven) , Leuven , Belgium
18
Accuracy of Rats in Discriminating Visual Objects Is Explained by the Complexity of Their Perceptual Strategy. Curr Biol 2018; 28:1005-1015.e5. [PMID: 29551414 PMCID: PMC5887110 DOI: 10.1016/j.cub.2018.02.037] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2017] [Revised: 01/17/2018] [Accepted: 02/15/2018] [Indexed: 11/20/2022]
Abstract
Despite their growing popularity as models of visual functions, it remains unclear whether rodents are capable of deploying advanced shape-processing strategies when engaged in visual object recognition. In rats, for instance, pattern vision has been reported to range from mere detection of overall object luminance to view-invariant processing of discriminative shape features. Here we sought to clarify how refined object vision is in rodents, and how variable the complexity of their visual processing strategy is across individuals. To this aim, we measured how well rats could discriminate a reference object from 11 distractors, which spanned a spectrum of image-level similarity to the reference. We also presented the animals with random variations of the reference, and processed their responses to these stimuli to derive subject-specific models of rat perceptual choices. Our models successfully captured the highly variable discrimination performance observed across subjects and object conditions. In particular, they revealed that the animals that succeeded with the most challenging distractors were those that integrated the wider variety of discriminative features into their perceptual strategies. Critically, these strategies were largely preserved when the rats were required to discriminate outlined and scaled versions of the stimuli, thus showing that rat object vision can be characterized as a transformation-tolerant, feature-based filtering process. Overall, these findings indicate that rats are capable of advanced processing of shape information, and point to the rodents as powerful models for investigating the neuronal underpinnings of visual object recognition and other high-level visual functions. 
- The ability of rats to discriminate visual objects varies greatly across subjects
- Such variability is accounted for by the diversity of rat perceptual strategies
- Animals building richer perceptual templates achieve higher accuracy
- Perceptual strategies remain largely invariant across object transformations
19
Mitchnick KA, Wideman CE, Huff AE, Palmer D, McNaughton BL, Winters BD. Development of novel tasks for studying view-invariant object recognition in rodents: Sensitivity to scopolamine. Behav Brain Res 2018; 344:48-56. [PMID: 29412155 DOI: 10.1016/j.bbr.2018.01.030] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2017] [Revised: 01/19/2018] [Accepted: 01/22/2018] [Indexed: 11/16/2022]
Abstract
The capacity to recognize objects from different view-points or angles, referred to as view-invariance, is an essential process that humans engage in daily. Currently, the ability to investigate the neurobiological underpinnings of this phenomenon is limited, as few ethologically valid view-invariant object recognition tasks exist for rodents. Here, we report two complementary, novel view-invariant object recognition tasks in which rodents physically interact with three-dimensional objects. Prior to experimentation, rats and mice were given extensive experience with a set of 'pre-exposure' objects. In a variant of the spontaneous object recognition task, novelty preference for pre-exposed or new objects was assessed at various angles of rotation (45°, 90° or 180°); unlike control rodents, for whom the objects were novel, rats and mice tested with pre-exposed objects did not discriminate between rotated and un-rotated objects in the choice phase, indicating substantial view-invariant object recognition. Secondly, using automated operant touchscreen chambers, rats were tested on pre-exposed or novel objects in a pairwise discrimination task, where the rewarded stimulus (S+) was rotated (180°) once rats had reached acquisition criterion; rats tested with pre-exposed objects re-acquired the pairwise discrimination following S+ rotation more effectively than those tested with new objects. Systemic scopolamine impaired performance on both tasks, suggesting involvement of acetylcholine at muscarinic receptors in view-invariant object processing. These tasks present novel means of studying the behavioral and neural bases of view-invariant object recognition in rodents.
Affiliation(s)
- Krista A Mitchnick
- Department of Psychology, University of Guelph, Canada; Collaborative Neuroscience Program, University of Guelph, Canada.
| | - Cassidy E Wideman
- Department of Psychology, University of Guelph, Canada; Collaborative Neuroscience Program, University of Guelph, Canada
| | - Andrew E Huff
- Department of Psychology, University of Guelph, Canada
| | - Daniel Palmer
- Department of Psychology, University of Guelph, Canada; Collaborative Neuroscience Program, University of Guelph, Canada
| | - Bruce L McNaughton
- Department of Neuroscience, University of Lethbridge, Canada; Department of Neurobiology and Behavior, University of California Irvine, United States
| | - Boyer D Winters
- Department of Psychology, University of Guelph, Canada; Collaborative Neuroscience Program, University of Guelph, Canada
20
Nikbakht N, Tafreshiha A, Zoccolan D, Diamond ME. Supralinear and Supramodal Integration of Visual and Tactile Signals in Rats: Psychophysics and Neuronal Mechanisms. Neuron 2018; 97:626-639.e8. [PMID: 29395913 PMCID: PMC5814688 DOI: 10.1016/j.neuron.2018.01.003] [Citation(s) in RCA: 48] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2017] [Revised: 11/24/2017] [Accepted: 12/31/2017] [Indexed: 11/30/2022]
Abstract
To better understand how object recognition can be triggered independently of the sensory channel through which information is acquired, we devised a task in which rats judged the orientation of a raised, black and white grating. They learned to recognize two categories of orientation: 0° ± 45° (“horizontal”) and 90° ± 45° (“vertical”). Each trial required a visual (V), a tactile (T), or a visual-tactile (VT) discrimination; VT performance was better than that predicted by optimal linear combination of V and T signals, indicating synergy between sensory channels. We examined posterior parietal cortex (PPC) and uncovered key neuronal correlates of the behavioral findings: PPC carried both graded information about object orientation and categorical information about the rat’s upcoming choice; single neurons exhibited identical responses under the three modality conditions. Finally, a linear classifier of neuronal population firing replicated the behavioral findings. Taken together, these findings suggest that PPC is involved in the supramodal processing of shape.
- Rats combine vision and touch to distinguish two grating orientation categories
- Performance with vision and touch together reveals synergy between the two channels
- Posterior parietal cortex (PPC) neuronal responses are invariant to modality
- PPC neurons carry information about object orientation and the rat’s categorization
Affiliation(s)
- Nader Nikbakht
- Tactile Perception and Learning Lab, International School for Advanced Studies (SISSA), Via Bonomea 265, Trieste, TS 34136, Italy
| | - Azadeh Tafreshiha
- Tactile Perception and Learning Lab, International School for Advanced Studies (SISSA), Via Bonomea 265, Trieste, TS 34136, Italy
| | - Davide Zoccolan
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Via Bonomea 265, Trieste, TS 34136, Italy
| | - Mathew E Diamond
- Tactile Perception and Learning Lab, International School for Advanced Studies (SISSA), Via Bonomea 265, Trieste, TS 34136, Italy.
21
Kurylo DD, Yeturo S, Lanza J, Bukhari F. Lateral masking effects on contrast sensitivity in rats. Behav Brain Res 2017; 335:1-7. [PMID: 28789950 DOI: 10.1016/j.bbr.2017.07.046] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2017] [Revised: 07/11/2017] [Accepted: 07/29/2017] [Indexed: 10/19/2022]
Abstract
Changes in target visibility may be produced by additional stimulus elements at adjacent locations. Such contextual effects may reflect lateral interactions of stimulus representations in early cortical areas. It has been reported that the organization of orientation preference found in primate and cat visual cortex differs from that found in rodents, suggesting functional distinctions across species. In order to examine effects of lateral interactions at a perceptual level, contrast sensitivity in rats was measured for Gabor patches masked by two additional patches. Rats responded to target onset, and perceptual indices were based upon reaction time distributions across levels of luminance contrast. It was found that contrast sensitivity of targets without lateral masks corresponded to levels previously reported. For all measurements, the presence of sustained lateral masks systematically reduced sensitivity to targets, demonstrating interference by adjacent elements across levels of contrast. Effects of mask orientation or separation were not observed. These results may reflect reported non-systematic topography of orientation tuning across the cortex in rodents. Results suggest that intrinsic lateral connections in early processing areas play a minimal role in stimulus integration for rats.
Affiliation(s)
- Daniel D Kurylo
- Department of Psychology, Brooklyn College CUNY, Brooklyn, NY, 11210, United States.
| | - Sowmya Yeturo
- Department of Psychology, Brooklyn College CUNY, Brooklyn, NY, 11210, United States
| | - Joseph Lanza
- Department of Psychology, Brooklyn College CUNY, Brooklyn, NY, 11210, United States
| | - Farhan Bukhari
- Department of Computer Science, The Graduate Center CUNY, New York, NY, 10016, United States
22
The perceived stability of scenes: serial dependence in ensemble representations. Sci Rep 2017; 7:1971. [PMID: 28512359 PMCID: PMC5434007 DOI: 10.1038/s41598-017-02201-5] [Citation(s) in RCA: 66] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2016] [Accepted: 04/05/2017] [Indexed: 11/30/2022] Open
Abstract
We are continuously surrounded by a noisy and ever-changing environment. Instead of analyzing all the elements in a scene, our visual system has the ability to compress an enormous amount of visual information into ensemble representations, such as perceiving a forest instead of every single tree. Still, it is unclear why such complex scenes appear to be the same from moment to moment despite fluctuations, noise, and discontinuities in retinal images. The general effects of change blindness are usually thought to stabilize scene perception, making us unaware of minor inconsistencies between scenes. Here, we propose an alternative, that stable scene perception is actively achieved by the visual system through global serial dependencies: the appearance of scene gist is sequentially dependent on the gist perceived in previous moments. To test this hypothesis, we used summary statistical information as a proxy for “gist” level, global information in a scene. We found evidence for serial dependence in summary statistical representations. Furthermore, we show that this kind of serial dependence occurs at the ensemble level, where local elements are already merged into global representations. Taken together, our results provide a mechanism through which serial dependence can promote the apparent consistency of scenes over time.
23
Tafazoli S, Safaai H, De Franceschi G, Rosselli FB, Vanzella W, Riggi M, Buffolo F, Panzeri S, Zoccolan D. Emergence of transformation-tolerant representations of visual objects in rat lateral extrastriate cortex. eLife 2017; 6. [PMID: 28395730 PMCID: PMC5388540 DOI: 10.7554/elife.22794] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2016] [Accepted: 02/26/2017] [Indexed: 01/17/2023] Open
Abstract
Rodents are emerging as increasingly popular models of visual functions. Yet, evidence that rodent visual cortex is capable of advanced visual processing, such as object recognition, is limited. Here we investigate how neurons located along the progression of extrastriate areas that, in the rat brain, run laterally to primary visual cortex, encode object information. We found a progressive functional specialization of neural responses along these areas, with: (1) a sharp reduction of the amount of low-level, energy-related visual information encoded by neuronal firing; and (2) a substantial increase in the ability of both single neurons and neuronal populations to support discrimination of visual objects under identity-preserving transformations (e.g., position and size changes). These findings strongly argue for the existence of a rat object-processing pathway, and point to the rodents as promising models to dissect the neuronal circuitry underlying transformation-tolerant recognition of visual objects.
Every day, we see thousands of different objects with many different shapes, colors, sizes and textures. Even an individual object – for example, a face – can present us with a virtually infinite number of different images, depending on from where we view it. In spite of this extraordinary variability, our brain can recognize objects in a fraction of a second and without any apparent effort. Our closest relatives in the animal kingdom, the non-human primates, share our ability to effortlessly recognize objects. For many decades, they have served as invaluable models to investigate the circuits of neurons in the brain that underlie object recognition. In recent years, mice and rats have also emerged as useful models for studying some aspects of vision. However, it was not clear whether these rodents’ brains could also perform complex visual processes like recognizing objects. Tafazoli, Safaai et al. have now recorded the responses of visual neurons in rats to a set of objects, each presented across a range of positions, sizes, rotations and brightness levels. Applying computational and mathematical tools to these responses revealed that visual information progresses through a number of brain regions. The identity of the visual objects is gradually extracted as the information travels along this pathway, in a way that becomes more and more robust to changes in how the object appears. Overall, Tafazoli, Safaai et al. suggest that rodents share with primates some of the key computations that underlie the recognition of visual objects. Therefore, the powerful sets of experimental approaches that can be used to study rats and mice – for example, genetic and molecular tools – could now be used to study the circuits of neurons that enable object recognition. Gaining a better understanding of such circuits can, in turn, inspire the design of more powerful artificial vision systems and help to develop visual prosthetics. Achieving these goals will require further work to understand how different classes of neurons in different brain regions interact as rodents perform complex visual discrimination tasks.
Affiliation(s)
- Sina Tafazoli
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy
| | - Houman Safaai
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy; Laboratory of Neural Computation, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy; Department of Neurobiology, Harvard Medical School, Boston, United States
| | - Gioia De Franceschi
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy
| | | | - Walter Vanzella
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy
| | - Margherita Riggi
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy
| | - Federica Buffolo
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy
| | - Stefano Panzeri
- Laboratory of Neural Computation, Center for Neuroscience and Cognitive Systems @UniTn, Istituto Italiano di Tecnologia, Rovereto, Italy
| | - Davide Zoccolan
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy
24
Mice Can Use Second-Order, Contrast-Modulated Stimuli to Guide Visual Perception. J Neurosci 2016; 36:4457-69. [PMID: 27098690 DOI: 10.1523/jneurosci.4595-15.2016] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2015] [Accepted: 02/23/2016] [Indexed: 11/21/2022] Open
Abstract
Visual processing along the primate ventral stream takes place in a hierarchy of areas, characterized by an increase in both complexity of neuronal preferences and invariance to changes of low-level stimulus attributes. A basic type of invariance is form-cue invariance, where neurons have similar preferences in response to first-order stimuli, defined by changes in luminance, and global features of second-order stimuli, defined by changes in texture or contrast. Whether in mice, a now popular model system for early visual processing, visual perception can be guided by second-order stimuli is currently unknown. Here, we probed mouse visual perception and neural responses in areas V1 and LM using various types of second-order, contrast-modulated gratings with static noise carriers. These gratings differ in their spatial frequency composition and thus in their ability to invoke first-order mechanisms exploiting local luminance features. We show that mice can transfer learning of a coarse orientation discrimination task involving first-order, luminance-modulated gratings to the contrast-modulated gratings, albeit with markedly reduced discrimination performance. Consistent with these behavioral results, we demonstrate that neurons in area V1 and LM are less responsive and less selective to contrast-modulated than to luminance-modulated gratings, but respond with broadly similar preferred orientations. We conclude that mice can, at least in a rudimentary form, use second-order stimuli to guide visual perception. SIGNIFICANCE STATEMENT To extract object boundaries in natural scenes, the primate visual system does not only rely on differences in local luminance but can also take into account differences in texture or contrast. Whether the mouse, which has a much simpler visual system, can use such second-order information to guide visual perception is unknown. 
Here we tested mouse perception of second-order, contrast-defined stimuli and measured their neural representations in two areas of visual cortex. We find that mice can use contrast-defined stimuli to guide visual perception, although behavioral performance and neural representations were less robust than for luminance-defined stimuli. These findings shed light on basic steps of feature extraction along the mouse visual cortical hierarchy, which may ultimately lead to object recognition.
25
Vinken K, Van den Bergh G, Vermaercke B, Op de Beeck HP. Neural Representations of Natural and Scrambled Movies Progressively Change from Rat Striate to Temporal Cortex. Cereb Cortex 2016; 26:3310-22. [PMID: 27146315 PMCID: PMC4898680 DOI: 10.1093/cercor/bhw111] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023] Open
Abstract
In recent years, the rodent has come forward as a candidate model for investigating higher level visual abilities such as object vision. This view has been backed up substantially by evidence from behavioral studies that show rats can be trained to express visual object recognition and categorization capabilities. However, almost no studies have investigated the functional properties of rodent extrastriate visual cortex using stimuli that target object vision, leaving a gap compared with the primate literature. Therefore, we recorded single-neuron responses along a proposed ventral pathway in rat visual cortex to investigate hallmarks of primate neural object representations such as preference for intact versus scrambled stimuli and category-selectivity. We presented natural movies containing a rat or no rat as well as their phase-scrambled versions. Population analyses showed increased dissociation in representations of natural versus scrambled stimuli along the targeted stream, but without a clear preference for natural stimuli. Along the measured cortical hierarchy the neural response seemed to be driven increasingly by features that are not V1-like and destroyed by phase-scrambling. However, there was no evidence for category selectivity for the rat versus nonrat distinction. Together, these findings provide insights about differences and commonalities between rodent and primate visual cortex.
Affiliation(s)
- Kasper Vinken
- Laboratory of Biological Psychology, Laboratory for Neuro- and Psychophysiology, KU Leuven, 3000 Leuven, Belgium
26
Wood JN, Wood SMW. The development of newborn object recognition in fast and slow visual worlds. Proc Biol Sci 2016; 283:20160166. [PMID: 27097925 PMCID: PMC4855384 DOI: 10.1098/rspb.2016.0166] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2016] [Accepted: 03/29/2016] [Indexed: 11/12/2022] Open
Abstract
Object recognition is central to perception and cognition. Yet relatively little is known about the environmental factors that cause invariant object recognition to emerge in the newborn brain. Is this ability a hardwired property of vision? Or does the development of invariant object recognition require experience with a particular kind of visual environment? Here, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) require visual experience with slowly changing objects to develop invariant object recognition abilities. When newborn chicks were raised with a slowly rotating virtual object, the chicks built invariant object representations that generalized across novel viewpoints and rotation speeds. In contrast, when newborn chicks were raised with a virtual object that rotated more quickly, the chicks built viewpoint-specific object representations that failed to generalize to novel viewpoints and rotation speeds. Moreover, there was a direct relationship between the speed of the object and the amount of invariance in the chick's object representation. Thus, visual experience with slowly changing objects plays a critical role in the development of invariant object recognition. These results indicate that invariant object recognition is not a hardwired property of vision, but is learned rapidly when newborns encounter a slowly changing visual world.
Affiliation(s)
- Justin N Wood
- Department of Psychology, University of Southern California, Los Angeles, CA 90089, USA
- Samantha M W Wood
- Department of Psychology, University of Southern California, Los Angeles, CA 90089, USA
27
Reinagel P. Using rats for vision research. Neuroscience 2015; 296:75-9. [DOI: 10.1016/j.neuroscience.2014.12.025]
28
Vermaercke B, Van den Bergh G, Gerich F, Op de Beeck H. Neural discriminability in rat lateral extrastriate cortex and deep but not superficial primary visual cortex correlates with shape discriminability. Front Neural Circuits 2015; 9:24. [PMID: 26041999] [PMCID: PMC4438227] [DOI: 10.3389/fncir.2015.00024]
Abstract
Recent studies have revealed a surprising degree of functional specialization in rodent visual cortex. It is unknown to what degree this functional organization is related to the well-known hierarchical organization of the visual system in primates. We designed a study in rats that targets one of the hallmarks of the hierarchical object vision pathway in primates: selectivity for behaviorally relevant dimensions. We compared behavioral performance in a visual water maze with neural discriminability in five visual cortical areas. We tested behavioral discrimination in two independent batches of six rats using six pairs of shapes used previously to probe shape selectivity in monkey cortex (Lehky and Sereno, 2007). The relative difficulty (error rate) of shape pairs was strongly correlated between the two batches, indicating that some shape pairs were more difficult to discriminate than others. Then, we recorded in naive rats from five visual areas, from primary visual cortex (V1) over areas LM, LI, and LL, up to lateral occipito-temporal cortex (TO). Shape selectivity in the upper layers of V1, where the information enters cortex, correlated mostly with physical stimulus dissimilarity and not with behavioral performance. In contrast, neural discriminability in lower layers of all areas was strongly correlated with behavioral performance. These findings, in combination with the results from Vermaercke et al. (2014b), suggest that the functional specialization in rodent lateral visual cortex reflects a processing hierarchy resulting in the emergence of complex selectivity that is related to behaviorally relevant stimulus differences.
Affiliation(s)
- Ben Vermaercke
- Laboratory of Biological Psychology, Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Department for Molecular and Cellular Biology, Center for Brain Science, Harvard University, Cambridge, MA, USA
- Gert Van den Bergh
- Laboratory of Biological Psychology, Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Florian Gerich
- Laboratory of Biological Psychology, Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Hans Op de Beeck
- Laboratory of Biological Psychology, Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
29
Rosselli FB, Alemi A, Ansuini A, Zoccolan D. Object similarity affects the perceptual strategy underlying invariant visual object recognition in rats. Front Neural Circuits 2015; 9:10. [PMID: 25814936] [PMCID: PMC4357263] [DOI: 10.3389/fncir.2015.00010]
Abstract
In recent years, a number of studies have explored the possible use of rats as models of high-level visual functions. One central question at the root of such an investigation is to understand whether rat object vision relies on the processing of visual shape features or, rather, on lower-order image properties (e.g., overall brightness). In a recent study, we have shown that rats are capable of extracting multiple features of an object that are diagnostic of its identity, at least when those features are, structure-wise, distinct enough to be parsed by the rat visual system. In the present study, we have assessed the impact of object structure on rat perceptual strategy. We trained rats to discriminate between two structurally similar objects, and compared their recognition strategies with those reported in our previous study. We found that, under conditions of lower stimulus discriminability, rat visual discrimination strategy becomes more view-dependent and subject-dependent. Rats were still able to recognize the target objects, in a way that was largely tolerant (i.e., invariant) to object transformation; however, the larger structural and pixel-wise similarity affected the way objects were processed. Compared to the findings of our previous study, the patterns of diagnostic features were: (i) smaller and more scattered; (ii) only partially preserved across object views; and (iii) only partially reproducible across rats. On the other hand, rats were still found to adopt a multi-featural processing strategy and to make use of part of the optimal discriminatory information afforded by the two objects. Our findings suggest that, as in humans, rat invariant recognition can flexibly rely on either view-invariant representations of distinctive object features or view-specific object representations, acquired through learning.
Affiliation(s)
- Federica B Rosselli
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy
- Alireza Alemi
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy; Department of Applied Science and Technology, Center for Computational Sciences, Politecnico di Torino, Torino, Italy; Human Genetics Foundation, Torino, Italy
- Alessio Ansuini
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy
- Davide Zoccolan
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy
30
Wood SMW, Wood JN. A chicken model for studying the emergence of invariant object recognition. Front Neural Circuits 2015; 9:7. [PMID: 25767436] [PMCID: PMC4341568] [DOI: 10.3389/fncir.2015.00007]
Abstract
“Invariant object recognition” refers to the ability to recognize objects across variation in their appearance on the retina. This ability is central to visual perception, yet its developmental origins are poorly understood. Traditionally, nonhuman primates, rats, and pigeons have been the most commonly used animal models for studying invariant object recognition. Although these animals have many advantages as model systems, they are not well suited for studying the emergence of invariant object recognition in the newborn brain. Here, we argue that newly hatched chicks (Gallus gallus) are an ideal model system for studying the emergence of invariant object recognition. Using an automated controlled-rearing approach, we show that chicks can build a viewpoint-invariant representation of the first object they see in their life. This invariant representation can be built from highly impoverished visual input (three images of an object separated by 15° azimuth rotations) and cannot be accounted for by low-level retina-like or V1-like neuronal representations. These results indicate that newborn neural circuits begin building invariant object representations at the onset of vision and argue for an increased focus on chicks as an animal model for studying invariant object recognition.
Affiliation(s)
- Samantha M W Wood
- Department of Psychology, University of Southern California, Los Angeles, CA, USA
- Justin N Wood
- Department of Psychology, University of Southern California, Los Angeles, CA, USA
31
Zoccolan D. Invariant visual object recognition and shape processing in rats. Behav Brain Res 2015; 285:10-33. [PMID: 25561421] [PMCID: PMC4383365] [DOI: 10.1016/j.bbr.2014.12.053]
Abstract
Invariant visual object recognition is the ability to recognize visual objects despite the vastly different images that each object can project onto the retina during natural vision, depending on its position and size within the visual field, its orientation relative to the viewer, etc. Achieving invariant recognition represents such a formidable computational challenge that it is often assumed to be a unique hallmark of primate vision. Historically, this has limited the invasive investigation of its neuronal underpinnings to monkey studies, in spite of the narrow range of experimental approaches that these animal models allow. Meanwhile, rodents have been largely neglected as models of object vision, because of the widespread belief that they are incapable of advanced visual processing. However, the powerful array of experimental tools that have been developed to dissect neuronal circuits in rodents has made these species very attractive to vision scientists too, promoting a new tide of studies that have started to systematically explore visual functions in rats and mice. Rats, in particular, have been the subjects of several behavioral studies aimed at assessing how advanced object recognition and shape processing are in this species. Here, I review these recent investigations, as well as earlier studies of rat pattern vision, to provide a historical overview and a critical summary of the current state of knowledge about rat object vision. The picture emerging from this survey is very encouraging with regard to the possibility of using rats as complementary models to monkeys in the study of higher-level vision.
Affiliation(s)
- Davide Zoccolan
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), 34136 Trieste, Italy.
32
Kurylo DD, Chung C, Yeturo S, Lanza J, Gorskaya A, Bukhari F. Effects of contrast, spatial frequency, and stimulus duration on reaction time in rats. Vision Res 2014; 106:20-6. [PMID: 25451244] [DOI: 10.1016/j.visres.2014.10.031]
Abstract
Early visual processing in rats is mediated by several pre-cortical pathways as well as multiple retinal ganglion cell types that vary in response characteristics. Discrete processing is thereby optimized for select ranges of stimulus parameters. In order to explore variation in response characteristics at a perceptual level, visual detection in rats was measured across a range of contrasts, spatial frequencies, and durations. Rats responded to the onset of Gabor patches. Onset time occurred after a random delay, and reaction time (RT) frequency distribution served to index target visibility. It was found that lower spatial frequency produced shorter RTs, as well as increased RT equivalent of contrast gain. Brief stimulus presentation reduced target visibility, slowed RTs, and reduced contrast gain at higher spatial frequencies. However, brief stimuli shortened RTs at low contrasts and low spatial frequencies, suggesting transient stimuli are more efficiently processed under these conditions. Collectively, perceptual characteristics appear to reflect distinctions in neural responses at early stages of processing. The RT characteristics found here may thereby reflect the contribution of multiple channels, and suggest a progressive shift in relative involvement across parameter levels.
Affiliation(s)
- Daniel D Kurylo
- Department of Psychology, Brooklyn College CUNY, Brooklyn, NY 11210, United States.
- Caroline Chung
- Department of Psychology, Brooklyn College CUNY, Brooklyn, NY 11210, United States
- Sowmya Yeturo
- Department of Psychology, Brooklyn College CUNY, Brooklyn, NY 11210, United States
- Joseph Lanza
- Department of Psychology, Brooklyn College CUNY, Brooklyn, NY 11210, United States
- Arina Gorskaya
- Department of Psychology, Brooklyn College CUNY, Brooklyn, NY 11210, United States
- Farhan Bukhari
- Department of Computer Science, The Graduate Center CUNY, New York, NY 10016, United States
33
Abstract
Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats.
34
Vermaercke B, Gerich FJ, Ytebrouck E, Arckens L, Op de Beeck HP, Van den Bergh G. Functional specialization in rat occipital and temporal visual cortex. J Neurophysiol 2014; 112:1963-83. [PMID: 24990566] [DOI: 10.1152/jn.00737.2013]
Abstract
Recent studies have revealed a surprising degree of functional specialization in rodent visual cortex. Anatomically, suggestions have been made about the existence of hierarchical pathways with similarities to the ventral and dorsal pathways in primates. Here we aimed to characterize some important functional properties in part of the supposed "ventral" pathway in rats. We investigated the functional properties along a progression of five visual areas in awake rats, from primary visual cortex (V1) over lateromedial (LM), latero-intermediate (LI), and laterolateral (LL) areas up to the newly found lateral occipito-temporal cortex (TO). Response latency increased >20 ms from areas V1/LM/LI to areas LL and TO. Orientation and direction selectivity for the used grating patterns increased gradually from V1 to TO. Overall responsiveness and selectivity to shape stimuli decreased from V1 to TO and was increasingly dependent upon shape motion. Neural similarity for shapes could be accounted for by a simple computational model in V1, but not in the other areas. Across areas, we find a gradual change in which stimulus pairs are most discriminable. Finally, tolerance to position changes increased toward TO. These findings provide unique information about possible commonalities and differences between rodents and primates in hierarchical cortical processing.
Affiliation(s)
- Ben Vermaercke
- Laboratory of Biological Psychology, KU Leuven, Leuven, Belgium
- Florian J Gerich
- Laboratory of Biological Psychology, KU Leuven, Leuven, Belgium
- Ellen Ytebrouck
- Laboratory of Neuroplasticity and Neuroproteomics, KU Leuven, Leuven, Belgium
- Lutgarde Arckens
- Laboratory of Neuroplasticity and Neuroproteomics, KU Leuven, Leuven, Belgium
35
Cox DD. Do we understand high-level vision? Curr Opin Neurobiol 2014; 25:187-93. [PMID: 24552691] [DOI: 10.1016/j.conb.2014.01.016]
Abstract
'High-level' vision lacks a single, agreed-upon definition, but it might usefully be defined as those stages of visual processing that transition from analyzing local image structure to analyzing the structure of the external world that produced those images. Much work in the last several decades has focused on object recognition as a framing problem for the study of high-level visual cortex, and much progress has been made in this direction. This approach presumes that the operational goal of the visual system is to read out the identity of an object (or objects) in a scene, in spite of variation in the position, size, lighting and the presence of other nearby objects. However, while object recognition as an operational framing of high-level vision is intuitively appealing, it is by no means the only task that visual cortex might do, and the study of object recognition is beset by challenges in building stimulus sets that adequately sample the infinite space of possible stimuli. Here I review the successes and limitations of this work, and ask whether we should reframe our approaches to understanding high-level vision.
Affiliation(s)
- David Daniel Cox
- Department of Molecular and Cellular Biology, Center for Brain Science, School of Engineering and Applied Sciences, Harvard University 52 Oxford St., Room 219.40, Cambridge, MA 02138, United States.
36
Abstract
Primates can store sensory stimulus parameters in working memory for subsequent manipulation, but until now, there has been no demonstration of this capacity in rodents. Here we report tactile working memory in rats. Each stimulus is a vibration, generated as a series of velocity values sampled from a normal distribution. To perform the task, the rat positions its whiskers to receive two such stimuli, "base" and "comparison," separated by a variable delay. It then judges which stimulus had greater velocity SD. In analogous experiments, humans compare two vibratory stimuli on the fingertip. We demonstrate that the ability of rats to hold base stimulus information (for up to 8 s) and their acuity in assessing stimulus differences overlap the performance demonstrated by humans. This experiment highlights the ability of rats to perceive the statistical structure of vibrations and reveals their previously unknown capacity to store sensory information in working memory.
37
Reinagel P. Speed and accuracy of visual image discrimination by rats. Front Neural Circuits 2013; 7:200. [PMID: 24385954] [PMCID: PMC3866522] [DOI: 10.3389/fncir.2013.00200]
Abstract
The trade-off between speed and accuracy of sensory discrimination has most often been studied using sensory stimuli that evolve over time, such as random dot motion discrimination tasks. We previously reported that when rats perform motion discrimination, correct trials have longer reaction times than errors, accuracy increases with reaction time, and reaction time increases with stimulus ambiguity. In such experiments, new sensory information is continually presented, which could partly explain interactions between reaction time and accuracy. The present study shows that a changing physical stimulus is not essential to those findings. Freely behaving rats were trained to discriminate between two static visual images in a self-paced, two-alternative forced-choice reaction time task. Each trial was initiated by the rat, and the two images were presented simultaneously and persisted until the rat responded, with no time limit. Reaction times were longer in correct trials than in error trials, and accuracy increased with reaction time, comparable to results previously reported for rats performing motion discrimination. In the motion task, coherence has been used to vary discrimination difficulty. Here morphs between the previously learned images were used to parametrically vary the image similarity. In randomly interleaved trials, rats took more time on average to respond in trials in which they had to discriminate more similar stimuli. For both the motion and image tasks, the dependence of reaction time on ambiguity is weak, as if rats prioritized speed over accuracy. Therefore we asked whether rats can change the priority of speed and accuracy adaptively in response to a change in reward contingencies. For two rats, the penalty delay was increased from 2 to 6 s. When the penalty was longer, reaction times increased, and accuracy improved. This demonstrates that rats can flexibly adjust their behavioral strategy in response to the cost of errors.
Affiliation(s)
- Pamela Reinagel
- Section of Neurobiology, Division of Biological Sciences, University of California at San Diego, La Jolla, CA, USA
38
Baldassi C, Alemi-Neissi A, Pagan M, DiCarlo JJ, Zecchina R, Zoccolan D. Shape similarity, better than semantic membership, accounts for the structure of visual object representations in a population of monkey inferotemporal neurons. PLoS Comput Biol 2013; 9:e1003167. [PMID: 23950700] [PMCID: PMC3738466] [DOI: 10.1371/journal.pcbi.1003167]
Abstract
The anterior inferotemporal cortex (IT) is the highest stage along the hierarchy of visual areas that, in primates, processes visual objects. Although several lines of evidence suggest that IT primarily represents visual shape information, some recent studies have argued that neuronal ensembles in IT code the semantic membership of visual objects (i.e., represent conceptual classes such as animate and inanimate objects). In this study, we investigated to what extent semantic, rather than purely visual, information is represented in IT by performing a multivariate analysis of IT responses to a set of visual objects. By relying on a variety of machine-learning approaches (including a cutting-edge clustering algorithm that has been recently developed in the domain of statistical physics), we found that, in most instances, IT representation of visual objects is accounted for by their similarity at the level of shape or, more surprisingly, low-level visual properties. Only in a few cases did we observe IT representations of semantic classes that were not explainable by the visual similarity of their members. Overall, these findings reassert the primary function of IT as a conveyor of explicit visual shape information, and reveal that low-level visual properties are represented in IT to a greater extent than previously appreciated. In addition, our work demonstrates how combining a variety of state-of-the-art multivariate approaches, and carefully estimating the contribution of shape similarity to the representation of object categories, can substantially advance our understanding of neuronal coding of visual objects in cortex.
Affiliation(s)
- Carlo Baldassi
- Department of Applied Science and Technology & Center for Computational Sciences, Politecnico di Torino, Torino, Italy
- Human Genetics Foundation (HuGeF), Torino, Torino, Italy
- Alireza Alemi-Neissi
- Human Genetics Foundation (HuGeF), Torino, Torino, Italy
- International School for Advanced Studies (SISSA), Trieste, Italy
- Marino Pagan
- Department of Brain and Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- James J. DiCarlo
- Department of Brain and Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America
- Riccardo Zecchina
- Department of Applied Science and Technology & Center for Computational Sciences, Politecnico di Torino, Torino, Italy
- Human Genetics Foundation (HuGeF), Torino, Torino, Italy
- Davide Zoccolan
- International School for Advanced Studies (SISSA), Trieste, Italy
- Department of Brain and Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, United States of America
39
Carandini M, Churchland AK. Probing perceptual decisions in rodents. Nat Neurosci 2013; 16:824-31. [PMID: 23799475] [PMCID: PMC4105200] [DOI: 10.1038/nn.3410]
Abstract
The study of perceptual decision-making offers insight into how the brain uses complex, sometimes ambiguous information to guide actions. Understanding the underlying processes and their neural bases requires that one pair recordings and manipulations of neural activity with rigorous psychophysics. Though this research has been traditionally performed in primates, it seems increasingly promising to pursue it at least partly in mice and rats. However, rigorous psychophysical methods are not yet as developed for these rodents as they are for primates. Here we give a brief overview of the sensory capabilities of rodents and of their cortical areas devoted to sensation and decision. We then review methods of psychophysics, focusing on the technical issues that arise in their implementation in rodents. These methods represent a rich set of challenges and opportunities.
Affiliation(s)
- Matteo Carandini
- UCL Institute of Ophthalmology, University College London, London, UK
40
Abstract
The ability to recognize objects despite substantial variation in their appearance (e.g., because of position or size changes) represents such a formidable computational feat that it is widely assumed to be unique to primates. Such an assumption has restricted the investigation of its neuronal underpinnings to primate studies, which allow only a limited range of experimental approaches. In recent years, the increasingly powerful array of optical and molecular tools that has become available in rodents has spurred a renewed interest in rodent models of visual functions. However, evidence of primate-like visual object processing in rodents is still very limited and controversial. Here we show that rats are capable of an advanced recognition strategy, which relies on extracting the most informative object features across the variety of viewing conditions the animals may face. Rat visual strategy was uncovered by applying an image masking method that revealed the features used by the animals to discriminate two objects across a range of sizes, positions, in-depth, and in-plane rotations. Noticeably, rat recognition relied on a combination of multiple features that were mostly preserved across the transformations the objects underwent, and largely overlapped with the features that a simulated ideal observer deemed optimal to accomplish the discrimination task. These results indicate that rats are able to process and efficiently use shape information, in a way that is largely tolerant to variation in object appearance. This suggests that their visual system may serve as a powerful model to study the neuronal substrates of object recognition.
41
Evidence that primary visual cortex is required for image, orientation, and motion discrimination by rats. PLoS One 2013; 8:e56543. [PMID: 23441202] [PMCID: PMC3575509] [DOI: 10.1371/journal.pone.0056543]
Abstract
The pigmented Long-Evans rat has proven to be an excellent subject for studying visually guided behavior including quantitative visual psychophysics. This observation, together with its experimental accessibility and its close homology to the mouse, has made it an attractive model system in which to dissect the thalamic and cortical circuits underlying visual perception. Given that visually guided behavior in the absence of primary visual cortex has been described in the literature, however, it is an empirical question whether specific visual behaviors will depend on primary visual cortex in the rat. Here we tested the effects of cortical lesions on performance of two-alternative forced-choice visual discriminations by Long-Evans rats. We present data from one highly informative subject that learned several visual tasks and then received a bilateral lesion ablating >90% of primary visual cortex. After the lesion, this subject had a profound and persistent deficit in complex image discrimination, orientation discrimination, and full-field optic flow motion discrimination, compared with both pre-lesion performance and sham-lesion controls. Performance was intact, however, on another visual two-alternative forced-choice task that required approaching a salient visual target. A second highly informative subject learned several visual tasks prior to receiving a lesion ablating >90% of medial extrastriate cortex. This subject showed no impairment on any of the four task categories. Taken together, our data provide evidence that these image, orientation, and motion discrimination tasks require primary visual cortex in the Long-Evans rat, whereas approaching a salient visual target does not.
42
Diamond ME, Arabzadeh E. Whisker sensory system - from receptor to decision. Prog Neurobiol 2012; 103:28-40. [PMID: 22683381] [DOI: 10.1016/j.pneurobio.2012.05.013]
Abstract
One of the great challenges of systems neuroscience is to understand how the neocortex transforms neuronal representations of the physical characteristics of sensory stimuli into the percepts which can guide the animal's decisions. Here we present progress made in understanding behavioral and neurophysiological aspects of a highly efficient sensory apparatus, the rat whisker system. Beginning with the 1970s discovery of "barrels" in the rat and mouse brain, one line of research has focused on unraveling the circuits that transmit information from the whiskers to the sensory cortex, together with the cellular mechanisms that underlie sensory responses. A second, more recent line of research has focused on tactile psychophysics, that is, quantification of the behavioral capacities supported by whisker sensation. The opportunity to join these two lines of investigation makes whisker-mediated sensation an exciting platform for the study of the neuronal bases of perception and decision-making. Even more appealing is the beginning-to-end perspective offered by this system: the inquiry can start at the level of the sensory receptor and conclude with the animal's choice. We argue that rats can switch between two modes of operation of the whisker sensory system: (1) generative mode and (2) receptive mode. In the generative mode, the rat moves its whiskers forward and backward to actively seek contact with objects and to palpate the object after initial contact. In the receptive mode, the rat immobilizes its whiskers to optimize the collection of signals from an object that is moving by its own power. We describe behavioral tasks that rats perform in these different modes. Next, we explore which neuronal codes in sensory cortex account for the rats' discrimination capacities. Finally, we present hypotheses for mechanisms through which "downstream" brain regions may read out the activity of sensory cortex in order to extract the significance of sensory stimuli and, ultimately, to select the appropriate action.
Affiliation(s)
- Mathew E Diamond
- Cognitive Neuroscience Sector, International School for Advanced Studies, Trieste, Italy.
43
Abstract
Mounting evidence suggests that 'core object recognition,' the ability to rapidly recognize objects despite substantial appearance variation, is solved in the brain via a cascade of reflexive, largely feedforward computations that culminate in a powerful neuronal representation in the inferior temporal cortex. However, the algorithm that produces this solution remains poorly understood. Here we review evidence ranging from individual neurons and neuronal populations to behavior and computational models. We propose that understanding this algorithm will require using neuronal and psychophysical data to sift through many computational models, each based on building blocks of small, canonical subnetworks with a common functional goal.
Affiliation(s)
- James J DiCarlo
- Department of Brain and Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.