1
Kim NI, Chen J, Wang W, Kim JY, Kwon MK, Moradnia M, Pouladi S, Ryou JH. Skin-Attached Arrayed Piezoelectric Sensors for Continuous and Safe Monitoring of Oculomotor Movements. Adv Healthc Mater 2024; 13:e2303581. [PMID: 38386698] [DOI: 10.1002/adhm.202303581]
Abstract
Abnormal oculomotor movements are linked to various brain disorders, physical/mental shocks to the brain, and other neurological conditions; hence, their monitoring can be developed into a simple but effective diagnostic tool. To overcome the limitations of current eye-tracking systems and electrooculography, a piezoelectric arrayed sensor system is developed using single-crystalline III-N thin-film transducers, which offer mechanical flexibility, biocompatibility, and high electromechanical conversion, for continuous monitoring of oculomotor movements by skin-attachable, safe, and highly sensitive sensors. The flexible piezoelectric eye movement sensor array (F-PEMSA), consisting of three transducers, is attached to the temple area of the face, where it can be worn comfortably and can detect the muscle activity associated with eye motions. The upper, mid, and lower sensors (transducers) on different temple areas generate discernible output-voltage patterns, with different combinations of positive/negative signs and relative magnitudes, for the various eyeball movements, including eight directional (lateral, vertical, and diagonal) and two rotational movements, which enables various types of saccade and pursuit tests. The F-PEMSA can be used in clinical studies of the brain-eye relationship to evaluate the functional integrity of multiple brain systems and cognitive processes.
Affiliation(s)
- Nam-In Kim
- Department of Mechanical Engineering, University of Houston, Houston, TX, 77204-2004, USA
- Materials Science and Engineering Program, University of Houston, Houston, TX, 77204, USA
- Advanced Manufacturing Institute (AMI), University of Houston, Houston, TX, 77204, USA
- Texas Center for Superconductivity at UH (TcSUH), University of Houston, Houston, TX, 77204, USA
- Jie Chen
- Department of Mechanical Engineering, University of Houston, Houston, TX, 77204-2004, USA
- Materials Science and Engineering Program, University of Houston, Houston, TX, 77204, USA
- Advanced Manufacturing Institute (AMI), University of Houston, Houston, TX, 77204, USA
- Texas Center for Superconductivity at UH (TcSUH), University of Houston, Houston, TX, 77204, USA
- Weijie Wang
- Department of Mechanical Engineering, University of Houston, Houston, TX, 77204-2004, USA
- Advanced Manufacturing Institute (AMI), University of Houston, Houston, TX, 77204, USA
- Texas Center for Superconductivity at UH (TcSUH), University of Houston, Houston, TX, 77204, USA
- Ja-Yeon Kim
- Korea Photonics Technology Institute (KOPTI), Gwangju, 61007, Republic of Korea
- Min-Ki Kwon
- Department of Photonic Engineering, Chosun University, Gwangju, 61452, Republic of Korea
- Mina Moradnia
- Department of Mechanical Engineering, University of Houston, Houston, TX, 77204-2004, USA
- Advanced Manufacturing Institute (AMI), University of Houston, Houston, TX, 77204, USA
- Texas Center for Superconductivity at UH (TcSUH), University of Houston, Houston, TX, 77204, USA
- Sara Pouladi
- Department of Mechanical Engineering, University of Houston, Houston, TX, 77204-2004, USA
- Advanced Manufacturing Institute (AMI), University of Houston, Houston, TX, 77204, USA
- Texas Center for Superconductivity at UH (TcSUH), University of Houston, Houston, TX, 77204, USA
- Jae-Hyun Ryou
- Department of Mechanical Engineering, University of Houston, Houston, TX, 77204-2004, USA
- Materials Science and Engineering Program, University of Houston, Houston, TX, 77204, USA
- Advanced Manufacturing Institute (AMI), University of Houston, Houston, TX, 77204, USA
- Texas Center for Superconductivity at UH (TcSUH), University of Houston, Houston, TX, 77204, USA
- Department of Electrical & Computer Engineering, University of Houston, Houston, TX, 77204, USA
2
Pelgrim MH, Espinosa J, Buchsbaum D. Head-mounted mobile eye-tracking in the domestic dog: A new method. Behav Res Methods 2023; 55:1924-1941. [PMID: 35788974] [PMCID: PMC9255465] [DOI: 10.3758/s13428-022-01907-3]
Abstract
Humans rely on dogs for countless tasks, ranging from companionship to highly specialized detection work. In their daily lives, dogs must navigate a human-built visual world, yet comparatively little is known about what dogs visually attend to as they move through their environment. Real-world eye-tracking, or head-mounted eye-tracking, allows participants to move freely through their environment, providing more naturalistic results about visual attention while interacting with objects and agents. In dogs, real-world eye-tracking has the potential to inform our understanding of cross-species cognitive abilities as well as working dog training; however, a robust and easily deployed head-mounted eye-tracking method for dogs has not previously been developed and tested. We present a novel method for real-world eye-tracking in dogs, using a simple mobile apparatus mounted onto goggles designed for dogs. This new method, adapted from systems that are widely used in humans, allows for eye-tracking during more naturalistic behaviors, namely walking around and interacting with real-world stimuli, and requires less training time than traditional stationary eye-tracking methods. We found that while completing a simple forced-choice treat-finding task, dogs look primarily to the treat, and we demonstrated the accuracy of this method using alternative gaze-tracking methods. Additionally, eye-tracking revealed more fine-grained time course information and individual differences in looking patterns.
Affiliation(s)
- Madeline H Pelgrim
- Department of Cognitive, Linguistic, & Psychological Sciences, Brown University, 190 Thayer St, Providence, RI, 02912, USA
- Department of Psychology, University of Toronto, 100 St. George St, Toronto, ON, M5S 3G3, Canada
- Julia Espinosa
- Department of Psychology, University of Toronto, 100 St. George St, Toronto, ON, M5S 3G3, Canada
- Department of Human Evolutionary Biology, Harvard University, 11 Divinity Ave, Cambridge, MA, 02138, USA
- Daphna Buchsbaum
- Department of Cognitive, Linguistic, & Psychological Sciences, Brown University, 190 Thayer St, Providence, RI, 02912, USA
- Department of Psychology, University of Toronto, 100 St. George St, Toronto, ON, M5S 3G3, Canada
3
Harris SC, Dunn FA. Asymmetric retinal direction tuning predicts optokinetic eye movements across stimulus conditions. eLife 2023; 12:e81780. [PMID: 36930180] [PMCID: PMC10023158] [DOI: 10.7554/elife.81780]
Abstract
Across species, the optokinetic reflex (OKR) stabilizes vision during self-motion. OKR occurs when ON direction-selective retinal ganglion cells (oDSGCs) detect slow, global image motion on the retina. How oDSGC activity is integrated centrally to generate behavior remains unknown. Here, we discover mechanisms that contribute to motion encoding in vertically tuned oDSGCs and leverage these findings to empirically define signal transformation between retinal output and vertical OKR behavior. We demonstrate that motion encoding in vertically tuned oDSGCs is contrast-sensitive and asymmetric for oDSGC types that prefer opposite directions. These phenomena arise from the interplay between spike threshold nonlinearities and differences in synaptic input weights, including shifts in the balance of excitation and inhibition. In behaving mice, these neurophysiological observations, along with a central subtraction of oDSGC outputs, accurately predict the trajectories of vertical OKR across stimulus conditions. Thus, asymmetric tuning across competing sensory channels can critically shape behavior.
Affiliation(s)
- Scott C Harris
- Department of Ophthalmology, University of California, San Francisco, San Francisco, United States
- Neuroscience Graduate Program, University of California, San Francisco, San Francisco, United States
- Felice A Dunn
- Department of Ophthalmology, University of California, San Francisco, San Francisco, United States
4
The application of noninvasive, restraint-free eye-tracking methods for use with nonhuman primates. Behav Res Methods 2021; 53:1003-1030. [PMID: 32935327] [DOI: 10.3758/s13428-020-01465-6]
Abstract
Over the past 50 years there has been strong interest in applying eye-tracking techniques to study a myriad of questions related to human and nonhuman primate psychological processes. Eye movements and fixations can provide qualitative and quantitative insights into the cognitive processes of nonverbal populations such as nonhuman primates, clarifying the evolutionary, physiological, and representational underpinnings of human cognition. While early attempts at nonhuman primate eye tracking were relatively crude, later, more sophisticated and sensitive techniques required invasive protocols and the use of restraint. In the past decade, technology has advanced to a point where noninvasive eye-tracking techniques, developed for use with human participants, can be applied to nonhuman primates in a restraint-free manner. Here we review the corpus of recent studies (N=32) that take such an approach. Despite the growing interest in eye-tracking research, there is still little consensus on "best practices," both in deploying test protocols and in reporting methods and results. Therefore, we look to advances made in the field of developmental psychology, as well as our own collective experiences using eye trackers with nonhuman primates, to highlight key elements that researchers should consider when designing noninvasive, restraint-free eye-tracking research protocols for use with nonhuman primates. Beyond promoting best practices for research protocols, we also outline an ideal approach for reporting such research and highlight future directions for the field.
5
Ivanchenko D, Rifai K, Hafed ZM, Schaeffel F. A low-cost, high-performance video-based binocular eye tracker for psychophysical research. J Eye Mov Res 2021; 14. [PMID: 34122750] [PMCID: PMC8190563] [DOI: 10.16910/jemr.14.3.3]
Abstract
We describe a high-performance, pupil-based binocular eye tracker that approaches the performance of a well-established commercial system, but at a fraction of the cost. The eye tracker is built from standard hardware components, and its software (written in Visual C++) can be easily implemented. Because of its fast and simple linear calibration scheme, the eye tracker performs best in the central 10 degrees of the visual field. The eye tracker possesses a number of useful features: (1) automated calibration of both eyes simultaneously while subjects fixate four fixation points sequentially on a computer screen, (2) automated real-time continuous analysis of measurement noise, (3) automated blink detection, and (4) real-time analysis of pupil centration artifacts. This last feature is critical because pupil diameter changes can be erroneously registered by pupil-based trackers as changes in eye position. We evaluated the performance of our system against that of a well-established commercial system using simultaneous measurements in 10 participants. We propose our low-cost eye tracker as a promising resource for studies of binocular eye movements.
6
Winsor AM, Pagoti GF, Daye DJ, Cheries EW, Cave KR, Jakob EM. What gaze direction can tell us about cognitive processes in invertebrates. Biochem Biophys Res Commun 2021; 564:43-54. [PMID: 33413978] [DOI: 10.1016/j.bbrc.2020.12.001]
Abstract
Most visually guided animals shift their gaze using body movements, eye movements, or both to gather information selectively from their environments. Psychological studies of eye movements have advanced our understanding of perceptual and cognitive processes that mediate visual attention in humans and other vertebrates. However, much less is known about how these processes operate in other organisms, particularly invertebrates. We here make the case that studies of invertebrate cognition can benefit by adding precise measures of gaze direction. To accomplish this, we briefly review the human visual attention literature and outline four research themes and several experimental paradigms that could be extended to invertebrates. We briefly review selected studies where the measurement of gaze direction in invertebrates has provided new insights, and we suggest future areas of exploration.
Affiliation(s)
- Alex M Winsor
- Graduate Program in Organismic and Evolutionary Biology, University of Massachusetts Amherst, Amherst, MA, 01003, USA
- Guilherme F Pagoti
- Programa de Pós-Graduação em Zoologia, Instituto de Biociências, Universidade de São Paulo, Rua do Matão, 321, Travessa 14, Cidade Universitária, São Paulo, SP, 05508-090, Brazil
- Daniel J Daye
- Department of Biology, University of Massachusetts Amherst, Amherst, MA, 01003, USA
- Graduate Program in Biological and Environmental Sciences, University of Rhode Island, Kingston, RI, 02881, USA
- Erik W Cheries
- Department of Psychological and Brain Sciences, University of Massachusetts Amherst, Amherst, MA, 01003, USA
- Kyle R Cave
- Department of Psychological and Brain Sciences, University of Massachusetts Amherst, Amherst, MA, 01003, USA
- Elizabeth M Jakob
- Department of Biology, University of Massachusetts Amherst, Amherst, MA, 01003, USA
7
Inception loops discover what excites neurons most using deep predictive models. Nat Neurosci 2019; 22:2060-2065. [PMID: 31686023] [DOI: 10.1038/s41593-019-0517-x]
Abstract
Finding sensory stimuli that drive neurons optimally is central to understanding information processing in the brain. However, optimizing sensory input is difficult due to the predominantly nonlinear nature of sensory processing and the high dimensionality of the input. We developed 'inception loops', a closed-loop experimental paradigm combining in vivo recordings from thousands of neurons with in silico nonlinear response modeling. Our end-to-end trained, deep-learning-based model predicted thousands of neuronal responses to arbitrary, new natural input with high accuracy and was used to synthesize optimal stimuli: most exciting inputs (MEIs). For mouse primary visual cortex (V1), MEIs exhibited complex spatial features that occurred frequently in natural scenes but deviated strikingly from the common notion that Gabor-like stimuli are optimal for V1. When presented back to the same neurons in vivo, MEIs drove responses significantly better than control stimuli. Inception loops represent a widely applicable technique for dissecting the neural mechanisms of sensation.
8
Puri I, Cox DD. A System for Accurate Tracking and Video Recordings of Rodent Eye Movements using Convolutional Neural Networks for Biomedical Image Segmentation. Annu Int Conf IEEE Eng Med Biol Soc 2018:3590-3593. [PMID: 30441154] [DOI: 10.1109/embc.2018.8513072]
Abstract
Research in neuroscience and vision science relies heavily on careful measurements of animal subjects' gaze direction. Rodents are the most widely studied animal subjects for such research because of their economic advantage and hardiness. Recently, video-based eye trackers that use image-processing techniques have become a popular option for gaze tracking because they are easy to use and completely noninvasive. Although significant progress has been made in improving the accuracy and robustness of eye-tracking algorithms, almost all of the techniques have focused on human eyes and do not account for the unique characteristics of rodent eye images, e.g., variability in eye parameters, abundance of surrounding hair, and their small size. To overcome these challenges, this work presents a flexible, robust, and highly accurate model for pupil and corneal reflection identification in rodent gaze determination that can be incrementally trained to account for the variability in eye parameters encountered in the field. To the best of our knowledge, this is the first paper to demonstrate a highly accurate and practical convolutional neural network architecture, based on biomedical image segmentation, for pupil and corneal reflection identification in eye images. This new method, in conjunction with our automated infrared video-based eye-recording system, offers state-of-the-art eye-tracking technology for neuroscience and vision science research in rodents.
9
Vanzella W, Grion N, Bertolini D, Perissinotto A, Gigante M, Zoccolan D. A passive, camera-based head-tracking system for real-time, three-dimensional estimation of head position and orientation in rodents. J Neurophysiol 2019; 122:2220-2242. [PMID: 31553687] [DOI: 10.1152/jn.00301.2019]
Abstract
Tracking head position and orientation in small mammals is crucial for many applications in behavioral neurophysiology, from the study of spatial navigation to the investigation of active sensing and perceptual representations. Many approaches to head tracking exist, but most of them estimate only the 2D coordinates of the head over the plane where the animal navigates. Full reconstruction of the pose of the head in 3D is much more challenging and has been achieved in only a handful of studies, which employed headsets made of multiple LEDs or inertial units. However, these assemblies are rather bulky and need to be powered to operate, which prevents their application in wireless experiments and in the small enclosures often used in perceptual studies. Here we propose an alternative approach, based on passively imaging a lightweight, compact, 3D structure painted with a pattern of black dots over a white background. By applying a cascade of feature-extraction algorithms that progressively refine the detection of the dots and reconstruct their geometry, we developed a tracking method that is highly precise and accurate, as assessed through a battery of validation measurements. We show that this method can be used to study how a rat samples sensory stimuli during a perceptual discrimination task and how a hippocampal place cell represents head position over extremely small spatial scales. Given its minimal encumbrance and wireless nature, our method could be ideal for high-throughput applications, where tens of animals need to be simultaneously and continuously tracked.

NEW & NOTEWORTHY Head tracking is crucial in many behavioral neurophysiology studies. Yet reconstruction of the head's pose in 3D is challenging and typically requires implanting bulky, electrically powered headsets that prevent wireless experiments and are hard to employ in operant boxes. Here we propose an alternative approach, based on passively imaging a compact 3D dot pattern that, once implanted over the head of a rodent, allows estimating the pose of its head with high precision and accuracy.
Affiliation(s)
- Walter Vanzella
- Visual Neuroscience Laboratory, International School for Advanced Studies (SISSA), Trieste, Italy
- Glance Vision Technologies, Trieste, Italy
- Natalia Grion
- Visual Neuroscience Laboratory, International School for Advanced Studies (SISSA), Trieste, Italy
- Daniele Bertolini
- Visual Neuroscience Laboratory, International School for Advanced Studies (SISSA), Trieste, Italy
- Andrea Perissinotto
- Visual Neuroscience Laboratory, International School for Advanced Studies (SISSA), Trieste, Italy
- Glance Vision Technologies, Trieste, Italy
- Marco Gigante
- Mechatronics Lab, International School for Advanced Studies (SISSA), Trieste, Italy
- Davide Zoccolan
- Visual Neuroscience Laboratory, International School for Advanced Studies (SISSA), Trieste, Italy
10
Visual Motion and Form Integration in the Behaving Ferret. eNeuro 2019; 6:ENEURO.0228-19.2019. [PMID: 31371456] [PMCID: PMC6709227] [DOI: 10.1523/eneuro.0228-19.2019]
Abstract
Ferrets have become a standard animal model for the development of early visual stages. Less is known about higher-level vision in ferrets, both during development and in adulthood. Here, as a step towards establishing higher-level vision research in ferrets, we used behavioral experiments to test the motion and form integration capacity of adult ferrets. Motion integration was assessed by training ferrets to discriminate random dot kinematograms (RDK) based on their direction. Task difficulty was varied systematically by changing RDK coherence levels, which allowed the measurement of motion integration thresholds. Form integration was measured analogously by training ferrets to discriminate linear Glass patterns of varying coherence levels based on their orientation. In all experiments, ferrets proved to be good psychophysical subjects that performed tasks reliably. Crucially, the behavioral data showed clear evidence of perceptual motion and form integration. In the monkey, motion and form integration are usually associated with processes occurring in higher-level visual areas. In a second set of experiments, we therefore tested whether PSS, a higher-level motion area in the ferret, could similarly support motion integration behavior in this species. To this end, we measured responses of PSS neurons to RDK of different coherence levels. Indeed, neurometric functions for PSS were in good agreement with the behaviorally derived psychometric functions. In conclusion, our experiments demonstrate that ferrets are well suited for higher-level vision research.
11
Demin KA, Sysoev M, Chernysh MV, Savva AK, Koshiba M, Wappler-Guzzetta EA, Song C, De Abreu MS, Leonard B, Parker MO, Harvey BH, Tian L, Vasar E, Strekalova T, Amstislavskaya TG, Volgin AD, Alpyshov ET, Wang D, Kalueff AV. Animal models of major depressive disorder and the implications for drug discovery and development. Expert Opin Drug Discov 2019; 14:365-378. [PMID: 30793996] [DOI: 10.1080/17460441.2019.1575360]
Abstract
Introduction: Depression is a highly debilitating psychiatric disorder that affects people worldwide and causes severe disability and suicide. Depression pathogenesis remains poorly understood, and the disorder is often treatment-resistant and recurrent, necessitating the development of novel therapies, models and concepts in this field. Areas covered: Animal models are indispensable for translational biological psychiatry, and markedly advance the study of depression. Novel approaches continuously emerge that may help untangle the disorder's heterogeneity and the unclear categories of disease classification systems. Some of these approaches include widening the spectrum of model species used for translational research, using a broader range of test paradigms, exploring new pathogenic pathways and biomarkers, and focusing more closely on processes beyond neural cells (e.g. glial, inflammatory and metabolic deficits). Expert opinion: Dividing the core symptoms into easily translatable, evolutionarily conserved phenotypes is an effective way to reevaluate current depression modeling. Conceptually novel approaches based on the endophenotype paradigm, cross-species trait genetics and the 'domain interplay concept', as well as the use of a wider spectrum of model organisms and target systems, will enhance experimental modeling of depression and antidepressant drug discovery.
Affiliation(s)
- Konstantin A Demin
- Institute of Experimental Medicine, Almazov National Medical Research Centre, St. Petersburg, Russia
- Institute of Translational Biomedicine, St. Petersburg State University, St. Petersburg, Russia
- Maxim Sysoev
- Laboratory of Preclinical Bioscreening, Russian Research Center for Radiology and Surgical Technologies, St. Petersburg, Russia
- Institute of Experimental Medicine, St. Petersburg, Russia
- Maria V Chernysh
- Institute of Translational Biomedicine, St. Petersburg State University, St. Petersburg, Russia
- Anna K Savva
- Faculty of Biology, St. Petersburg State University, St. Petersburg, Russia
- Cai Song
- Research Institute of Marine Drugs and Nutrition, Guangdong Ocean University, Zhanjiang, China
- Marine Medicine Development Center, Shenzhen Institute, Guangdong Ocean University, Shenzhen, China
- Murilo S De Abreu
- Bioscience Institute, University of Passo Fundo (UPF), Passo Fundo, Brazil
- Matthew O Parker
- Brain and Behaviour Lab, School of Pharmacy and Biomedical Science, University of Portsmouth, Portsmouth, UK
- Brian H Harvey
- Center of Excellence for Pharmaceutical Sciences, Division of Pharmacology, School of Pharmacy, North-West University, Potchefstroom, South Africa
- Li Tian
- Institute of Biomedicine and Translational Medicine, University of Tartu, Tartu, Estonia
- Eero Vasar
- Institute of Biomedicine and Translational Medicine, University of Tartu, Tartu, Estonia
- Tatyana Strekalova
- Laboratory of Psychiatric Neurobiology, Institute of Molecular Medicine, and Department of Normal Physiology, Sechenov First Moscow State Medical University, Moscow, Russia
- Laboratory of Cognitive Dysfunctions, Institute of General Pathology and Pathophysiology, Moscow, Russia
- Department of Neuroscience, Maastricht University, Maastricht, The Netherlands
- Andrey D Volgin
- The International Zebrafish Neuroscience Research Consortium (ZNRC), Slidell, LA, USA
- Scientific Research Institute of Physiology and Basic Medicine, Novosibirsk, Russia
- Erik T Alpyshov
- School of Pharmacy, Southwest University, Chongqing, China
- Dongmei Wang
- School of Pharmacy, Southwest University, Chongqing, China
- Allan V Kalueff
- School of Pharmacy, Southwest University, Chongqing, China
- Almazov National Medical Research Centre, St. Petersburg, Russia
- Ural Federal University, Ekaterinburg, Russia
- Granov Russian Research Center of Radiology and Surgical Technologies, St. Petersburg, Russia
- Laboratory of Biological Psychiatry, Institute of Translational Biomedicine, St. Petersburg State University, St. Petersburg, Russia
- Laboratory of Translational Biopsychiatry, Scientific Research Institute of Physiology and Basic Medicine, Novosibirsk, Russia
- ZENEREI Institute, Slidell, LA, USA
- The International Stress and Behavior Society (ISBS), US HQ, New Orleans, LA, USA
12
Voltage-Dependent Membrane Properties Shape the Size But Not the Frequency Content of Spontaneous Voltage Fluctuations in Layer 2/3 Somatosensory Cortex. J Neurosci 2019; 39:2221-2237. [PMID: 30655351] [DOI: 10.1523/jneurosci.1648-18.2019]
Abstract
Under awake and idling conditions, spontaneous intracellular membrane voltage is characterized by large, synchronous, low-frequency fluctuations. Although these properties reflect correlations in synaptic inputs, intrinsic membrane properties often show voltage-dependent changes in membrane resistance and time constant that can amplify and help generate low-frequency voltage fluctuations. The specific contributions of intrinsic and synaptic factors to the generation of spontaneous fluctuations, however, remain poorly understood. Using visually guided intracellular recordings of somatosensory layer 2/3 pyramidal cells and interneurons in awake male and female mice, we measured the spectrum and size of voltage fluctuations and intrinsic cellular properties at different voltages. In both cell types, depolarizing neurons increased the size of voltage fluctuations. Amplitude changes scaled with voltage-dependent changes in membrane input resistance. Because of the small membrane time constants observed in both pyramidal cells and interneuron cell bodies, the low-frequency content of membrane fluctuations reflects correlations in the synaptic current inputs rather than significant filtering associated with membrane capacitance. Further, blocking synaptic inputs minimally altered somatic membrane resistance and time constant values. Overall, these results indicate that spontaneous synaptic inputs generate a low-conductance state in which the amplitude, but not the frequency structure, is influenced by intrinsic membrane properties.

SIGNIFICANCE STATEMENT In the absence of sensory drive, cortical activity in awake animals is associated with self-generated and seemingly random membrane voltage fluctuations characterized by large amplitude and low frequency. In part, these properties reflect correlations in synaptic input. Nonetheless, neurons express voltage-dependent intrinsic properties that can potentially influence the amplitude and frequency of spontaneous activity. Using visually guided intracellular recordings of cortical neurons in awake mice, we measured the voltage dependence of spontaneous voltage fluctuations and intrinsic membrane properties. We show that voltage-dependent changes in membrane resistance amplify synaptic activity, whereas the frequency of voltage fluctuations reflects correlations in synaptic inputs. Last, synaptic activity has a small impact on intrinsic membrane properties in both pyramidal cells and interneurons.
13
Goltstein PM, Meijer GT, Pennartz CM. Conditioning sharpens the spatial representation of rewarded stimuli in mouse primary visual cortex. eLife 2018; 7:e37683. [PMID: 30222107] [PMCID: PMC6141231] [DOI: 10.7554/elife.37683]
Abstract
Reward is often employed as reinforcement in behavioral paradigms, but it is unclear how the visuospatial aspect of a stimulus-reward association affects the cortical representation of visual space. Using a head-fixed paradigm, we conditioned mice to associate the same visual pattern in adjacent retinotopic regions with the availability and absence of reward. Time-lapse intrinsic optical signal imaging under anesthesia showed that conditioning increased the spatial separation of the mesoscale cortical representations of reward-predicting and non-reward-predicting stimuli. Subsequent in vivo two-photon calcium imaging revealed that this improved separation correlated with enhanced population coding for retinotopic location, specifically for the trained orientation and spatially confined to the V1 region where rewarded and non-rewarded stimulus representations bordered. These results are corroborated by conditioning-induced differences in the correlation structure of population activity. Thus, the cortical representation of visual space is sharpened as a consequence of associative stimulus-reward learning, while the overall retinotopic map remains unaltered.
Affiliation(s)
- Pieter M Goltstein
- Center for Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
- Guido T Meijer
- Center for Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
- Cyriel MA Pennartz
- Center for Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, Netherlands; Research Priority Program Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
14
Hong G, Fu TM, Qiao M, Viveros RD, Yang X, Zhou T, Lee JM, Park HG, Sanes JR, Lieber CM. A method for single-neuron chronic recording from the retina in awake mice. Science 2018; 360:1447-1451. [PMID: 29954976] [DOI: 10.1126/science.aas9160]
Abstract
The retina, which processes visual information and sends it to the brain, is an excellent model for studying neural circuitry. It has been probed extensively ex vivo but has been refractory to chronic in vivo electrophysiology. We report a nonsurgical method to achieve chronically stable in vivo recordings from single retinal ganglion cells (RGCs) in awake mice. We developed a noncoaxial intravitreal injection scheme in which injected mesh electronics unrolls inside the eye and conformally coats the highly curved retina without compromising normal eye functions. The method allows 16-channel recordings from multiple types of RGCs with stable responses to visual stimuli for at least 2 weeks, and reveals circadian rhythms in RGC responses over multiple day/night cycles.
Affiliation(s)
- Guosong Hong
- Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA, USA
- Tian-Ming Fu
- Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA, USA
- Mu Qiao
- Center for Brain Science and Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
- Robert D Viveros
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Xiao Yang
- Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA, USA
- Tao Zhou
- Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA, USA
- Jung Min Lee
- Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA, USA; Department of Physics, Korea University, Seoul, Republic of Korea
- Hong-Gyu Park
- Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA, USA; Department of Physics, Korea University, Seoul, Republic of Korea
- Joshua R Sanes
- Center for Brain Science and Department of Molecular and Cellular Biology, Harvard University, Cambridge, MA, USA
- Charles M Lieber
- Department of Chemistry and Chemical Biology, Harvard University, Cambridge, MA, USA; John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
15
Marques T, Summers MT, Fioreze G, Fridman M, Dias RF, Feller MB, Petreanu L. A Role for Mouse Primary Visual Cortex in Motion Perception. Curr Biol 2018; 28:1703-1713.e6. [PMID: 29779878] [PMCID: PMC5988967] [DOI: 10.1016/j.cub.2018.04.012]
Abstract
Visual motion is an ethologically important stimulus throughout the animal kingdom. In primates, motion perception relies on specific higher-order cortical regions. Although mouse primary visual cortex (V1) and higher-order visual areas show direction-selective (DS) responses, their role in motion perception remains unknown. Here, we tested whether V1 is involved in motion perception in mice. We developed a head-fixed discrimination task in which mice must report their perceived direction of motion from random dot kinematograms (RDKs). After training, mice made around 90% correct choices for stimuli with high coherence and performed significantly above chance for 16% coherent RDKs. Accuracy increased with both stimulus duration and visual field coverage of the stimulus, suggesting that mice in this task integrate motion information in time and space. Retinal recordings showed that thalamically projecting On-Off DS ganglion cells display DS responses when stimulated with RDKs. Two-photon calcium imaging revealed that neurons in layer (L) 2/3 of V1 display strong DS tuning in response to this stimulus. Thus, RDKs engage motion-sensitive retinal circuits as well as downstream visual cortical areas. Contralateral V1 activity played a key role in this motion direction discrimination task because its reversible inactivation with muscimol led to a significant reduction in performance. Neurometric-psychometric comparisons showed that an ideal observer could solve the task with the information encoded in DS L2/3 neurons. Motion discrimination of RDKs presents a powerful behavioral tool for dissecting the role of retino-forebrain circuits in motion processing.
Affiliation(s)
- Tiago Marques
- Champalimaud Research, Champalimaud Centre for the Unknown, Lisbon, Portugal
- Mathew T Summers
- Department of Molecular and Cell Biology and Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA
- Gabriela Fioreze
- Champalimaud Research, Champalimaud Centre for the Unknown, Lisbon, Portugal
- Marina Fridman
- Champalimaud Research, Champalimaud Centre for the Unknown, Lisbon, Portugal
- Rodrigo F Dias
- Champalimaud Research, Champalimaud Centre for the Unknown, Lisbon, Portugal
- Marla B Feller
- Department of Molecular and Cell Biology and Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA
- Leopoldo Petreanu
- Champalimaud Research, Champalimaud Centre for the Unknown, Lisbon, Portugal
16
Kaliukhovich DA, Op de Beeck H. Hierarchical stimulus processing in rodent primary and lateral visual cortex as assessed through neuronal selectivity and repetition suppression. J Neurophysiol 2018; 120:926-941. [PMID: 29742022] [DOI: 10.1152/jn.00673.2017]
Abstract
Similar to primates, visual cortex in rodents appears to be organized into two distinct hierarchical streams. However, little is known about how visual information is processed along those streams in rodents. In this study, we examined how repetition suppression and the position and clutter tolerance of neuronal representations evolve along the putative ventral visual stream in rats. To address this question, we recorded multiunit spiking activity in primary visual cortex (V1) and the more downstream visual laterointermediate (LI) area of head-restrained Long-Evans rats. We employed a paradigm reminiscent of the continuous carry-over design used in human neuroimaging. In both areas, stimulus repetition attenuated the early phase of the neuronal response to the repeated stimulus, with this response suppression being greater in area LI. Furthermore, stimulus preferences were more similar across positions (position tolerance) in area LI than in V1, even though the absolute responses in both areas were very sensitive to changes in position. In contrast, the neuronal representations in both areas were equally good at tolerating the presence of limited visual clutter, as modeled by the presentation of a single flank stimulus. When probing tolerance of the neuronal representations with stimulus-specific adaptation, we detected no position tolerance in either examined brain area, whereas, on the contrary, we revealed clutter tolerance in both areas. Overall, our data demonstrate similarities and discrepancies in the processing of visual information along the ventral visual streams of rodents and primates. Moreover, our results stress caution in using neuronal adaptation to probe tolerance of neuronal representations.

NEW & NOTEWORTHY Rodents are emerging as a popular animal model that complements primates for studying higher-level visual functions. Similar to findings in primates, we demonstrate greater repetition suppression and position tolerance of the neuronal representations in the downstream laterointermediate area of Long-Evans rats compared with primary visual cortex. However, we report no difference in the degree of clutter tolerance between the areas. These findings provide additional evidence for hierarchical processing of visual stimuli in rodents.
Affiliation(s)
- Dzmitry A Kaliukhovich
- Laboratory of Biological Psychology, University of Leuven (KU Leuven), Leuven, Belgium
- Hans Op de Beeck
- Laboratory of Biological Psychology, University of Leuven (KU Leuven), Leuven, Belgium
17
Accuracy of Rats in Discriminating Visual Objects Is Explained by the Complexity of Their Perceptual Strategy. Curr Biol 2018; 28:1005-1015.e5. [PMID: 29551414] [PMCID: PMC5887110] [DOI: 10.1016/j.cub.2018.02.037]
Abstract
Despite their growing popularity as models of visual functions, it remains unclear whether rodents are capable of deploying advanced shape-processing strategies when engaged in visual object recognition. In rats, for instance, pattern vision has been reported to range from mere detection of overall object luminance to view-invariant processing of discriminative shape features. Here we sought to clarify how refined object vision is in rodents, and how variable the complexity of their visual processing strategy is across individuals. To this aim, we measured how well rats could discriminate a reference object from 11 distractors, which spanned a spectrum of image-level similarity to the reference. We also presented the animals with random variations of the reference, and processed their responses to these stimuli to derive subject-specific models of rat perceptual choices. Our models successfully captured the highly variable discrimination performance observed across subjects and object conditions. In particular, they revealed that the animals that succeeded with the most challenging distractors were those that integrated the wider variety of discriminative features into their perceptual strategies. Critically, these strategies were largely preserved when the rats were required to discriminate outlined and scaled versions of the stimuli, thus showing that rat object vision can be characterized as a transformation-tolerant, feature-based filtering process. Overall, these findings indicate that rats are capable of advanced processing of shape information, and point to the rodents as powerful models for investigating the neuronal underpinnings of visual object recognition and other high-level visual functions. 
- The ability of rats to discriminate visual objects varies greatly across subjects
- Such variability is accounted for by the diversity of rat perceptual strategies
- Animals building richer perceptual templates achieve higher accuracy
- Perceptual strategies remain largely invariant across object transformations
18
Titchener SA, Shivdasani MN, Fallon JB, Petoe MA. Gaze Compensation as a Technique for Improving Hand-Eye Coordination in Prosthetic Vision. Transl Vis Sci Technol 2018; 7:2. [PMID: 29321945] [PMCID: PMC5759363] [DOI: 10.1167/tvst.7.1.2]
Abstract
Purpose: Shifting the region-of-interest within the input image to compensate for gaze shifts ("gaze compensation") may improve hand-eye coordination in visual prostheses that incorporate an external camera. The present study investigated the effects of eye movement on hand-eye coordination under simulated prosthetic vision (SPV), and measured the coordination benefits of gaze compensation.
Methods: Seven healthy-sighted subjects performed a target localization-pointing task under SPV. Three conditions were tested, modeling: retinally stabilized phosphenes (uncompensated); gaze compensation; and no phosphene movement (center-fixed). The error in pointing was quantified for each condition.
Results: Gaze compensation yielded a significantly smaller pointing error than the uncompensated condition for six of seven subjects, and a similar or smaller pointing error than the center-fixed condition for all subjects (two-way ANOVA, P < 0.05). Pointing error eccentricity and gaze eccentricity were moderately correlated in the uncompensated condition (azimuth: R2 = 0.47; elevation: R2 = 0.51) but not in the gaze-compensated condition (azimuth: R2 = 0.01; elevation: R2 = 0.00). Increased variability in gaze at the time of pointing was correlated with a greater reduction in pointing error in the center-fixed condition compared with the uncompensated condition (R2 = 0.64).
Conclusions: Eccentric eye position impedes hand-eye coordination in SPV. While limiting eye eccentricity in uncompensated viewing can reduce errors, gaze compensation is effective in improving coordination for subjects unable to maintain fixation.
Translational Relevance: The results highlight the present necessity for suppressing eye movement and support the use of gaze compensation to improve hand-eye coordination and localization performance in prosthetic vision.
Affiliation(s)
- Samuel A Titchener
- The Bionics Institute of Australia, East Melbourne, Australia; Department of Medical Bionics, University of Melbourne, Parkville, Australia
- Mohit N Shivdasani
- The Bionics Institute of Australia, East Melbourne, Australia; Department of Medical Bionics, University of Melbourne, Parkville, Australia
- James B Fallon
- The Bionics Institute of Australia, East Melbourne, Australia; Department of Medical Bionics, University of Melbourne, Parkville, Australia
- Matthew A Petoe
- The Bionics Institute of Australia, East Melbourne, Australia; Department of Medical Bionics, University of Melbourne, Parkville, Australia
19
Juavinett AL, Erlich JC, Churchland AK. Decision-making behaviors: weighing ethology, complexity, and sensorimotor compatibility. Curr Opin Neurobiol 2017; 49:42-50. [PMID: 29179005] [DOI: 10.1016/j.conb.2017.11.001]
Abstract
Rodent decision-making research aims to uncover the neural circuitry underlying the ability to evaluate alternatives and select appropriate actions. Designing behavioral paradigms that provide a solid foundation to ask questions about decision-making computations and mechanisms is a difficult and often underestimated challenge. Here, we propose three dimensions on which we can consider rodent decision-making tasks: ethological validity, task complexity, and stimulus-response compatibility. We review recent research through this lens, and provide practical guidance for researchers in the decision-making field.
Affiliation(s)
- Jeffrey C Erlich
- NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, China
- Anne K Churchland
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, United States
20
Abstract
Eye movements provide insights about a wide range of brain functions, from sensorimotor integration to cognition; hence, the measurement of eye movements is an important tool in neuroscience research. We describe a method, based on magnetic sensing, for measuring eye movements in head-fixed and freely moving mice. A small magnet was surgically implanted on the eye, and changes in the magnet angle as the eye rotated were detected by a magnetic field sensor. Systematic testing demonstrated measurement of eye position with a high resolution of <0.1°. Magnetic eye tracking offers several advantages over the well-established eye coil and video-oculography methods. Most notably, it provides the first method for reliable, high-resolution measurement of eye movements in freely moving mice, revealing increased eye movements and altered binocular coordination compared to head-fixed mice. Overall, magnetic eye tracking provides a lightweight, inexpensive, easily implemented, and high-resolution method suitable for a wide range of applications.
Affiliation(s)
- Hannah L Payne
- Department of Neurobiology, Stanford University, Stanford, United States
- Jennifer L Raymond
- Department of Neurobiology, Stanford University, Stanford, United States
21
Audiovisual Modulation in Mouse Primary Visual Cortex Depends on Cross-Modal Stimulus Configuration and Congruency. J Neurosci 2017; 37:8783-8796. [PMID: 28821672] [DOI: 10.1523/jneurosci.0468-17.2017]
Abstract
The sensory neocortex is a highly connected associative network that integrates information from multiple senses, even at the level of the primary sensory areas. Although a growing body of empirical evidence supports this view, the neural mechanisms of cross-modal integration in primary sensory areas, such as the primary visual cortex (V1), are still largely unknown. Using two-photon calcium imaging in awake mice, we show that the encoding of audiovisual stimuli in V1 neuronal populations is highly dependent on the features of the stimulus constituents. When the visual and auditory stimulus features were modulated at the same rate (i.e., temporally congruent), neurons responded with either an enhancement or suppression compared with unisensory visual stimuli, and their prevalence was balanced. Temporally incongruent tones or white-noise bursts included in audiovisual stimulus pairs resulted in predominant response suppression across the neuronal population. Visual contrast did not influence multisensory processing when the audiovisual stimulus pairs were congruent; however, when white-noise bursts were used, neurons generally showed response suppression when the visual stimulus contrast was high, whereas this effect was absent when the visual contrast was low. Furthermore, a small fraction of V1 neurons, predominantly those located near the lateral border of V1, responded to sound alone. These results show that V1 is involved in the encoding of cross-modal interactions in a more versatile way than previously thought.

SIGNIFICANCE STATEMENT The neural substrate of cross-modal integration is not limited to specialized cortical association areas but extends to primary sensory areas. Using two-photon imaging of large groups of neurons, we show that multisensory modulation of V1 populations is strongly determined by the individual and shared features of cross-modal stimulus constituents, such as contrast, frequency, congruency, and temporal structure. Congruent audiovisual stimulation resulted in a balanced pattern of response enhancement and suppression compared with unisensory visual stimuli, whereas incongruent or dissimilar stimuli at full contrast gave rise to a population dominated by response-suppressing neurons. Our results indicate that V1 dynamically integrates nonvisual sources of information while still attributing most of its resources to coding visual information.
22
Kretschmer F, Tariq M, Chatila W, Wu B, Badea TC. Comparison of optomotor and optokinetic reflexes in mice. J Neurophysiol 2017; 118:300-316. [PMID: 28424291] [DOI: 10.1152/jn.00055.2017]
Abstract
During animal locomotion or position adjustments, the visual system uses image stabilization reflexes to compensate for global shifts in the visual scene. These reflexes elicit compensatory head movements (optomotor response, OMR) in unrestrained animals or compensatory eye movements (optokinetic response, OKR) in head-fixed or unrestrained animals exposed to globally rotating striped patterns. In mice, OMR are relatively easy to observe and find broad use in the rapid evaluation of visual function. OKR determinations are more involved experimentally but yield more stereotypical, easily quantifiable results. The relative contributions of head and eye movements to image stabilization in mice have not been investigated. We are using newly developed software and apparatus to accurately quantitate mouse head movements during OMR, quantitate eye movements during OKR, and determine eye movements in freely behaving mice. We provide the first direct comparison of OMR and OKR gains (head or eye velocity/stimulus velocity) and find that the two reflexes have comparable dependencies on stimulus luminance, contrast, spatial frequency, and velocity. OMR and OKR are similarly affected in genetically modified mice with defects in retinal ganglion cells (RGC) compared with wild-type, suggesting they are driven by the same sensory input (RGC type). OKR eye movements have much higher gains than the OMR head movements, but neither can fully compensate global visual shifts. However, combined eye and head movements can be detected in unrestrained mice performing OMR, suggesting they can cooperate to achieve image stabilization, as previously described for other species.

NEW & NOTEWORTHY We provide the first quantitation of head gain during the optomotor response in mice and show that optomotor and optokinetic responses have similar psychometric curves. Head gains are far smaller than eye gains.
Unrestrained mice combine head and eye movements to respond to visual stimuli, and both monocular and binocular fields are used during optokinetic responses. Mouse OMR and OKR movements are heterogeneous under optimal and suboptimal stimulation and are affected in mice lacking ON direction-selective retinal ganglion cells.
Affiliation(s)
- Friedrich Kretschmer
- Retinal Circuit Development & Genetics Unit, Neurobiology Neurodegeneration & Repair Laboratory, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Momina Tariq
- Retinal Circuit Development & Genetics Unit, Neurobiology Neurodegeneration & Repair Laboratory, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Walid Chatila
- Retinal Circuit Development & Genetics Unit, Neurobiology Neurodegeneration & Repair Laboratory, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Beverly Wu
- Retinal Circuit Development & Genetics Unit, Neurobiology Neurodegeneration & Repair Laboratory, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Tudor Constantin Badea
- Retinal Circuit Development & Genetics Unit, Neurobiology Neurodegeneration & Repair Laboratory, National Eye Institute, National Institutes of Health, Bethesda, Maryland
23
Kretschmer F, Sajgo S, Kretschmer V, Badea TC. A system to measure the Optokinetic and Optomotor response in mice. J Neurosci Methods 2015; 256:91-105. [PMID: 26279344] [DOI: 10.1016/j.jneumeth.2015.08.007]
Abstract
BACKGROUND: Visually evoked compensatory head movements (optomotor responses) or eye movements (optokinetic responses) are extensively used in experimental mouse models for developmental defects, pathological conditions, and testing the efficacy of therapeutic manipulations.
NEW METHOD: We present an automated system to measure optomotor and optokinetic responses under identical stimulation conditions, enabling a direct comparison of the two reflexes. A semi-automated calibration procedure and a commercial eye tracker are used to record angular eye velocity in the restrained animal. Novel video tracking algorithms determine the location of the mouse head in real time and allow repositioning of the stimulus relative to the mouse head.
RESULTS: Optomotor and optokinetic responses yield comparable results with respect to determining visual acuity in mice. Our new head tracking algorithms enable a far more accurate analysis of head angle and reveal individual head retractions, analogous to the saccadic eye movements observed during optokinetic nystagmus.
COMPARISON WITH EXISTING METHODS: To our knowledge, this is the first apparatus allowing the direct comparison of optomotor and optokinetic responses in mice. Our tracking algorithms, which allow an objective determination of head movements, are a significant advance over existing systems, which rely on subjective human observation. The increased accuracy of the novel algorithms improves the robustness of automated optomotor response determinations and reveals novel aspects of this reflex.
CONCLUSIONS: We provide blueprints for inexpensive hardware, release open-source software for our system, and describe an accurate and accessible method for optomotor and optokinetic response determination in mice.
Affiliation(s)
- Friedrich Kretschmer
- Retinal Circuit Development & Genetics Unit, Neurobiology Neurodegeneration & Repair Laboratory, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Szilard Sajgo
- Retinal Circuit Development & Genetics Unit, Neurobiology Neurodegeneration & Repair Laboratory, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Viola Kretschmer
- Retinal Circuit Development & Genetics Unit, Neurobiology Neurodegeneration & Repair Laboratory, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Tudor C Badea
- Retinal Circuit Development & Genetics Unit, Neurobiology Neurodegeneration & Repair Laboratory, National Eye Institute, National Institutes of Health, Bethesda, MD, USA
24
Rosselli FB, Alemi A, Ansuini A, Zoccolan D. Object similarity affects the perceptual strategy underlying invariant visual object recognition in rats. Front Neural Circuits 2015; 9:10. [PMID: 25814936] [PMCID: PMC4357263] [DOI: 10.3389/fncir.2015.00010]
Abstract
In recent years, a number of studies have explored the possible use of rats as models of high-level visual functions. One central question at the root of such an investigation is to understand whether rat object vision relies on the processing of visual shape features or, rather, on lower-order image properties (e.g., overall brightness). In a recent study, we have shown that rats are capable of extracting multiple features of an object that are diagnostic of its identity, at least when those features are, structure-wise, distinct enough to be parsed by the rat visual system. In the present study, we have assessed the impact of object structure on rat perceptual strategy. We trained rats to discriminate between two structurally similar objects, and compared their recognition strategies with those reported in our previous study. We found that, under conditions of lower stimulus discriminability, rat visual discrimination strategy becomes more view-dependent and subject-dependent. Rats were still able to recognize the target objects, in a way that was largely tolerant (i.e., invariant) to object transformation; however, the larger structural and pixel-wise similarity affected the way objects were processed. Compared to the findings of our previous study, the patterns of diagnostic features were: (i) smaller and more scattered; (ii) only partially preserved across object views; and (iii) only partially reproducible across rats. On the other hand, rats were still found to adopt a multi-featural processing strategy and to make use of part of the optimal discriminatory information afforded by the two objects. Our findings suggest that, as in humans, rat invariant recognition can flexibly rely on either view-invariant representations of distinctive object features or view-specific object representations, acquired through learning.
Affiliation(s)
- Federica B Rosselli
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy
- Alireza Alemi
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy; Department of Applied Science and Technology, Center for Computational Sciences, Politecnico di Torino, Torino, Italy; Human Genetics Foundation, Torino, Italy
- Alessio Ansuini
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy
- Davide Zoccolan
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy
25
Goltstein PM, Montijn JS, Pennartz CMA. Effects of isoflurane anesthesia on ensemble patterns of Ca2+ activity in mouse V1: reduced direction selectivity independent of increased correlations in cellular activity. PLoS One 2015; 10:e0118277. [PMID: 25706867] [PMCID: PMC4338011] [DOI: 10.1371/journal.pone.0118277]
Abstract
Anesthesia affects brain activity at the molecular, neuronal and network level, but it is not well-understood how tuning properties of sensory neurons and network connectivity change under its influence. Using in vivo two-photon calcium imaging we matched neuron identity across episodes of wakefulness and anesthesia in the same mouse and recorded spontaneous and visually evoked activity patterns of neuronal ensembles in these two states. Correlations in spontaneous patterns of calcium activity between pairs of neurons were increased under anesthesia. While orientation selectivity remained unaffected by anesthesia, this treatment reduced direction selectivity, which was attributable to an increased response to the null-direction. As compared to anesthesia, populations of V1 neurons coded more mutual information on opposite stimulus directions during wakefulness, whereas information on stimulus orientation differences was lower. Increases in correlations of calcium activity during visual stimulation were correlated with poorer population coding, which raised the hypothesis that the anesthesia-induced increase in correlations may be causal to degrading directional coding. Visual stimulation under anesthesia, however, decorrelated ongoing activity patterns to a level comparable to wakefulness. Because visual stimulation thus appears to 'break' the strength of pairwise correlations normally found in spontaneous activity under anesthesia, the changes in correlational structure cannot explain the awake-anesthesia difference in direction coding. The population-wide decrease in coding for stimulus direction thus occurs independently of anesthesia-induced increments in correlations of spontaneous activity.
Affiliation(s)
- Pieter M. Goltstein
- Center for Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, The Netherlands
- Research Priority Program Brain and Cognition, University of Amsterdam, Amsterdam, The Netherlands
- Jorrit S. Montijn
- Center for Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, The Netherlands
- Research Priority Program Brain and Cognition, University of Amsterdam, Amsterdam, The Netherlands
- Cyriel M. A. Pennartz
- Center for Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, The Netherlands
- Research Priority Program Brain and Cognition, University of Amsterdam, Amsterdam, The Netherlands
26
Zoccolan D. Invariant visual object recognition and shape processing in rats. Behav Brain Res 2015; 285:10-33. [PMID: 25561421] [PMCID: PMC4383365] [DOI: 10.1016/j.bbr.2014.12.053]
Abstract
Invariant visual object recognition is the ability to recognize visual objects despite the vastly different images that each object can project onto the retina during natural vision, depending on its position and size within the visual field, its orientation relative to the viewer, etc. Achieving invariant recognition represents such a formidable computational challenge that it is often assumed to be a unique hallmark of primate vision. Historically, this has limited the invasive investigation of its neuronal underpinnings to monkey studies, in spite of the narrow range of experimental approaches that these animal models allow. Meanwhile, rodents have been largely neglected as models of object vision, because of the widespread belief that they are incapable of advanced visual processing. However, the powerful array of experimental tools that have been developed to dissect neuronal circuits in rodents has made these species very attractive to vision scientists too, promoting a new tide of studies that have started to systematically explore visual functions in rats and mice. Rats, in particular, have been the subjects of several behavioral studies, aimed at assessing how advanced object recognition and shape processing is in this species. Here, I review these recent investigations, as well as earlier studies of rat pattern vision, to provide a historical overview and a critical summary of the status of the knowledge about rat object vision. The picture emerging from this survey is very encouraging with regard to the possibility of using rats as complementary models to monkeys in the study of higher-level vision.
Affiliation(s)
- Davide Zoccolan
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), 34136 Trieste, Italy
27
Vermaercke B, Gerich FJ, Ytebrouck E, Arckens L, Op de Beeck HP, Van den Bergh G. Functional specialization in rat occipital and temporal visual cortex. J Neurophysiol 2014; 112:1963-83. [PMID: 24990566] [DOI: 10.1152/jn.00737.2013]
Abstract
Recent studies have revealed a surprising degree of functional specialization in rodent visual cortex. Anatomically, suggestions have been made about the existence of hierarchical pathways with similarities to the ventral and dorsal pathways in primates. Here we aimed to characterize some important functional properties in part of the supposed "ventral" pathway in rats. We investigated the functional properties along a progression of five visual areas in awake rats, from primary visual cortex (V1) over lateromedial (LM), latero-intermediate (LI), and laterolateral (LL) areas up to the newly found lateral occipito-temporal cortex (TO). Response latency increased >20 ms from areas V1/LM/LI to areas LL and TO. Orientation and direction selectivity for the used grating patterns increased gradually from V1 to TO. Overall responsiveness and selectivity to shape stimuli decreased from V1 to TO and was increasingly dependent upon shape motion. Neural similarity for shapes could be accounted for by a simple computational model in V1, but not in the other areas. Across areas, we find a gradual change in which stimulus pairs are most discriminable. Finally, tolerance to position changes increased toward TO. These findings provide unique information about possible commonalities and differences between rodents and primates in hierarchical cortical processing.
Affiliation(s)
- Ben Vermaercke
- Laboratory of Biological Psychology, KU Leuven, Leuven, Belgium
- Florian J Gerich
- Laboratory of Biological Psychology, KU Leuven, Leuven, Belgium
- Ellen Ytebrouck
- Laboratory of Neuroplasticity and Neuroproteomics, KU Leuven, Leuven, Belgium
- Lutgarde Arckens
- Laboratory of Neuroplasticity and Neuroproteomics, KU Leuven, Leuven, Belgium
28
Abstract
The ability to recognize objects despite substantial variation in their appearance (e.g., because of position or size changes) represents such a formidable computational feat that it is widely assumed to be unique to primates. Such an assumption has restricted the investigation of its neuronal underpinnings to primate studies, which allow only a limited range of experimental approaches. In recent years, the increasingly powerful array of optical and molecular tools that has become available in rodents has spurred a renewed interest for rodent models of visual functions. However, evidence of primate-like visual object processing in rodents is still very limited and controversial. Here we show that rats are capable of an advanced recognition strategy, which relies on extracting the most informative object features across the variety of viewing conditions the animals may face. Rat visual strategy was uncovered by applying an image masking method that revealed the features used by the animals to discriminate two objects across a range of sizes, positions, in-depth, and in-plane rotations. Noticeably, rat recognition relied on a combination of multiple features that were mostly preserved across the transformations the objects underwent, and largely overlapped with the features that a simulated ideal observer deemed optimal to accomplish the discrimination task. These results indicate that rats are able to process and efficiently use shape information, in a way that is largely tolerant to variation in object appearance. This suggests that their visual system may serve as a powerful model to study the neuronal substrates of object recognition.
29
Andermann ML, Kerlin AM, Roumis DK, Glickfeld LL, Reid RC. Functional specialization of mouse higher visual cortical areas. Neuron 2011; 72:1025-39. [PMID: 22196337] [DOI: 10.1016/j.neuron.2011.11.013]
Abstract
The mouse is emerging as an important model for understanding how sensory neocortex extracts cues to guide behavior, yet little is known about how these cues are processed beyond primary cortical areas. Here, we used two-photon calcium imaging in awake mice to compare visual responses in primary visual cortex (V1) and in two downstream target areas, AL and PM. Neighboring V1 neurons had diverse stimulus preferences spanning five octaves in spatial and temporal frequency. By contrast, AL and PM neurons responded best to distinct ranges of stimulus parameters. Most strikingly, AL neurons preferred fast-moving stimuli while PM neurons preferred slow-moving stimuli. By contrast, neurons in V1, AL, and PM demonstrated similar selectivity for stimulus orientation but not for stimulus direction. Based on these findings, we predict that area AL helps guide behaviors involving fast-moving stimuli (e.g., optic flow), while area PM helps guide behaviors involving slow-moving objects.
Affiliation(s)
- Mark L Andermann
- Department of Neurobiology, Harvard Medical School, Goldenson 243, 220 Longwood Avenue, Boston, MA 02115, USA